performance issue with opendev gitea interface.

Clark Boylan cboylan at
Thu Jun 27 22:40:11 UTC 2019

On Thu, Jun 27, 2019, at 7:22 AM, Jim Rollenhagen wrote:
> On Thu, Jun 27, 2019 at 9:49 AM Sean Mooney <smooney at> wrote:
> > i have started this as a separate thread from the Github organization management
> >  thread, but it has been in the back of my mind as we are considering not syncing
> >  to github going forward. for larger projects like nova the gitea web interface
> >  performs quite poorly.
> AFAIK we have never discussed dropping syncing for the openstack 
> namespace,
> it just may be implemented differently. We did drop mirroring for 
> unofficial projects,
> but they can set it up themselves if they want it.

Correct. We've transitioned to a world where everything isn't in the openstack/ namespace just to make github mirroring easy. Instead we've built flexible git mirror tooling that should allow you to mirror git repos to arbitrary locations on the Internet, including GitHub. As far as I know OpenStack intends to keep mirroring to GitHub.

> > 
> >  in both firefox and chrome i am seeing ~15 seconds before the first response
> >  for nova.
> > 
> >  os-vif or other smaller projects seem to respond ok, but for nova it makes
> >  navigating the code or linking code to others via opendev quite hard.
> > 
> >  i brought this up on the infra irc a few weeks ago and asked if gitea had any
> >  kind of caching; the initial response was "we do not believe so".

Reading the doc you link below I believe we are using the default cache option of "memory".
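For reference, that knob lives in the [cache] section of gitea's app.ini. A sketch of what tuning it could look like, based on the gitea config cheat sheet (the values here are illustrative, not what our deployment actually sets):

```ini
; app.ini -- cache section (gitea config cheat sheet)
[cache]
ENABLED  = true
; "memory" is the default adapter; redis and memcache are also supported
ADAPTER  = memory
; GC interval for the memory adapter, in seconds (default 60)
INTERVAL = 60
; how long cached items live (default 16h)
ITEM_TTL = 16h
```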

> > 
> >  before archiving or stopping syncing to github i was wondering if we could
> >  explore options to improve the performance. if it is not currently fronted
> >  by a cdn, perhaps that would help.
> > 
> >  Similarly, it may be possible to change either the cache or database parameters
> >  to improve performance. I really don't know how gitea has been deployed, but at
> >  present the web interface is not usable for nova in a responsive manner, so i
> >  have continued to use github when linking code to others. it would be nice to
> >  be able to use opendev instead.

We should definitely do what we can to improve the performance of these larger repos. Gitea is a very receptive upstream so if we can identify the issue and/or fix it I'm sure they would be happy to help with that.

The way we have deployed Gitea is 8 backend nodes behind an haproxy. Currently Gitea does not operate in a shared state manner so each backend operates independently with Gerrit replicating to each of them. Due to the lack of shared state here the haproxy load balancer balances you to a specific backend based on your source address (without this we observed git clients being unhappy on subsequent requests if objects weren't packed identically).
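For concreteness, source-address balancing in haproxy comes down to a single directive. A minimal sketch of that part of the config (illustrative only; the addresses are placeholders and the real config is generated by the ansible role [0]):

```
# haproxy.cfg (sketch) -- pin each client to one gitea backend
backend gitea
    # hash the client source address so a given client always lands on
    # the same backend, since the backends don't share state
    balance source
    server gitea1 192.0.2.1:3000 check
    server gitea2 192.0.2.2:3000 check
    # ... one server line per backend, 8 in total
```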

The haproxy and gitea deployments are all done via ansible driving docker(-compose). The ansible roles are here [0][1], but probably the most interesting bit is the docker-compose [2] as you should be able to take that and run docker-compose locally to deploy a local gitea install for debugging.
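If you just want a throwaway local gitea to poke at, something close to the upstream docker example is enough. This is a minimal sketch based on gitea's own installation docs, not the opendev compose file [2]:

```yaml
# docker-compose.yaml -- minimal local gitea for debugging
version: "2"
services:
  server:
    image: gitea/gitea:latest
    restart: always
    ports:
      - "3000:3000"   # web UI
      - "222:22"      # ssh
    volumes:
      - ./gitea-data:/data
```

Then `docker-compose up -d`, browse to http://localhost:3000, and push a large repo like nova into it to try to reproduce the slowness.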

It might also be useful to know that the gitea backends can be addressed individually by replacing the X with values 1-8 (inclusive). It's possible that some backends perform better than others. Cacti data [3] may also be useful here. I notice that it doesn't show significant memory use by the gitea hosts [4]. This may mean we aren't caching in memory aggressively enough, or that gitea just doesn't cache what we need it to cache.

I expect we'll eventually get to digging into this ourselves, but help is much appreciated (other items like replacing gitea host with corrupted disk have been bigger priorities).

I do wonder if our replication of refs/changes and refs/notes has impacted gitea in a bad way. I don't have any data to support that yet other than it seemed gitea was quicker with our big repos in the past and that is the only major change we've made to gitea. We have upgraded gitea a few times so it may also just be a regression in the service.


Hope this helps,
