[OpenStack-Infra] Report from Gerrit User Summit
Clark Boylan
cboylan at sapwetik.org
Wed Sep 4 17:07:00 UTC 2019
On Wed, Sep 4, 2019, at 9:53 AM, James E. Blair wrote:
> Hi,
>
> Monty and I attended the Gerrit User Summit and hackathon last week. It
> was very productive: we learned some good information about upgrading
> Gerrit, received offers of help doing so if we need it, formed closer
> ties with the Gerrit community, and fielded a lot of interest in Reno
> and Zuul. In general, people were happy that we attended as
> representatives of the OpenDev/OpenStack/Zuul communities and (re-)
> engaged with the Gerrit community.
>
> Gerrit Upgrade
> --------------
>
> We learned some practical things about upgrading to 3.0:
>
> * We can turn off rebuilding the secondary index ("reindexing") on
> startup to speed up both our normal restarts and to prevent unwanted
> reindexes during upgrades. (Monty pushed a change for this.)
>
> * We can upgrade from 2.13 -> 2.14 -> 2.15 -> 2.16 during a relatively
> quick downtime. We could actually do some of that while up, but Monty
> and I advocate just taking a downtime to keep things simple.
>
> * We should, under no circumstances, enable NoteDB before 2.16. The
> migration implementation in 2.15 is flawed and will cause delays or
> errors in later upgrades.
>
> * Once on 2.16, we should enable NoteDB and perform the migration. This
> can happen online in the background.
>
> * We should GC the repos before starting, to make reindexing faster.
>
> * We should ensure that we have a sufficiently sized diff cache, so that
> Gerrit will be able to re-use previously computed patchset diffs when
> reindexing. This can considerably speed up an online reindex.
>
> * We should probably run 2.16 in production for some time (1 month?) to
> allow users to acclimate to polygerrit, and deal with hideCI.
>
> * Regarding hideCI -- will someone implement that for polygerrit? will
> it be obviated by improvements in Zuul reporting (tagged or robot
> comments)? even if we improve Zuul, will third-party CI's upgrade?
> do we just ignore it?
>
> * The data in the AccountPatchReviewDb are not very important, and we
> don't need to be too concerned if we lose them during the upgrade.
>
> * We need to pay attention to H2 tuning parameters, because many of the
> caches use H2.
>
> * Luca has offered to provide help if we need it.
>
> I'm sure there's more, but that's a pretty good start. Monty has
> submitted several changes to our configuration of Gerrit with the topic
> "gus2019" based on some of this info.
This is excellent information and makes the Gerrit upgrade seem far more doable. Thank you for this.
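A couple of the knobs mentioned above would, if I'm remembering the Gerrit documentation correctly, end up in gerrit.config looking roughly like the sketch below. The option names and values are from memory and purely illustrative, so double check them against the docs for our version before relying on any of this:

    [index]
      # Don't automatically rebuild the secondary index to a new schema
      # version at startup; run reindexes explicitly when we choose to.
      onlineUpgrade = false

    [cache]
      # Bigger in-memory cache for the H2-backed persistent caches.
      h2CacheSize = 256m

    [cache "diff"]
      # Keep previously computed patchset diffs around on disk so they
      # can be reused during an online reindex.
      diskLimit = 10g

    [cache "diff_intraline"]
      diskLimit = 10g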
>
> Gerrit Community
> ----------------
Snip
> Reno
> ----
Snip
> Zuul
> ----
>
> Zuul is happily used by the propulsion team at Volvo (currently
> v2, working on moving to v3) [2]. Other teams are looking into it.
>
> The Gerrit maintainers are interested in using Zuul to run Gerrit's
> upstream CI system. Monty and I plan on helping to implement that.
>
> We spoke at length with Edwin and Alice who are largely driving the
> development of the new "checks" API in Gerrit. It is partially
> implemented now and operational in upstream Gerrit. As written, we
> would have some difficulty using it effectively with Zuul. However,
> with Zuul as a use case, some further changes can be made so that I
> think it will integrate quite well, and with more work it could become
> a quite nice integration.
>
> At a very high level, a "checker" in Gerrit represents a single
> pass/fail result from a CI system or code analyzer, and must be
> configured on the project in advance by an administrator. Since we want
> Zuul to report the result of each job it runs on a change, and we don't
> know that set of jobs until we start, the current implementation doesn't
> fit the model very well. For the moment, we can use the checks API to
> report the overall buildset result, but not the jobs. We can, of
> course, still use Gerrit change messages to report the results of
> individual jobs just as we do now. But ideally, we'd like to take full
> advantage of the structured data and reporting the checks API provides.
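To make the current state concrete for myself: my understanding is that reporting an overall buildset result through the checks API is a single REST call per revision, roughly like the sketch below. The endpoint and field names are my recollection of the checks plugin documentation, and the base URL, credentials, and checker UUID are placeholders, so treat all of it as an assumption rather than a recipe:

    # Rough sketch of reporting an overall buildset result via the checks plugin.
    import requests

    GERRIT = "https://review.opendev.org"   # placeholder base URL
    AUTH = ("zuul", "http-password")         # placeholder HTTP credentials

    def report_buildset(change_id, revision_id, checker_uuid, success, url):
        payload = {
            "checker_uuid": checker_uuid,
            # SUCCESSFUL/FAILED are the states I remember from the plugin docs.
            "state": "SUCCESSFUL" if success else "FAILED",
            "url": url,  # e.g. a link to the buildset result page
        }
        # Authenticated REST endpoints live under /a/.
        r = requests.post(
            f"{GERRIT}/a/changes/{change_id}/revisions/{revision_id}/checks/",
            json=payload,
            auth=AUTH,
        )
        r.raise_for_status()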
>
> To that end, I've offered to write a design document describing an
> implementation of support for "sub-checks" -- an idea which appeared in
> the original checks API design as a potential follow-up.
>
> Sub-checks would simply be structured data about individual jobs which
> are reported along with the overall check result. With this in place,
> Zuul could get out of the business of leaving comments with links to
> logs, as each sub-check would support its own pass/fail, duration, and
> log url.
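If sub-checks do end up in the API, I'd picture the per-job data looking something like the structure below. To be clear, this is entirely made up to illustrate the fields described above; nothing like it exists in the checks API today:

    # Hypothetical sub-check entry; every field name here is invented to
    # illustrate the per-job pass/fail, duration, and log URL idea.
    sub_check = {
        "name": "openstack-tox-py36",   # job name
        "state": "SUCCESSFUL",          # or FAILED
        "duration": 754,                # seconds the job ran
        "url": "https://zuul.example.org/build/abc123",  # log URL
    }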
>
> Later, we could extend this to support reporting artifact locations as
> well, so that within Gerrit, we would see links to the log URL and docs
> preview sites, etc.
>
> There is an opportunity to do some cross-repo testing between Zuul and
> Gerrit as we work on this.
>
> Upstream Gerrit's own Gerrit server does not have the SSH event stream available,
> so before we can do any work against it, we need an alternative. I
> think the best way forward is to implement partial (experimental)
> support for the checks API, so that we can at least use it to trigger
> and report on changes, get OpenDev's Zuul added as a checker, and then
> work on implementing sub-checks in upstream Gerrit and then Zuul.
How does triggering work with the checks API? I seem to recall, from reading the original design spec for the feature, that CI systems would poll Gerrit for changes that apply to their checkers, giving them a list of items to run. Then, as a future improvement, there was talk of a callback system similar to GitHub's app system?
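So that someone can correct my mental model: I had imagined the polling side looking roughly like the sketch below. The checks.pending endpoint and query operators are from memory of the checks plugin docs, and the base URL, checker UUID, and credentials are placeholders, so treat the details as assumptions:

    # Rough sketch of a CI system polling Gerrit for pending checks.
    import json
    import requests

    GERRIT = "https://gerrit-review.googlesource.com"  # placeholder base URL
    AUTH = ("zuul", "http-password")                    # placeholder credentials
    CHECKER_UUID = "opendev:zuul-check"                 # hypothetical checker UUID

    def pending_checks():
        # The query must name our checker; state operators narrow it to
        # work we haven't picked up yet.
        query = f"checker:{CHECKER_UUID} (state:NOT_STARTED OR state:SCHEDULED)"
        r = requests.get(
            f"{GERRIT}/a/plugins/checks/checks.pending/",
            params={"query": query},
            auth=AUTH,
        )
        r.raise_for_status()
        # Gerrit prefixes JSON responses with )]}' to prevent XSSI; strip it.
        return json.loads(r.text.split("\n", 1)[1])

    for item in pending_checks():
        # Each entry identifies a change/patchset with a pending check for
        # our checker; this is where Zuul would enqueue a buildset.
        print(item)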
>
> Conclusion
> ----------
>
> I'm sure I'm leaving stuff out, so feel free to prompt me with
> questions. In general we got a lot of work done and I think we're set
> up very well for future collaboration.
>
> -Jim
>
> [1]
> https://gerrit-review.googlesource.com/Documentation/dev-contributing.html#design-driven-contribution-process
> [2]
> https://model-engineers.com/en/company/references/success-stories/volvo-cars/