From ssbarnea at redhat.com Sun Sep 1 10:50:18 2019 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Sun, 1 Sep 2019 11:50:18 +0100 Subject: [OpenStack-Infra] is basic auth broken on our gerrit: review.opendev.org? Message-ID: I was trying to query gerrit using pygerrit2, requests and later curl just to find out that apparently our gerrit does not work with basic-auth. $ cat ~/.netrc [11:40:36] machine review.opendev.org login ssbarnea password pL1...EQ $ curl -vn "https://review.opendev.org/changes/?q=owner:self%20status:open" [11:39:54] * Server auth using Basic with user 'ssbarnea' > GET /changes/?q=owner:self%20status:open HTTP/1.1 > Host: review.opendev.org > Authorization: Basic c3NiY....VR Must be signed-in to use owner:self Note: user and password were taken from https://review.opendev.org/#/settings/http-password as we were supposed to. Am I doing something wrong or if not, can we have this fixed? Please note that I really want to be able to load credentials from .netrc file because that is more secure than dealing with credentials in your code. Also, all 3 tools mentioned above are able to load credentials from .netrc file. Thanks Sorin Sbarnea From fungi at yuggoth.org Sun Sep 1 11:28:37 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 1 Sep 2019 11:28:37 +0000 Subject: [OpenStack-Infra] is basic auth broken on our gerrit: review.opendev.org? In-Reply-To: References: Message-ID: <20190901112836.2s5ebz5piunlkzny@yuggoth.org> On 2019-09-01 11:50:18 +0100 (+0100), Sorin Sbarnea wrote: > I was trying to query gerrit using pygerrit2, requests and later > curl just to find out that apparently our gerrit does not work > with basic-auth. [...] > Am I doing something wrong or if not, can we have this fixed? [...] https://review.opendev.org/Documentation/rest-api.html#authentication By default Gerrit uses HTTP digest authentication with the HTTP password from the user’s account settings page. HTTP basic authentication is used if auth.gitBasicAuth is set to true in the Gerrit configuration. It seems like turning that on is probably a reasonable choice if the default prevents pygerrit2 from working. This would, however, need a Gerrit restart to update. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ssbarnea at redhat.com Mon Sep 2 15:58:48 2019 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Mon, 2 Sep 2019 16:58:48 +0100 Subject: [OpenStack-Infra] is basic auth broken on our gerrit: review.opendev.org? In-Reply-To: <20190901112836.2s5ebz5piunlkzny@yuggoth.org> References: <20190901112836.2s5ebz5piunlkzny@yuggoth.org> Message-ID: <850186E2-F437-4B1C-9D86-829B86E7849E@redhat.com> It would be really nice if we could enable HTTPBasicAuth because at this moment talking with multiple gerrit servers seems to be a real challenge. For example: * review.rdoproject.org server works only with BasicAuth and also requires custom url suffix "/r". * review.opendev.org works only with Digest auth Considering that from 2.14 only BasicAuth will be supported, it makes sense to make the move now. PS. I stopped using pygerrit2 and I am using requests directly, as I did not find many benefits in pygerrit so far.
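For what it is worth, a rough sketch of that requests-based approach against the current digest-only setup could look like the snippet below. This is an illustration rather than a tested script: it assumes a ~/.netrc entry like the one shown above, and it uses the /a/ prefix that Gerrit expects for authenticated REST calls (the curl example above leaves that prefix out, which is typically enough to make Gerrit treat the request as anonymous).

    import json
    import netrc
    import requests
    from requests.auth import HTTPDigestAuth

    HOST = "review.opendev.org"
    # Assumes a ~/.netrc entry for this host, like the one shown earlier.
    login, _, password = netrc.netrc().authenticators(HOST)

    # Authenticated REST calls go under /a/; Gerrit also prefixes JSON
    # responses with )]}' on the first line, which must be stripped.
    resp = requests.get(
        f"https://{HOST}/a/changes/?q=owner:self%20status:open",
        auth=HTTPDigestAuth(login, password),
    )
    resp.raise_for_status()
    changes = json.loads(resp.text.split("\n", 1)[1])
    print(f"{len(changes)} open changes")

curl can do much the same with --digest together with -n.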
> On 1 Sep 2019, at 12:28, Jeremy Stanley wrote: > > On 2019-09-01 11:50:18 +0100 (+0100), Sorin Sbarnea wrote: >> I was trying to query gerrit using pygerrit2, requests and later >> curl just to find out that apparently our gerrit does not work >> with basic-auth. > [...] >> Am I doing something wrong or if not, can we have this fixed? > [...] > > https://review.opendev.org/Documentation/rest-api.html#authentication > > By default Gerrit uses HTTP digest authentication with the HTTP > password from the user’s account settings page. HTTP basic > authentication is used if auth.gitBasicAuth is set to true in > the Gerrit configuration. > > It seems like turning that on is probably a reasonable choice if the > default prevents pygerrit2 from working. This would, however, need a > Gerrit restart to update. > -- > Jeremy Stanley > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From fungi at yuggoth.org Mon Sep 2 16:09:57 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 2 Sep 2019 16:09:57 +0000 Subject: [OpenStack-Infra] is basic auth broken on our gerrit: review.opendev.org? In-Reply-To: <850186E2-F437-4B1C-9D86-829B86E7849E@redhat.com> References: <20190901112836.2s5ebz5piunlkzny@yuggoth.org> <850186E2-F437-4B1C-9D86-829B86E7849E@redhat.com> Message-ID: <20190902160957.c7qvk26ukyfxntdo@yuggoth.org> On 2019-09-02 16:58:48 +0100 (+0100), Sorin Sbarnea wrote: [...] > Considering that from 2.14 only BasicAuth will be supported, it > makes sense to make the move now. Yes, I find this a compelling argument, if only to flush out problems before we upgrade to 2.14. > PS. I stopped using pygerrit2 and I am using requests directly, as > I did not find many benefits in pygerrit so far. Pretty much all the OpenDev tools also just interface directly with the Gerrit API. Same for projects like Gertty. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Mon Sep 2 16:21:17 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 02 Sep 2019 09:21:17 -0700 Subject: [OpenStack-Infra] =?utf-8?q?is_basic_auth_broken_on_our_gerrit=3A?= =?utf-8?q?_review=2Eopendev=2Eorg=3F?= In-Reply-To: <850186E2-F437-4B1C-9D86-829B86E7849E@redhat.com> References: <20190901112836.2s5ebz5piunlkzny@yuggoth.org> <850186E2-F437-4B1C-9D86-829B86E7849E@redhat.com> Message-ID: On Mon, Sep 2, 2019, at 8:58 AM, Sorin Sbarnea wrote: > It would be really nice if we could enable HTTPBasicAuth because at > this moment talking with multiple gerrit servers seems to be a real > challenge. > > For example: > * review.rdoproject.org server works only with BasicAuth and also > requires custom url suffix "/r". > * review.opendev.org works only with Digest auth This doesn't seem to be that difficult: http://paste.openstack.org/show/769640/ Note the custom /r suffix is a choice that was made by the hosts of that Gerrit iirc and not a default Gerrit behavior. Basically they moved the root of the Gerrit servers a path level down under /r. > > Considering that from 2.14 only BasicAuth will be supported, it makes > sense to make the move now.
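As an aside, and independent of the paste linked above (which is not reproduced here), one way to avoid hard-coding which scheme each server wants is to ask the server. A rough, untested sketch follows; the /r prefix for review.rdoproject.org is taken from the description earlier in the thread:

    import requests

    def auth_scheme(host, prefix=""):
        # An unauthenticated request to an /a/ URL should return 401 with a
        # WWW-Authenticate challenge naming the scheme the server expects.
        resp = requests.get(f"https://{host}{prefix}/a/changes/?q=status:open")
        challenge = resp.headers.get("WWW-Authenticate", "")
        return challenge.split(" ", 1)[0] or None

    print(auth_scheme("review.opendev.org"))           # expected: Digest
    print(auth_scheme("review.rdoproject.org", "/r"))  # expected: Basic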
From cboylan at sapwetik.org Mon Sep 2 16:28:19 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 02 Sep 2019 09:28:19 -0700 Subject: [OpenStack-Infra] =?utf-8?q?is_basic_auth_broken_on_our_gerrit=3A?= =?utf-8?q?_review=2Eopendev=2Eorg=3F?= In-Reply-To: <20190902160957.c7qvk26ukyfxntdo@yuggoth.org> References: <20190901112836.2s5ebz5piunlkzny@yuggoth.org> <850186E2-F437-4B1C-9D86-829B86E7849E@redhat.com> <20190902160957.c7qvk26ukyfxntdo@yuggoth.org> Message-ID: On Mon, Sep 2, 2019, at 9:09 AM, Jeremy Stanley wrote: > On 2019-09-02 16:58:48 +0100 (+0100), Sorin Sbarnea wrote: > [...] > > Considering that from 2.14 only BasicAuth will be supported, it > > makes sense to make the move now. > > Yes, I find this a compelling argument, if only to flush out > problems before we upgrade to 2.14. No objections to changing it; however, if this is expected to be a breaking change (basic auth means no more digest auth) we'll likely need to communicate that similarly to a gerrit upgrade. > > > PS. I stopped using pygerrit2 and I am using requests directly, as > > I did not find many benefits in pygerrit so far. > > Pretty much all the OpenDev tools also just interface directly with > the Gerrit API. Same for projects like Gertty. > -- > Jeremy Stanley From chussler2050 at gmail.com Mon Sep 2 08:45:34 2019 From: chussler2050 at gmail.com (Chance Huss) Date: Mon, 2 Sep 2019 04:45:34 -0400 Subject: [OpenStack-Infra] [infra][nova] Corrupt nova-specs repo Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Sep 2 18:08:47 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 02 Sep 2019 11:08:47 -0700 Subject: [OpenStack-Infra] [infra][nova] Corrupt nova-specs repo In-Reply-To: References: Message-ID: <972d63fe-8ed9-45c9-adbe-d5652d12eb09@www.fastmail.com> On Mon, Sep 2, 2019, at 1:45 AM, Chance Huss wrote: > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra Can you provide more information? I am able to clone nova-specs from all eight of our gitea backends as well as github and they all fsck cleanly. The gitea web ui also seems to render the base repo page for nova-specs on all eight backends too. Where is the corrupt repo? How are you detecting the corruption or how is it reported? Any info like that will be useful to debug this for you. Clark From cboylan at sapwetik.org Tue Sep 3 00:26:42 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 02 Sep 2019 17:26:42 -0700 Subject: [OpenStack-Infra] Infra Team Meeting September 3, 2019 at 19:00 UTC Message-ID: Sorry for getting this out late. We will meet with this agenda: == Agenda for next meeting == * Announcements ** OpenStack election season is upon us * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) 
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack] ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev * General topics ** Trusty Upgrade Progress (clarkb 20190903) *** Wiki updates ** static.openstack.org (ianw 20190903) *** Sign up for tasks at https://etherpad.openstack.org/p/static-services ** AFS mirroring status (ianw 20190903) *** Did debugging additions help? *** Do we think rsync updates are to blame? Perhaps in newer rsync on Bionic? ** Project Renaming September 16 (clarkb 20190903) ** PTG Planning (clarkb 20190903) *** https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019 * Open discussion From corvus at inaugust.com Wed Sep 4 16:53:54 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 04 Sep 2019 09:53:54 -0700 Subject: [OpenStack-Infra] Report from Gerrit User Summit Message-ID: <87imq8vw7x.fsf@meyer.lemoncheese.net> Hi, Monty and I attended the Gerrit User Summit and hackathon last week. It was very productive: we learned some good information about upgrading Gerrit, received offers of help doing so if we need it, formed closer ties with the Gerrit community, and fielded a lot of interest in Reno and Zuul. In general, people were happy that we attended as representatives of the OpenDev/OpenStack/Zuul communities and (re-) engaged with the Gerrit community. Gerrit Upgrade -------------- We learned some practical things about upgrading to 3.0: * We can turn off rebuilding the secondary index ("reindexing") on startup, to both speed up normal restarts and prevent unwanted reindexes during upgrades. (Monty pushed a change for this.) * We can upgrade from 2.13 -> 2.14 -> 2.15 -> 2.16 during a relatively quick downtime. We could actually do some of that while up, but Monty and I advocate just taking a downtime to keep things simple. * We should, under no circumstances, enable NoteDB before 2.16. The migration implementation in 2.15 is flawed and will cause delays or errors in later upgrades. * Once on 2.16, we should enable NoteDB and perform the migration. This can happen online in the background. * We should GC the repos before starting, to make reindexing faster. * We should ensure that we have a sufficiently sized diff cache, so that Gerrit will be able to re-use previously computed patchset diffs when reindexing. This can considerably speed up an online reindex. * We should probably run 2.16 in production for some time (1 month?) to allow users to acclimate to polygerrit, and deal with hideCI. * Regarding hideCI -- will someone implement that for polygerrit? Will it be obviated by improvements in Zuul reporting (tagged or robot comments)? Even if we improve Zuul, will third-party CIs upgrade? Do we just ignore it? * The data in the AccountPatchReviewDb are not very important, and we don't need to be too concerned if we lose them during the upgrade. * We need to pay attention to H2 tuning parameters, because many of the caches use H2. * Luca has offered to provide any help if we need it. I'm sure there's more, but that's a pretty good start. Monty has submitted several changes to our configuration of Gerrit with the topic "gus2019" based on some of this info. Gerrit Community ---------------- During the hackathon, Monty and I bootstrapped our workstations with a full development environment for Gerrit.
We learned a bit about the new build system (bazel) -- mostly that it's very complicated, changes frequently from version to version, and many of the options are black magic. However, the bazel folks have been convinced that stability is in the community's interest, and an initial stable version is forthcoming. The key practical things we learned are: * Different versions of Gerrit may want different bazel versions (however, I was able to build the tips of all 3 supported branches with the latest bazel). * There is a tool to manage bazel for you (bazelisk), and it will help get the right version of bazel for a given branch/project (it is highly recommended, but in theory (especially with the forthcoming stable release) should not be required (see last point)). * The configuration options specified in the developer documentation are important and correct. Monty fixed instability in our docker image builds by reverting to just those options. * Eclipse (or IntelliJ) are the IDEs of choice. Note that the latest version of Eclipse (which may not be in distros) is required. Of course, these aren't required, but it's Java, so they help a lot. There is a helper script to generate the Eclipse project file. * Switching between branches requires a full rebuild with bazel, and a regeneration/re-import of the Eclipse project. Given that, I suggest this pro-tip: maintain a git repo+Eclipse project for each Gerrit branch you work on. Same for your test Gerrit instance (so you don't have to run "gerrit init" over and over). * The Gerrit maintainers are most easily reachable on Slack. * Monty and I have been given some additional permissions to edit bugs in the issue tracker. They seem fairly willing to give out those permissions if others are interested. * The issue tracker, like most, doesn't receive enough attention when it comes to dealing with old issues. But for newer issues, it still seems of practical use. * The project has formed a steering committee and adopted a design-driven contribution process[1] (not dissimilar to our own specs process). More on this later. Reno ---- The Gerrit maintainers like to make releases at the end of hackathons, and so we all (most especially the maintainers) observed that the current process around manually curating release notes was cumbersome and error-prone. Monty demonstrated Reno to an enthusiastic reception, and he will therefore be working on integrating Reno into Gerrit's release process. Zuul ---- Zuul is happily used by the propulsion team at Volvo (currently v2, working on moving to v3) [2]. Other teams are looking into it. The Gerrit maintainers are interested in using Zuul to run Gerrit's upstream CI system. Monty and I plan on helping to implement that. We spoke at length with Edwin and Alice who are largely driving the development of the new "checks" API in Gerrit. It is partially implemented now and operational in upstream Gerrit. As written, we would have some difficulty using it effectively with Zuul. However, with Zuul as a use case, some further changes can be made so that I think it will integrate quite well, and with more work could be quite a nice integration. At a very high level, a "checker" in Gerrit represents a single pass/fail result from a CI system or code analyzer, and must be configured on the project in advance by an administrator. Since we want Zuul to report the result of each job it runs on a change, and we don't know that set of jobs until we start, the current implementation doesn't fit the model very well.
For the moment, we can use the checks API to report the overall buildset result, but not the jobs. We can, of course, still use Gerrit change messages to report the results of individual jobs just as we do now. But ideally, we'd like to take full advantage of the structured data and reporting the checks API provides. To that end, I've offered to write a design document describing an implementation of support for "sub-checks" -- an idea which appeared in the original checks API design as a potential follow-up. Sub-checks would simply be structured data about individual jobs which are reported along with the overall check result. With this in place, Zuul could get out of the business of leaving comments with links to logs, as each sub-check would support its own pass/fail, duration, and log url. Later, we could extend this to support reporting artifact locations as well, so that within Gerrit, we would see links to the log URL and docs preview sites, etc. There is an opportunity to do some cross-repo testing between Zuul and Gerrit as we work on this. Upstream Gerrit's Gerrit does not have the SSH event stream available, so before we can do any work against it, we need an alternative. I think the best way forward is to implement partial (experimental) support for the checks API, so that we can at least use it to trigger and report on changes, get OpenDev's Zuul added as a checker, and then work on implementing sub-checks in upstream Gerrit and then Zuul. Conclusion ---------- I'm sure I'm leaving stuff out, so feel free to prompt me with questions. In general we got a lot of work done and I think we're set up very well for future collaboration. -Jim [1] https://gerrit-review.googlesource.com/Documentation/dev-contributing.html#design-driven-contribution-process [2] https://model-engineers.com/en/company/references/success-stories/volvo-cars/ From cboylan at sapwetik.org Wed Sep 4 17:07:00 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 04 Sep 2019 10:07:00 -0700 Subject: [OpenStack-Infra] Report from Gerrit User Summit In-Reply-To: <87imq8vw7x.fsf@meyer.lemoncheese.net> References: <87imq8vw7x.fsf@meyer.lemoncheese.net> Message-ID: On Wed, Sep 4, 2019, at 9:53 AM, James E. Blair wrote: > Hi, > > Monty and I attended the Gerrit User Summit and hackathon last week. It > was very productive: we learned some good information about upgrading > Gerrit, received offers of help doing so if we need it, formed closer > ties with the Gerrit community, and fielded a lot of interest in Reno > and Zuul. In general, people were happy that we attended as > representatives of the OpenDev/OpenStack/Zuul communities and (re-) > engaged with the Gerrit community. > > Gerrit Upgrade > -------------- > > We learned some practical things about upgrading to 3.0: > > * We can turn off rebuilding the secondary index ("reindexing") on > startup to speed both or normal restarts as well as prevent unwanted > reindexes during upgrades. (Monty pushed a change for this.) > > * We can upgrade from 2.13 -> 2.14 -> 2.15 -> 2.16 during a relatively > quick downtime. We could actually do some of that while up, but Monty > and I advocate just taking a downtime to keep things simple. > > * We should, under no circumstances, enable NoteDB before 2.16. The > migration implementation in 2.15 is flawed and will cause delays or > errors in later upgrades. > > * Once on 2.16, we should enable NoteDB and perform the migration. This > can happen online in the background. 
> > * We should GC the repos before starting, to make reindexing faster. > > * We should ensure that we have a sufficiently sized diff cache, as that > Gerrit will be able to re-use previously computed patchset diffs when > reindexing. This can considerably speed an onlide reindex. > > * We should probably run 2.16 in production for some time (1 month?) to > allow users to acclimate to polygerrit, and deal with hideCI. > > * Regarding hideCI -- will someone implement that for polygerrit? will > it be obviated by improvements in Zuul reporting (tagged or robot > comments)? even if we improve Zuul, will third-party CI's upgrade? > do we just ignore it? > > * The data in the AccountPatchReviewDb are not very important, and we > don't need to be too concerned if we lose them during the upgrade. > > * We need to pay attention to H2 tuning parameters, because many of the > caches use H2. > > * Luca has offered to provide any help if we need it. > > I'm sure there's more, but that's a pretty good start. Monty has > submitted several changes to our configuration of Gerrit with the topic > "gus2019" based on some of this info. This is excellent information and makes the Gerrit upgrade seem far more doable. Thank you for this. > > Gerrit Community > ---------------- Snip > Reno > ---- Snip > Zuul > ---- > > Zuul is happily used at Volvo by the propulsion team at Volvo (currently > v2, working on moving to v3) [2]. Other teams are looking into it. > > The Gerrit maintainers are interested in using Zuul to run Gerrit's > upstream CI system. Monty and I plan on helping to implement that. > > We spoke at length with Edwin and Alice who are largely driving the > development of the new "checks" API in Gerrit. It is partially > implemented now and operational in upstream Gerrit. As written, we > would have some difficulty using it effectively with Zuul. However, > with Zuul as a use case, some further changes can be made so that I > think it will integrate quite well, and with more work could be a quite > nice integration. > > At a very high level, a "checker" in Gerrit represents a single > pass/fail result from a CI system or code analyzer, and must be > configured on the project in advance by an administrator. Since we want > Zuul to report the result of each job it runs on a change, and we don't > know that set of jobs until we start, the current implementation doesn't > fit the model very well. For the moment, we can use the checks API to > report the overall buildset result, but not the jobs. We can, of > course, still use Gerrit change messages to report the results of > individual jobs just as we do now. But ideally, we'd like to take full > advantage of the structured data and reporting the checks API provides. > > To that end, I've offered to write a design document describing an > implementation of support for "sub-checks" -- an idea which appeared in > the original checks API design as a potential follow-up. > > Sub-checks would simply be structured data about individual jobs which > are reported along with the overall check result. With this in place, > Zuul could get out of the business of leaving comments with links to > logs, as each sub-check would support its own pass/fail, duration, and > log url. > > Later, we could extend this to support reporting artifact locations as > well, so that within Gerrit, we would see links to the log URL and docs > preview sites, etc. > > There is an opportunity to do some cross-repo testing between Zuul and > Gerrit as we work on this. 
> > Upstream Gerrit's Gerrit does not have the SSH event stream available, > so before we can do any work against it, we need an alternative. I > think the best way forward is to implement partial (experimental) > support for the checks API, so that we can at least use it to trigger > and report on changes, get OpenDev's Zuul added as a checker, and then > work on implementing sub-checks in upstream Gerrit and then Zuul. How does triggering work with the checks api? I seem to recall reading the original design spec for the feature and that CI systems would Poll Gerrit for changes that apply to their checks giving them a list of items to run? Then as a future improvement there was talk of having a callback system similar to Github's app system? > > Conclusion > ---------- > > I'm sure I'm leaving stuff out, so feel free to prompt me with > questions. In general we got a lot of work done and I think we're set > up very well for future collaboration. > > -Jim > > [1] > https://gerrit-review.googlesource.com/Documentation/dev-contributing.html#design-driven-contribution-process > [2] > https://model-engineers.com/en/company/references/success-stories/volvo-cars/ From corvus at inaugust.com Wed Sep 4 17:41:20 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 04 Sep 2019 10:41:20 -0700 Subject: [OpenStack-Infra] Report from Gerrit User Summit In-Reply-To: (Clark Boylan's message of "Wed, 04 Sep 2019 10:07:00 -0700") References: <87imq8vw7x.fsf@meyer.lemoncheese.net> Message-ID: <87r24wt0vz.fsf@meyer.lemoncheese.net> "Clark Boylan" writes: > How does triggering work with the checks api? I seem to recall reading > the original design spec for the feature and that CI systems would > Poll Gerrit for changes that apply to their checks giving them a list > of items to run? Then as a future improvement there was talk of having > a callback system similar to Github's app system? Yes, that's essentially correct. That's actually why implementing support for this now in Zuul helps us with the effort to run jobs against upstream Gerrit, since our *only* option there is to poll, due to the lack of stream-events support. The polling operation is designed to be very efficient -- each time you get back a list of changes which are configured to run the checker, but where it hasn't reported start yet. A future enhancement is an event (which would show up in stream-events, so perhaps useful in most installations, but still not upstream gerrit) and also a webhook (which would work in upstream gerrit I think). The event would merely indicate that a poll should be performed. That's good enough, and would allow us to achieve the near-instantaneous response we have now. (Having said that, we may be able to have a fairly frequent poll interval on upstream gerrit without problems.) -Jim From cboylan at sapwetik.org Mon Sep 9 20:52:59 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 09 Sep 2019 13:52:59 -0700 Subject: [OpenStack-Infra] Meeting Agenda for September 10, 2019 Message-ID: We will meet on September 10, 2019 at 19:00UTC with this agenda: == Agenda for next meeting == * Announcements ** clarkb PTL for Ussuri cycle. Planning to not run next cycle. * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) 
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack] ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev * General topics ** Trusty Upgrade Progress (clarkb 201900910) *** Wiki updates ** static.openstack.org (ianw 20190910) *** Sign up for tasks at https://etherpad.openstack.org/p/static-services ** AFS mirroring status (ianw 20190910) *** afs02.dfw.openstack.org was down and rebooted. ** Project Renaming September 16 (clarkb 20190910) ** PTG Planning (clarkb 20190910) *** https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019 ** Volume of files from ARA html reports is problematic (dmsimard 20190910) *** https://etherpad.openstack.org/p/Vz5IzxlWFz *** We disabled root ARA report in jobs *** Added better sharding to upload-logs-swift to spread out the objects better * Open discussion From cboylan at sapwetik.org Mon Sep 16 20:22:19 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 16 Sep 2019 13:22:19 -0700 Subject: [OpenStack-Infra] Meeting Agenda for September 17, 2019 Message-ID: <524bfeb3-4382-40fd-9cdd-214874c5641d@www.fastmail.com> Hello, We will have our weekly meeting on September 17 at 19:00UTC with this agenda: == Agenda for next meeting == * Announcements ** Clarkb at Ansiblefest next week. Likely need volunteer meeting chair. * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack] ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev * General topics ** Trusty Upgrade Progress (clarkb 201900917) *** Wiki updates ** static.openstack.org (ianw 20190917) *** Sign up for tasks at https://etherpad.openstack.org/p/static-services ** PTG Planning (clarkb 20190917) *** https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019 * Open discussion From cboylan at sapwetik.org Mon Sep 23 13:32:44 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 23 Sep 2019 06:32:44 -0700 Subject: [OpenStack-Infra] Infra Meeting Cancelled on September 24, 2019 Message-ID: Per our last meeting [0] we are cancelling tomorrow's meeting. Many of us are traveling and will be unable to attend the normal meeting time. [0] http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-09-17-19.01.log.html#l-19 See you next week! Clark From thierry at openstack.org Tue Sep 24 14:16:39 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 24 Sep 2019 16:16:39 +0200 Subject: [OpenStack-Infra] Retiring stale repositories from the OpenStack org on GitHub Message-ID: Hi everyone, The migration of our infrastructure to the Opendev domain gave us the opportunity to no longer have everything under "openstack" and stop the confusion around what is a part of OpenStack and what is just hosted on the same infrastructure. To that effect, in April we transferred non-OpenStack repositories to their own organization names on Opendev, with the non-claimed ones being kept for the time being under a "x" default organization. 
One consequence of that transition is that non-OpenStack repositories that were previously mirrored to GitHub under the "openstack" organization are now stale and no longer updated, which is very misleading. Those should now be retired, with a clear pointer to the original repository on Opendev. Jim and I volunteered to build tools to handle that retirement and we are now ready to run those Thursday. This will not affect OpenStack repositories or repositories that were already retired or migrated off the OpenStack org on GitHub (think openstack-infra, opendev, airship...). That will only clean up no-longer-mirrored, stale, non-openstack repositories still present in the OpenStack GitHub organization. If you own a non-openstack repository on Opendev and would like to enable GitHub mirroring (to a GitHub org of your choice), it is possible to configure it as part of your Zuul jobs. You can follow instructions at: http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005007.html Cheers, -- Jim and Thierry From thierry at openstack.org Thu Sep 26 16:15:46 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 26 Sep 2019 18:15:46 +0200 Subject: [OpenStack-Infra] Retiring stale repositories from the OpenStack org on GitHub In-Reply-To: References: Message-ID: <64f59674-71f7-3ca4-cd0e-01516ccbe04a@openstack.org> Jim and Thierry wrote: > [...] > One consequence of that transition is that non-OpenStack repositories > that were previously mirrored to GitHub under the "openstack" > organization are now stale and no longer updated, which is very > misleading. Those should now be retired, with a clear pointer to the > original repository on Opendev. Jim and I volunteered to build tools to > handle that retirement and we are now ready to run those Thursday. > > This will not affect OpenStack repositories or repositories that were > already retired or migrated off the OpenStack org on GitHub (think > openstack-infra, opendev, airship...). That will only clean up > no-longer-mirrored, stale, non-openstack repositories still present in > the OpenStack GitHub organization. > > If you own a non-openstack repository on Opendev and would like to > enable GitHub mirroring (to a GitHub org of your choice), it is possible > to configure it as part of your Zuul jobs. You can follow instructions at: > > http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005007.html The stale repositories have now been retired. Let us know if you have any questions. -- Jim and Thierry From iwienand at redhat.com Fri Sep 27 07:26:15 2019 From: iwienand at redhat.com (Ian Wienand) Date: Fri, 27 Sep 2019 17:26:15 +1000 Subject: [OpenStack-Infra] CentOS 8 as a Python 3-only base image Message-ID: <20190927072615.GA24684@fedora19.localdomain> Hello, All our current images use dib's "pip-and-virtualenv" element to ensure the latest pip/setuptools/virtualenv are installed, and /usr/bin/pip installs Python 2 packages and /usr/bin/pip3 installs Python 3 packages. The upshot of this is that all our base images have Python 2 and 3 installed (even "python 3 first" distros like Bionic). We have to make a decision if we want to continue this with CentOS 8; to be specific, the change [1]. Installing pip and virtualenv from upstream sources has a long history full of bugs and workarounds nobody wants to think about (if you do want to think about it, you can start at [2]).
A major problem has been that we have to put these packages on "hold", to avoid the situation where the packaged versions are re-installed over the upstream versions, creating a really big mess of mixed up versions. I'm thinking that CentOS 8 is a good place to stop this. We just won't support, in dib, installing pip/virtualenv from source for CentOS 8. We hope for the best that the packaged versions of tools are always working, but *if* we do require fixes to the system packages, we will implement that inside jobs directly, rather than on the base images. I think in the 2019 world this is increasingly less likely, as we have less reliance on older practices like mixing system-wide installs (umm, yes devstack ... but we have a lot of work getting centos8 stable there anyway) and the Zuul v3 world makes it much easier to deploy isolated fixes as roles should we need. If we take this path, the images will be Python 3 only -- we recently turned Ansible's "ansible_python_interpreter" to Python 3 for Fedora 30 and after a little debugging I think that is ready to go. Of course jobs can install the Python 2 environment should they desire. Any comments here, or in the review [1], are welcome. Thanks, -i [1] https://review.opendev.org/684462 [2] https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/pip-and-virtualenv/install.d/pip-and-virtualenv-source-install/04-install-pip#L73 From fungi at yuggoth.org Fri Sep 27 11:09:22 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 27 Sep 2019 11:09:22 +0000 Subject: [OpenStack-Infra] CentOS 8 as a Python 3-only base image In-Reply-To: <20190927072615.GA24684@fedora19.localdomain> References: <20190927072615.GA24684@fedora19.localdomain> Message-ID: <20190927110922.s65cxjmclzhkoskt@yuggoth.org> On 2019-09-27 17:26:15 +1000 (+1000), Ian Wienand wrote: [...] > Installing pip and virtualenv from upstream sources has a long history > full of bugs and workarounds nobody wants to think about (if you do > want to think about it, you can start at [2]). [...]
> > I'm thinking that CentOS 8 is a good place to stop this. We just > > won't support, in dib, installing pip/virtualenv from source for > > CentOS 8. We hope for the best that the packaged versions of tools > > are always working, but *if* we do require fixes to the system > > packages, we will implement that inside jobs directly, rather than on > > the base images. > [...] > > This seems like a reasonable shift to me. I'd eventually love to see > us stop preinstalling pip and virtualenv entirely, allowing jobs to > take care of doing that at runtime if they need to use them. +1 for this, it'll simplify building nodepool images a lot more > -- > Jeremy Stanley > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From iwienand at redhat.com Mon Sep 30 06:30:53 2019 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 30 Sep 2019 16:30:53 +1000 Subject: [OpenStack-Infra] CentOS 8 as a Python 3-only base image In-Reply-To: <20190927110922.s65cxjmclzhkoskt@yuggoth.org> References: <20190927072615.GA24684@fedora19.localdomain> <20190927110922.s65cxjmclzhkoskt@yuggoth.org> Message-ID: <20190930063053.GA24287@fedora19.localdomain> On Fri, Sep 27, 2019 at 11:09:22AM +0000, Jeremy Stanley wrote: > I'd eventually love to see us stop preinstalling pip and virtualenv > entirely, allowing jobs to take care of doing that at runtime if > they need to use them. You'd think, right? :) But it is a bit of a can of worms ... So pip is a part of Python 3 ... "dnf install python3" brings in python3-pip unconditionally. So there will always be a pip on the host. For CentOS 8 that's pip version 9.something (upstream is on 19.something). This is where traditionally we've had problems; requirements, etc. uses some syntax feature that tickles a bug in old pip and we're back to trying to override the default version. I think we can agree to try and mitigate that in jobs, rather than in base images. But as an additional complication, CentOS 8 ships its "platform-python" which is used by internal tools like dnf. The thing is, we have Python tools that could probably reasonably be considered platform tools like "glean" which instantiates our networking. I am not sure if "/usr/libexec/platform-python -m pip install glean" is considered an abuse or a good way to install against a stable Python version. I'll go with the latter ... But ... platform-python doesn't have virtualenv (separate package on Python 3). Python documentation says that "venv" is a good way to create a virtual environment and basically suggests it can do things better than virtualenv because it's part of the base Python and so doesn't have to have a bunch of hacks. Then the virtualenv documentation throws some shade at venv saying "a subset of [virtualenv] has been integrated into the standard library" and lists why virtualenv is better. Now we have *three* choices for a virtual environment: venv with either platform python or packaged python, or virtualenv with packaged python. Which should an element choose, if it wants to set up tools on the host during image build? And how do we stop every element having to hard-code all this logic into itself over and over?
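To make the trade-off concrete, below is a rough sketch of the kind of selection logic every element (or job) would otherwise end up re-implementing on its own. The paths, ordering and fallbacks here are illustrative assumptions only, not what dib actually ships:

    import os
    import subprocess

    def host_python():
        # RHEL 8 / CentOS 8: prefer the stable platform-python for "platform" tools.
        if os.path.exists("/usr/libexec/platform-python"):
            return "/usr/libexec/platform-python"
        # Python 3 era distros (Bionic, Fedora, ...).
        if os.path.exists("/usr/bin/python3"):
            return "/usr/bin/python3"
        # Python 2 era fallback (Trusty, CentOS 7).
        return "/usr/bin/python2"

    def make_env(path):
        py = host_python()
        if py.endswith("python2"):
            # Python 2 era: no stdlib venv, so fall back to virtualenv.
            subprocess.check_call(["virtualenv", "-p", py, path])
        else:
            # Python 3 / platform-python: stdlib venv, no extra package needed.
            subprocess.check_call([py, "-m", "venv", path])
        return os.path.join(path, "bin", "pip")

Centralising a decision like this in one place is essentially what the DIB_PYTHON_PIP / DIB_PYTHON_VIRTUALENV proposal described below does.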
Where I came down on this is : https://review.opendev.org/684462 : this stops installing from source on CentOS 8, which I think we all agree on. It makes some opinionated decisions in creating DIB_PYTHON_PIP and DIB_PYTHON_VIRTUALENV variables that will "do the right thing" when used by elements: * Python 2 first era (trusty/centos7) will use python2 pip and virtualenv * Python 3 era (bionic/fedora) will use python3 pip and venv (*not* virtualenv) * RHEL8/CentOS 8 will use platform-python pip & venv https://review.opendev.org/685643 : above in action; installing glean correctly on all supported distros. -i From fungi at yuggoth.org Mon Sep 30 11:47:05 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 30 Sep 2019 11:47:05 +0000 Subject: [OpenStack-Infra] CentOS 8 as a Python 3-only base image In-Reply-To: <20190930063053.GA24287@fedora19.localdomain> References: <20190927072615.GA24684@fedora19.localdomain> <20190927110922.s65cxjmclzhkoskt@yuggoth.org> <20190930063053.GA24287@fedora19.localdomain> Message-ID: <20190930114704.2yovgknlh4z5cf7z@yuggoth.org> On 2019-09-30 16:30:53 +1000 (+1000), Ian Wienand wrote: > On Fri, Sep 27, 2019 at 11:09:22AM +0000, Jeremy Stanley wrote: > > I'd eventually love to see us stop preinstalling pip and virtualenv > > entirely, allowing jobs to take care of doing that at runtime if > > they need to use them. > > You'd think, right? :) But it is a bit of a can of worms ... [...] > * Python 2 first era (trusty/centos7) will use python2 pip and virtualenv > * Python 3 era (bionic/fedora) will use python3 pip and venv (*not* > virtualenv) > * RHEL8/CentOS 8 will use platform-python pip & venv [...] This is basically what I had in mind, yes. We could also clean up after ourselves on ubuntu/debian where python3-pip and python3-venv are in their own separate packages which aren't typically installed by default on those platforms. But at any rate, I think we're at least far enough along with the Python packaging ecosystem that the available versions of pip contemporary with the Python interpreters on new platforms (not trusty/centos7) are fairly stable when it comes to featureset... unless we want to start making use of some of the new PEP 517 work. As for venv, I've found that tox works great with it too, you just need to install the tox-venv plugin for tox and it doesn't bother with virtualenv at all (been using that with some personal projects for many months already). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Mon Sep 30 23:48:35 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 30 Sep 2019 16:48:35 -0700 Subject: [OpenStack-Infra] Infra Meeting Agenda for October 1, 2019 Message-ID: We will be meeting at 19:00UTC on October 1, 2019 with this agenda: == Agenda for next meeting == * Announcements * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) 
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack] ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev * General topics ** Trusty Upgrade Progress (clarkb 20191001) *** Wiki updates ** static.openstack.org (ianw 20191001) *** Review spec: https://review.opendev.org/683852 *** Sign up for tasks at https://etherpad.openstack.org/p/static-services ** PTG Planning (clarkb 20191001) *** https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019 *** http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009733.html Strawman schedule * Open discussion