From tdecacqu at redhat.com Tue Jul 3 07:39:58 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 03 Jul 2018 07:39:58 +0000 Subject: [OpenStack-Infra] log-classify project update (anomaly detection in CI/CD logs) Message-ID: <1530601298.luby16yqut.tristanC@fedora> Hello, This is a follow-up to the initial project creation thread[0]. At the Vancouver Summit, we met to discuss ML for CI[1] and I led a workshop on logreduce[2]. The log-classify project bootstrap is still waiting for review[3] and I am still looking forward to pushing the logreduce[4] source code into openstack-infra/log-classify. The current implementation is working fine and I am going to enable it for every job running on Software Factory. However, the core process, HashingNeighbors[5], is rather slow (0.3 MB per second) and I would like to improve it and/or implement other algorithms. To do that effectively, we need to gather more datasets[6]. I would like to propose some enhancements to the os-loganalyze[7] middleware to enable users to annotate and report anomalies they find in log files. To store the anomaly references, an additional webservice, or perhaps direct access to an elasticsearch cluster, would be required. In parallel, we need to collect the users' feedback and create datasets daily using the baseline available at the time each anomaly was discovered. Ideally, we would create an ipfs (or any other network filesystem) that could then be used by anyone willing to work on $subject. There is a lot to do and it will be challenging. To that effect, I would like to propose an initial meeting with all interested parties. Please register your irc name and timezone in this etherpad: https://etherpad.openstack.org/p/log-classify Due to OpenStack's exceptional infrastructure and the recent Zuul v3 release, I think we are in a strong position to tackle this challenge. Other suggestions for bootstrapping this effort within our community are welcome. Best regards, -Tristan [0] http://lists.openstack.org/pipermail/openstack-infra/2017-November/005676.html [1] https://etherpad.openstack.org/p/YVR-ml-ci-results [2] https://github.com/TristanCacqueray/anomaly-detection-workshop-opendev [3] https://review.openstack.org/#/q/topic:crm-import [4] git clone https://softwarefactory-project.io/r/logreduce [5] https://softwarefactory-project.io/cgit/logreduce/tree/logreduce/models.py [6] https://softwarefactory-project.io/cgit/logreduce-tests/tree/tests [7] https://review.openstack.org/#/q/topic:loganalyze-user-feedback From daragh.bailey at gmail.com Wed Jul 4 21:32:53 2018 From: daragh.bailey at gmail.com (Darragh Bailey) Date: Wed, 4 Jul 2018 22:32:53 +0100 Subject: [OpenStack-Infra] What's the future for git-review? Message-ID: Hi, Firstly, thanks for git-review, it's such a useful tool, and I use it all the time when working with Gerrit: on some openstack projects (including the odd patch to git-review), on various projects at work, and for the very rare patch to Gerrit or its plugins. Based on the comments at https://git.openstack.org/cgit/openstack-infra/git-review/tree/CONTRIBUTING.rst#n5, git-review is considered feature complete, and as a consequence it seems that reviewers have mostly moved on to other projects, so it can take quite some time to get reviews. 
Perfectly understandable, everyone can only do so much and needs to pick something(s) to prioritise. However, this is such a useful tool for working with Gerrit from the command line, and seems to be the de facto git subcommand for interfacing with Gerrit, that it seems a shame to limit it. While I think there are a number of current reviews that would be beneficial to git-review, as well as some pieces that don't appear to be there currently, I'm reluctant to invest much time as it seems unlikely enhancements would be accepted given the current state of feature complete. Instead of putting together various changes to see if they might be reviewed and accepted, I'm hoping a chat about what paths might be available could save a bit of time. There are a couple of things that I would like to work towards: * Change the tests to use a single gerrit with separate projects instead of separate instances (faster testing) * Allow the tests to run against multiple versions of Gerrit (ensure compatibility) * Fix and land many of the changes making it easier to download changes, list changes ordered with their dependencies, stashing when downloading, etc * Have git-review auto configure refs/notes/review (assuming it's available) for fetching on setup (I find it very handy and I'm always forgetting to do this; see the sketch after this message) And potentially controversially; support other workflows and options outside of the OpenStack workflow. Although maybe not directly, and still keeping the OpenStack one as the default. I think there are a couple of ways that could be achieved, but I can't see any of them working well without a decent amount of refactoring. * Have git-review provide the APIs so that someone may define a git-review- that can add their workflow * Add support for additional behaviour to be defined with refs/meta/config of projects * Allow extensions to be installed that allow additional options to be added to the git-review CLI and config file That last one might require being able to specify the additional required plugins to be listed in .gitreview, and providing the documentation might be trickier? Basically, make it easier to add custom behaviour without it being built in to git-review, and without needing to reimplement a whole load of functionality elsewhere. But I'm pretty sure that all requires a substantial rewrite. Thoughts? Is it worth putting a plan together around some of the initial changes? And then revisiting what would be needed to allow extensions around other workflows? -- Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" 
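For reference, the refs/notes/review bootstrapping mentioned in the list above boils down to a couple of standard git commands. A minimal sketch follows; the 'gerrit' remote name matches git-review's usual convention, but the exact invocations are an assumption about how the feature might work, not what git-review currently runs:

    # fetch Gerrit's review notes alongside the normal refs
    git config --add remote.gerrit.fetch '+refs/notes/*:refs/notes/*'
    git fetch gerrit
    # display the review notes in git log output by default
    git config core.notesRef refs/notes/review

With that in place, a plain `git log` annotates each merged commit with its review metadata.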
From ssbarnea at redhat.com Wed Jul 4 22:07:33 2018 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Wed, 4 Jul 2018 23:07:33 +0100 Subject: [OpenStack-Infra] What's the future for git-review? In-Reply-To: References: Message-ID: Indeed, git-review is one of the tools I use the most and it is sad it didn't get more attention lately.... Clearly a project like this should not have open CRs that are more than two years old and never got any feedback --- it sends a clear (bad) message to other potential contributors. I am willing to spend a little bit of my time doing revival work on the project, like we did with python-jenkins and jjb. > On 4 Jul 2018, at 22:32, Darragh Bailey wrote: > > Hi, > > > Firstly, thanks for git-review, it's such a useful tool, and I use it all the time when working with Gerrit: on some openstack projects (including the odd patch to git-review), on various projects at work, and for the very rare patch to Gerrit or its plugins. > > Based on the comments at https://git.openstack.org/cgit/openstack-infra/git-review/tree/CONTRIBUTING.rst#n5 , git-review is considered feature complete, and as a consequence it seems that reviewers have mostly moved on to other projects, so it can take quite some time to get reviews. Perfectly understandable, everyone can only do so much and needs to pick something(s) to prioritise. However, this is such a useful tool for working with Gerrit from the command line, and seems to be the de facto git subcommand for interfacing with Gerrit, that it seems a shame to limit it. > > While I think there are a number of current reviews that would be beneficial to git-review, as well as some pieces that don't appear to be there currently, I'm reluctant to invest much time as it seems unlikely enhancements would be accepted given the current state of feature complete. Instead of putting together various changes to see if they might be reviewed and accepted, I'm hoping a chat about what paths might be available could save a bit of time. > > There are a couple of things that I would like to work towards: > * Change the tests to use a single gerrit with separate projects instead of separate instances (faster testing) > * Allow the tests to run against multiple versions of Gerrit (ensure compatibility) > * Fix and land many of the changes making it easier to download changes, list changes ordered with their dependencies, stashing when downloading, etc > * Have git-review auto configure refs/notes/review (assuming it's available) for fetching on setup (I find it very handy and I'm always forgetting to do this) > > And potentially controversially; support other workflows and options outside of the OpenStack workflow. Although maybe not directly, and still keeping the OpenStack one as the default. > > I think there are a couple of ways that could be achieved, but I can't see any of them working well without a decent amount of refactoring. > > * Have git-review provide the APIs so that someone may define a git-review- that can add their workflow > * Add support for additional behaviour to be defined with refs/meta/config of projects > * Allow extensions to be installed that allow additional options to be added to the git-review CLI and config file > > That last one might require being able to specify the additional required plugins to be listed in .gitreview, and providing the documentation might be trickier? > > Basically, make it easier to add custom behaviour without it being built in to git-review, and without needing to reimplement a whole load of functionality elsewhere. But I'm pretty sure that all requires a substantial rewrite. > > > Thoughts? Is it worth putting a plan together around some of the initial changes? And then revisiting what would be needed to allow extensions around other workflows? > > > -- > Darragh Bailey > "Nothing is foolproof to a sufficiently talented fool" > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra 
From fungi at yuggoth.org Thu Jul 5 01:57:17 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Jul 2018 01:57:17 +0000 Subject: Re: [OpenStack-Infra] What's the future for git-review? In-Reply-To: References: Message-ID: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> On 2018-07-04 22:32:53 +0100 (+0100), Darragh Bailey wrote: [...] > Based on the comments at > https://git.openstack.org/cgit/openstack-infra/git-review/tree/CONTRIBUTING.rst#n5, > git-review is considered feature complete, and as a consequence it > seems that reviewers have mostly moved on to other projects, so it > can take quite some time to get reviews. Perfectly understandable, > everyone can only do so much and needs to pick something(s) to > prioritise. However, this is such a useful tool for working with > Gerrit from the command line, and seems to be the de facto git > subcommand for interfacing with Gerrit, that it seems a shame to > limit it. At one time git-review had started to suffer from scope creep and was getting more random proposals for new features than actual stability improvements. Its test coverage was, for quite a long while, also sub-par, so some of the feature additions which did get accepted introduced regressions that went unnoticed sometimes for weeks or months before we'd discover they needed reverting. The original authors intended for git-review to be primarily focused on bootstrapping Gerrit connectivity from a cloned Git repository as well as simplifying the basic Git commands for retrieving changes from and pushing changes to a Gerrit. That update to the CONTRIBUTING.rst file was intended to put the brakes on future scope creep, especially in cases where an added feature would work just as well as its own separate git subcommand. > While I think there are a number of current reviews that would be > beneficial to git-review, as well as some pieces that don't appear > to be there currently, I'm reluctant to invest much time as it > seems unlikely enhancements would be accepted given the current > state of feature complete. Instead of putting together various > changes to see if they might be reviewed and accepted, I'm hoping a > chat about what paths might be available could save a bit of time. I've tried to go in and approve changes from time to time, but in all honesty the negativity I've received in the past when attempting to push back on feature additions has caused me to deprioritize reviewing more changes for it. I should probably just buck up and go in with a (very polite) machete anyway. > There are a couple of things that I would like to work towards: > > * Change the tests to use a single gerrit with separate projects > instead of separate instances (faster testing) This seems reasonable if it doesn't introduce new races or odd test interdependencies from the reduced fixture isolation. I really have never been fond of the integration-testing-only model we ended up with though. I originally recommended lower-level unit testing with mocks for the Git and SSH interactions, but the one volunteer we got to implement a testsuite chose to automate Gerrit installation so it is what it is at the moment. > * Allow the tests to run against multiple versions of Gerrit (ensure > compatibility) This seems reasonable. 
We should have been bumping the Gerrit versions in the tests and/or running more jobs for different releases of it but the way version selection was implemented would need a bit of an overhaul to accommodate that. > * Fix and land many of the changes making it easier to download > changes, list changes ordered with their dependencies, stashing > when downloading, etc The change listing feature really seems increasingly out of place to me, and most of the "fixes" I saw related to it were about supporting more and more of Gerrit's query language and terminal formatting. If we deprecated/removed that and recommended interacting directly with Gerrit or alternative utilities for change searches (there are a lot more options for this than there were back when git-review was first written) all of those would become unnecessary and the code would be simplified at the same time. > * Have git-review auto configure refs/notes/review (assuming it's > available) for fetching on setup (I find it very handy and I'm > always forgetting to do this) I could see this being in scope, as it fits with the Gerrit connectivity bootstrapping mission. I too find the notes refs handy but have a global configuration in my ~/.gitconfig which seems to do the trick already so I'm curious to find out how git-review might improve on that. > And potentially controversially; support other workflows and > options outside of the OpenStack workflow. Although maybe not > directly, and still keeping the OpenStack one as the default. I'd love to know what about git-review is focused on OpenStack's workflow. We tried to make it as generic as possible. If there are any OpenStack-specific features still lingering in there, we should see about ripping them out as soon as is feasible. One that I'm aware of is the default topic mangling based on commit message parsing, which I've been wanting to eradicate for a while since Gerrit now makes altering topics possible without needing to push a new commit. For that matter, setting the topic based on the local branch name could also get tossed while we're at it, and just keep the -t option for directly specifying a change topic when people really want to do it at time of upload. > I think there are a couple of ways that could be achieved, but I > can't see any of them working well without a decent amount of > refactoring. > > * Have git-review provide the APIs so that someone may define a > git-review- that can add their workflow > > * Add support for additional behaviour to be defined with > refs/meta/config of projects > > * Allow extensions to be installed that allow additional options > to be added to the git-review CLI and config file > > That last one might require being able to specify the additional > required plugins to be listed in .gitreview, and providing the > documentation might be trickier? > > Basically make it easier to add custom behaviour without it being > builtin to git-review, and without needing to reimplement a whole > load of functionality elsewhere. But I'm pretty sure that all > requires a substantial rewrite. I'd need some concrete use case examples. From my perspective, git-review is already a plugin (by way of being a git subcommand) so adding plugins to the plugin seems like a layer violation. The examples I've seen in the past for adding new behaviors were things which made more sense to me as new git subcommands. For a counterexample, James Blair created git-restack not too long ago... 
it could have been implemented as a git-review option, but was sanely made to be its own distinct git subcommand instead. > Thoughts? Is it worth putting a plan together around some of the > initial changes? And then revisiting what would be needed to allow > extensions around other workflows? I'm all for plans to improve git-review's stability, test coverage and, most of all, simplicity. Thanks for raising the topic! -- Jeremy Stanley From tdecacqu at redhat.com Thu Jul 5 09:17:17 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 05 Jul 2018 09:17:17 +0000 Subject: [OpenStack-Infra] [all] log-classify project update (anomaly detection in CI/CD logs) In-Reply-To: <1530601298.luby16yqut.tristanC@fedora> References: <1530601298.luby16yqut.tristanC@fedora> Message-ID: <1530780669.k1udih7bo7.tristanC@fedora> On July 3, 2018 7:39 am, Tristan Cacqueray wrote: [...] > There is a lot to do and it will be challenging. To that effect, I would > like to propose an initial meeting with all interested parties. > Please register your irc name and timezone in this etherpad: > > https://etherpad.openstack.org/p/log-classify > So far, the mean timezone is UTC+1.75; I've added date proposals from the 16th to the 20th of July. Please add a '+' to the ones you can attend. I'll follow up next week with an ical file for the most popular. Thanks, -Tristan From corvus at inaugust.com Thu Jul 5 14:42:54 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 05 Jul 2018 07:42:54 -0700 Subject: Re: [OpenStack-Infra] What's the future for git-review? In-Reply-To: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> (Jeremy Stanley's message of "Thu, 5 Jul 2018 01:57:17 +0000") References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> Message-ID: <878t6pvj4h.fsf@meyer.lemoncheese.net> Jeremy already articulated my thoughts well; I don't have much to add. But I think it's important to reiterate that I find it extremely valuable that git-review perform its function ("push changes to Gerrit") simply and reliably. There are certainly projects we've created which are neglected due to lack of time or interest. I don't think git-review is one of them. I think with agreement on scope, you'll find that we are interested in maintaining it. Again, I agree with Jeremy's evaluations of Darragh's proposals. I also don't think there is (or should be) anything OpenStack-specific about it. I see it as an essential component of any Gerrit system. -Jim From agrimberg at linuxfoundation.org Thu Jul 5 16:03:44 2018 From: agrimberg at linuxfoundation.org (Andrew Grimberg) Date: Thu, 5 Jul 2018 09:03:44 -0700 Subject: Re: [OpenStack-Infra] What's the future for git-review? In-Reply-To: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> Message-ID: <947a638f-95d6-a5fd-8abe-8b9f329ab606@linuxfoundation.org> On 07/04/2018 06:57 PM, Jeremy Stanley wrote: >> And potentially controversially; support other workflows and >> options outside of the OpenStack workflow. Although maybe not >> directly, and still keeping the OpenStack one as the default. > > I'd love to know what about git-review is focused on OpenStack's > workflow. 
We tried to make it as generic as possible. If there are > any OpenStack-specific features still lingering in there, we should > see about ripping them out as soon as is feasible. One that I'm > aware of is the default topic mangling based on commit message > parsing, which I've been wanting to eradicate for a while since > Gerrit now makes altering topics possible without needing to push a > new commit. Personally I don't know what OpenStack-specific workflows are being referred to here either. I use git-review on a lot of Gerrit systems that aren't OpenStack and there is nothing in standard usage that screams "this isn't standard / default Gerrit workflows". > For that matter, setting the topic based on the local > branch name could also get tossed while we're at it, and just keep > the -t option for directly specifying a change topic when people > really want to do it at time of upload. Personally I would find this a regression. We inform our communities to use local branches and git-review all the time and tell them it will take care of setting the topic as long as they do that. It's an extremely useful feature and I rely upon it daily! I would hate to have to add an extra flag to my review pushes. -Andy- -- Andrew J Grimberg Lead, IT Release Engineering The Linux Foundation From fungi at yuggoth.org Thu Jul 5 16:13:16 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Jul 2018 16:13:16 +0000 Subject: Re: [OpenStack-Infra] What's the future for git-review? In-Reply-To: <947a638f-95d6-a5fd-8abe-8b9f329ab606@linuxfoundation.org> References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> <947a638f-95d6-a5fd-8abe-8b9f329ab606@linuxfoundation.org> Message-ID: <20180705161316.pnjmxdlo2lmnrvpq@yuggoth.org> On 2018-07-05 09:03:44 -0700 (-0700), Andrew Grimberg wrote: > On 07/04/2018 06:57 PM, Jeremy Stanley wrote: [...] > > For that matter, setting the topic based on the local branch > > name could also get tossed while we're at it, and just keep the > > -t option for directly specifying a change topic when people > > really want to do it at time of upload. > > Personally I would find this a regression. We inform our > communities to use local branches and git-review all the time and > tell them it will take care of setting the topic as long as they > do that. It's an extremely useful feature and I rely upon it > daily! I would hate to have to add an extra flag to my review > pushes. Very helpful feedback, thanks! I'm on the fence about that one simply because the only reason git-review cared to set review topics at all originally was that at the time Gerrit only allowed you to do that when pushing a new commit. They've since separated topic modification out into its own action which can be done from the WebUI or API on an existing change without altering anything else about it. I do find the topic-branch-sets-change-topic behavior sort of unclean from an idempotency standpoint, as `git-review -d` followed by `git review` will alter the topic of your existing change to be the change index number when I'd rather it just left the topic alone. My bigger concern is that git-review attempts to autodetect possible topic names based on (at this point increasingly outmoded) OpenStack-community-specific commit message footer contents like Implements and Closes-Bug. 
These I see as a nuisance and codification of OpenStackisms we should cleanse from the codebase. -- Jeremy Stanley From pabelanger at redhat.com Thu Jul 5 17:21:07 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 5 Jul 2018 13:21:07 -0400 Subject: [OpenStack-Infra] zuulv3 feedback for 3pci Message-ID: <20180705172107.GA32552@localhost.localdomain> Greetings, Over the last few weeks I've been helping the RDO project migrate away from zuulv2 (jenkins) to zuulv3. Today all jobs have been migrated with the help of the zuul-migrate script. We'll start deleting the jenkins bits in the next few days. I wanted to get down some things I've noticed in the process as feedback to third-party CI operators. Hopefully this will help others. Removal of zuul-cloner ---------------------- This was by far the largest issue we had in the RDO project. The first thing it meant was the need for much more HDD space. We almost quadrupled the storage quota needed to run zuulv3 properly because we could no longer use zuul-cloner against git.o.o. Right now RDO is running 4 zuul-executors / 4 zuul-mergers, and with the increase in storage requirements this also meant we needed faster disks. The previous servers used under zuulv2 couldn't handle the IO now required, so we've had to rebuild them backed with SSD; previously they could be booted from volume on ceph. Need for use-cached-repos ------------------------- Today, use-cached-repos is only available to openstack-infra/project-config; we should promote this into zuul-jobs to help reduce the amount of pressure on zuul-executors when jobs start. In the case of 3pci, the prepare-workspace role isn't up to the task of syncing everything at once. The feedback here is to somehow allow the base job to be smart enough to work whether a project is found in /opt/git or not. Today we have 2 different images in rdo: 1 has the cache of upstream git.o.o and the other doesn't. Namespace projects with fqdn ---------------------------- This one is likely unique to rdoproject, but because we have 2 connections to different gerrit systems, review.rdoproject.org and git.openstack.org, we actually have duplicate project names. For example: openstack/tripleo-common which means, for zuul, we have to write projects as: project: name: git.openstack.org/openstack/tripleo-common project: name: review.rdoproject.org/openstack/tripleo-common There are legacy reasons for this, and we plan on cleaning up review.r.o; however, because of this duplication we cannot use upstream jobs right now. My initial thought would be to update jobs, in this case devstack, to use the following for required-projects: required-projects: - git.openstack.org/openstack-dev/devstack - git.openstack.org/openstack/tripleo-common and propose the patch upstream (a fuller sketch follows at the end of this mail). Again, this is likely specific to rdoproject, but right now it blocks them from loading jobs from zuul.o.o. I do have some other suggestions, but they are more specific to zuul. I could post them here as a follow-up or on the zuul ML. I am happy I was able to help in the original migration of the openstack projects from jenkins to zuulv3; it did help a lot when I was debugging zuul failures. But overall the RDO project didn't have any major issues with job content. Thanks, Paul
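To illustrate the required-projects point above, a minimal sketch of what an FQDN-namespaced job could look like; the job name and parent are assumptions for illustration, and only the required-projects entries come from the mail itself:

  - job:
      name: tripleo-common-devstack   # hypothetical job name
      parent: devstack
      required-projects:
        # fully-qualified names disambiguate projects that exist on
        # more than one gerrit connection
        - git.openstack.org/openstack-dev/devstack
        - git.openstack.org/openstack/tripleo-common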
From doug at doughellmann.com Thu Jul 5 17:23:57 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 05 Jul 2018 13:23:57 -0400 Subject: Re: [OpenStack-Infra] What's the future for git-review? In-Reply-To: <20180705161316.pnjmxdlo2lmnrvpq@yuggoth.org> References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> <947a638f-95d6-a5fd-8abe-8b9f329ab606@linuxfoundation.org> <20180705161316.pnjmxdlo2lmnrvpq@yuggoth.org> Message-ID: <1530811233-sup-9403@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-07-05 16:13:16 +0000: > On 2018-07-05 09:03:44 -0700 (-0700), Andrew Grimberg wrote: > > On 07/04/2018 06:57 PM, Jeremy Stanley wrote: > [...] > > > For that matter, setting the topic based on the local branch > > > name could also get tossed while we're at it, and just keep the > > > -t option for directly specifying a change topic when people > > > really want to do it at time of upload. > > > > Personally I would find this a regression. We inform our > > communities to use local branches and git-review all the time and > > tell them it will take care of setting the topic as long as they > > do that. It's an extremely useful feature and I rely upon it > > daily! I would hate to have to add an extra flag to my review > > pushes. > > Very helpful feedback, thanks! I'm on the fence about that one > simply because the only reason git-review cared to set review topics > at all originally was that at the time Gerrit only allowed you to do > that when pushing a new commit. They've since separated topic > modification out into its own action which can be done from the > WebUI or API on an existing change without altering anything else > about it. I do find the topic-branch-sets-change-topic behavior sort > of unclean from an idempotency standpoint, as `git-review -d` > followed by `git review` will alter the topic of your existing > change to be the change index number when I'd rather it just left > the topic alone. > > My bigger concern is that git-review attempts to autodetect possible > topic names based on (at this point increasingly outmoded) > OpenStack-community-specific commit message footer contents like > Implements and Closes-Bug. These I see as a nuisance and > codification of OpenStackisms we should cleanse from the codebase. I also rely heavily on the use of branch names to set the topic. The bug and blueprint detection logic is less important to me personally. I wonder if it would be useful to move that step of determining the topic out to a hook, so that project-specific logic could be applied as part of submitting a patch? Doug From fungi at yuggoth.org Thu Jul 5 17:34:59 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Jul 2018 17:34:59 +0000 Subject: Re: [OpenStack-Infra] What's the future for git-review? In-Reply-To: <1530811233-sup-9403@lrrr.local> References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> <947a638f-95d6-a5fd-8abe-8b9f329ab606@linuxfoundation.org> <20180705161316.pnjmxdlo2lmnrvpq@yuggoth.org> <1530811233-sup-9403@lrrr.local> Message-ID: <20180705173458.hfwjbjf3eiruhrcs@yuggoth.org> On 2018-07-05 13:23:57 -0400 (-0400), Doug Hellmann wrote: [...] > I wonder if it would be useful to move that step of determining the > topic out to a hook, so that project-specific logic could be applied > as part of submitting a patch? In the way of spitballing some alternatives, we could have it refuse to update the topic if it sees that the change already has a topic set in Gerrit (unless -t is used). 
That would address most of my gripes about the autotopic functionality. I find it especially annoying if I've stacked two changes in series for procedural reasons but want to maintain separate change topics for them in Gerrit. We could of course also make it possible to disable topic inference with a configuration option. -- Jeremy Stanley From agrimberg at linuxfoundation.org Thu Jul 5 17:46:41 2018 From: agrimberg at linuxfoundation.org (Andrew Grimberg) Date: Thu, 5 Jul 2018 10:46:41 -0700 Subject: Re: [OpenStack-Infra] What's the future for git-review? In-Reply-To: <20180705161316.pnjmxdlo2lmnrvpq@yuggoth.org> References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> <947a638f-95d6-a5fd-8abe-8b9f329ab606@linuxfoundation.org> <20180705161316.pnjmxdlo2lmnrvpq@yuggoth.org> Message-ID: <2b4eaa71-9922-a506-826f-1c0822f52b15@linuxfoundation.org> On 07/05/2018 09:13 AM, Jeremy Stanley wrote: > On 2018-07-05 09:03:44 -0700 (-0700), Andrew Grimberg wrote: >> On 07/04/2018 06:57 PM, Jeremy Stanley wrote: > [...] >>> For that matter, setting the topic based on the local >>> branch name could also get tossed while we're at it, and just keep >>> the -t option for directly specifying a change topic when people >>> really want to do it at time of upload. >> >> Personally I would find this a regression. We inform our >> communities to use local branches and git-review all the time and >> tell them it will take care of setting the topic as long as they >> do that. It's an extremely useful feature and I rely upon it >> daily! I would hate to have to add an extra flag to my review >> pushes. > > Very helpful feedback, thanks! I'm on the fence about that one > simply because the only reason git-review cared to set review topics > at all originally was that at the time Gerrit only allowed you to do > that when pushing a new commit. They've since separated topic > modification out into its own action which can be done from the > WebUI or API on an existing change without altering anything else > about it. I do find the topic-branch-sets-change-topic behavior sort > of unclean from an idempotency standpoint, as `git-review -d` > followed by `git review` will alter the topic of your existing > change to be the change index number when I'd rather it just left > the topic alone. Perhaps it shouldn't try setting / resetting the topic if the local branch is refs/review// ? That could definitely be cleaned up and is a very minor frustration to me, but is very rarely hit (that I'm aware of) in our communities. > My bigger concern is that git-review attempts to autodetect possible > topic names based on (at this point increasingly outmoded) > OpenStack-community-specific commit message footer contents like > Implements and Closes-Bug. These I see as a nuisance and > codification of OpenStackisms we should cleanse from the codebase. Oh, interesting, I didn't know that it tried to do that. We don't have those footer semantics in any of our projects at present so it's never been something that comes up. -Andy- -- Andrew J Grimberg Lead, IT Release Engineering The Linux Foundation From corvus at inaugust.com Thu Jul 5 17:54:32 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Thu, 05 Jul 2018 10:54:32 -0700 Subject: [OpenStack-Infra] zuulv3 feedback for 3pci In-Reply-To: <20180705172107.GA32552@localhost.localdomain> (Paul Belanger's message of "Thu, 5 Jul 2018 13:21:07 -0400") References: <20180705172107.GA32552@localhost.localdomain> Message-ID: <871schsh47.fsf@meyer.lemoncheese.net> Paul Belanger writes: > Greetings, > > Over the last few weeks I've been helping the RDO project migrate away from > zuulv2 (jenkins) to zuulv3. Today all jobs have been migrated with the help of > the zuul-migrate script. We'll start deleting jenkins bits in the next few days. > > I wanted to get down some things I've noticed in the process as feedback to > thirdparty CI operators. Hopefully this will help others. Thanks! > Need for use-cached-repos > ------------------------- > > Today, use-cached-repos is only available to openstack-infra/project-config, we > should promote this into zuul-jobs to help reduce the amount of pressure on > zuul-executors when jobs start. In the case of 3pci, prepare-workspace role > isn't up to the task to sync everything at once. > > The feedback here, is to some how allow the base job to be smart enough to work > if a project is found in /opt/git or not. Today we have 2 different images in > rdo, 1 has the cache of upstream git.o.o and other doesn't. I agree. I think we've talked about the possibility of merging the use-cached-repos functionality into prepare-workspace, so that it works in all cases. I think it should be possible and would be a good improvement. > Namespace projects with fqdn > ---------------------------- > > This one is likely unique to rdoproject, but because we have 2 connection to > different gerrit systems, review.rdoproject.org and git.openstack.org, we > actually have duplicate project names. For example: > > openstack/tripleo-common > > which means, for zuul we have to write projects as: > > project: > name: git.openstack.org/openstack/tripleo-common > > project: > name: review.openstack.org/openstack/tripleo-common > > There are legacy reasons for this, and we plan on cleaning review.r.o, however > because of this duplication we cannot use upstream jobs right now. My initial > thought would be to update jobs, in this case devstack to use the following for > required-projects: > > required-projects: > - git.openstack.org/openstack-dev/devstack > - git.openstack.org/openstack/tripleo-common > > and propose the patch upstream. Again, this is likely specific to rdoproject, > but something right now that blocks them on loading jobs from zuul.o.o. Oh, interesting. I think we may have missed this subtlety when thinking about this use case. I agree that's the best solution for now. > I do have some other suggestions, but they are more specific to zuul. I could > post them here as a follow up or on zuul ML. > > I am happy I was able to help in the original migration of the openstack > projects from jenkins to zuulv3, it did help a lot when I was debugging zuul > failures. But over all rdo project didn't have any major issues with job content. Thanks for the current (and upcoming) feedback. I think RDO is in a particularly good place to exercise the upstream/downstream sharing of job content; I'm looking forward to more! -Jim From doug at doughellmann.com Thu Jul 5 19:39:50 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 05 Jul 2018 15:39:50 -0400 Subject: [OpenStack-Infra] What's the future for git-review? 
In-Reply-To: <20180705173458.hfwjbjf3eiruhrcs@yuggoth.org> References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> <947a638f-95d6-a5fd-8abe-8b9f329ab606@linuxfoundation.org> <20180705161316.pnjmxdlo2lmnrvpq@yuggoth.org> <1530811233-sup-9403@lrrr.local> <20180705173458.hfwjbjf3eiruhrcs@yuggoth.org> Message-ID: <1530819558-sup-8971@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-07-05 17:34:59 +0000: > On 2018-07-05 13:23:57 -0400 (-0400), Doug Hellmann wrote: > [...] > > I wonder if it would be useful to move that step of determining the > > topic out to a hook, so that project-specific logic could be applied > > as part of submitting a patch? > > In the way of spitballing some alternatives, we could have it refuse > to update the topic if it sees that the change already has a topic > set in Gerrit (unless -t is used). That would address most of my > gripes about the autotopic functionality. I find it especially > annoying if I've stacked two changes in series for procedural > reasons but want to maintain separate change topics for them in > Gerrit. > > We could of course also make it possible to disable topic inference > with a configuration option. Both of those ideas seem reasonable. From honjo.rikimaru at po.ntt-tx.co.jp Fri Jul 6 08:49:01 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Fri, 6 Jul 2018 17:49:01 +0900 Subject: [OpenStack-Infra] How do I add a third party CI by ZuulV3? Message-ID: <3eabf669-eb53-ef0b-d0e8-0c57cd2b28ac@po.ntt-tx.co.jp> Hello, I'd like to add a third party CI to the networking-spp project.[1] But I have some questions about it. I'd appreciate it if you could give me some information. My wishes are the following: * I'd like to run my test in my own environment, because my test requires a special environment. * I'm planning to check new patch-sets and run my test with ZuulV3. So I built ZuulV3 and nodepool in my environment, and pushed a .zuul.yaml to gerrit.[2] But the following error was returned. Should I add the settings of my third party CI to project-config in this case? If it is "Yes", are there documents about the way? I confirmed , but there was no information for ZuulV3. > Zuul encountered a syntax error while parsing its configuration in the > repo openstack/networking-spp on branch master. The error was: > > Pipelines may not be defined in untrusted repos, they may only be > defined in config repos. [1] https://github.com/openstack/networking-spp [2] https://review.openstack.org/#/c/580561/1 Best regards, -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From cboylan at sapwetik.org Fri Jul 6 16:11:07 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Jul 2018 09:11:07 -0700 Subject: Re: [OpenStack-Infra] How do I add a third party CI by ZuulV3? In-Reply-To: <3eabf669-eb53-ef0b-d0e8-0c57cd2b28ac@po.ntt-tx.co.jp> References: <3eabf669-eb53-ef0b-d0e8-0c57cd2b28ac@po.ntt-tx.co.jp> Message-ID: <1530893467.2987802.1432251312.2722314F@webmail.messagingengine.com> On Fri, Jul 6, 2018, at 1:49 AM, Rikimaru Honjo wrote: > Hello, > > I'd like to add a third party CI to the networking-spp project.[1] > But I have some questions about it. > I'd appreciate it if you could give me some information. > > My wishes are the following: > > * I'd like to run my test in my own environment, > because my test requires a special environment. > * I'm planning to check new patch-sets and run my test with ZuulV3. 
> > So I built ZuulV3 and nodepool in my environment, and pushed a .zuul.yaml > to gerrit.[2] > > But the following error was returned. > Should I add the settings of my third party CI to project-config in this case? > If it is "Yes", are there documents about the way? > > I confirmed , > but there was no information for ZuulV3. > > > Zuul encountered a syntax error while parsing its configuration in the > > repo openstack/networking-spp on branch master. The error was: > > > > Pipelines may not be defined in untrusted repos, they may only be > > defined in config repos. > > > [1] > https://github.com/openstack/networking-spp > > [2] > https://review.openstack.org/#/c/580561/1 The Zuul config in the projects that OpenStack Infra hosts applies to the OpenStack Zuul instance. Certain aspects of this config must be defined in a trusted repo to protect this instance from unintended (or even malicious) updates in the repos we host. The error you ran into is a case of this. In particular, pipelines define when and how zuul should run jobs, so we don't want anyone to be able to update that without review in the central trusted config. As for how to do this for third party CI, your Zuul would need to have its own trusted config (for the same reasons as above, but protecting your Zuul instance, not ours). That config will have pipelines defined. If the project is comfortable with it, you can define the jobs, playbooks, and roles for third party CI in the upstream project. Then you would select those jobs to run in your Zuul's local config and report the results back to Gerrit from there. Or, if the upstream project wants to keep that data out of tree, you can configure all of it in your Zuul config locally. One drawback to hosting the job config upstream would be that changes to the job config can be made without gating them and ensuring that they work (because third party CI can only vote +/-1). This problem is likely less of an issue if reviewers respect the third party CI results. I think to start I would mostly keep what you've done, but move the pipeline definitions and project config that says what jobs to run into your Zuul's config. Hope this helps, Clark From corvus at inaugust.com Fri Jul 6 16:36:01 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 06 Jul 2018 09:36:01 -0700 Subject: Re: [OpenStack-Infra] How do I add a third party CI by ZuulV3? In-Reply-To: <1530893467.2987802.1432251312.2722314F@webmail.messagingengine.com> (Clark Boylan's message of "Fri, 06 Jul 2018 09:11:07 -0700") References: <3eabf669-eb53-ef0b-d0e8-0c57cd2b28ac@po.ntt-tx.co.jp> <1530893467.2987802.1432251312.2722314F@webmail.messagingengine.com> Message-ID: <87muv4nwy6.fsf@meyer.lemoncheese.net> We could consider hosting a config-project with pipeline definitions for third-party CI as an optional service folks could use. It would not, however, be able to support customized reporting messages or recheck syntax. -Jim
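For illustration, a rough sketch of the kind of pipeline definition a third-party operator would keep in their own trusted config-project; the pipeline name, connection name, and trigger event here are assumptions that vary per deployment, while the Verified +/-1 votes mirror the third-party limit mentioned above:

  - pipeline:
      name: third-party-check
      manager: independent
      trigger:
        gerrit:                      # the operator's gerrit connection name
          - event: patchset-created
      success:
        gerrit:
          Verified: 1
      failure:
        gerrit:
          Verified: -1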
From cboylan at sapwetik.org Fri Jul 6 17:31:28 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Jul 2018 10:31:28 -0700 Subject: Re: [OpenStack-Infra] Followup on the future of Infra config management specs In-Reply-To: <1525816852.1695493.1365405672.640E0FF1@webmail.messagingengine.com> References: <1525816852.1695493.1365405672.640E0FF1@webmail.messagingengine.com> Message-ID: <1530898288.3007764.1432339032.74FA3D19@webmail.messagingengine.com> On Tue, May 8, 2018, at 3:00 PM, Clark Boylan wrote: > Hello everyone, > > Last week we got all three of the promised potential future config > management system specs pushed to Gerrit. > > They can be found here: > * https://review.openstack.org/449933 Puppet 4 Infra > * https://review.openstack.org/469983 Ansible Infra > * https://review.openstack.org/565550 Containerized Infra > > A good chunk of us appear to have reviewed them at this point. During > today's Infra meeting I asked for some initial thoughts and the > direction people thought they saw us going in. > > The general mood seems to be using a system that decouples applications > from their host platforms (containers as packaging essentially) and > config management to build the base platform(s) that doesn't require > every server have specific versions of specific tools (Ansible) would be > a helpful long term goal. That said any transition will take time and > the puppet upgrade is long over due. > > With all of this considered the rough plan that I propose is: "life > support puppet4 short/medium term, transition to ansible base + > container application "packaging" longer term, eventually having zuul do > deployments (but this last bit should be its own spec and is out of > scope of current effort)". > > I think this gives us a good short term option that should be doable > (upgrade puppetry to puppet 4). Then we can transition in the goodness > of not tightly coupling our config management tooling and applications > themselves to the platforms we run. Monty has volunteered to do the > combining of the specs to reflect what this more concrete plan would > look like. > > I know not everyone can attend the meetings so wanted to make sure > everyone saw this and hence this thread. Please provide feedback if you > feel strongly about this plan (think it is terrible or think it is > great, info is useful in both cases). Monty has put together a spec for this at https://review.openstack.org/565550. A couple of us have reviewed it and I think it is really close to being ready. In this week's meeting I suggested that we target the week of July 17 for approving this if we can get reviewers to look it over and help refine it further (as necessary). Then we get almost two months to work on initial tasks before meeting at the PTG to focus on the bits we've found could use face to face time. All that to say, please review this spec. I'm hopeful we can approve it with a reasonable amount of consensus the week of July 17. Thank you, Clark From cboylan at sapwetik.org Fri Jul 6 17:40:56 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Jul 2018 10:40:56 -0700 Subject: [OpenStack-Infra] Brainstorming winterstack service branding Message-ID: <1530898856.3009376.1432344192.1CFF687B@webmail.messagingengine.com> As mentioned in this week's meeting, I offered to start a draft of what the branding sets might look like for winterstack services, specifically which would not be whitelabeled and which we'd expect to explicitly whitelabel for projects. 
I've got a really early (and rough) list going at https://etherpad.openstack.org/p/winterscale-service-branding along with some simple criteria I used for putting things in one bucket or another. Feel free to add your thoughts there or move things around. Once we get a list we are reasonably happy with, I think we can share it more broadly as an indicator for what projects should be able to rely on once we get winterstack moving. Clark From cboylan at sapwetik.org Fri Jul 6 19:15:41 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Jul 2018 12:15:41 -0700 Subject: Re: [OpenStack-Infra] Brainstorming winterstack service branding In-Reply-To: <1530898856.3009376.1432344192.1CFF687B@webmail.messagingengine.com> References: <1530898856.3009376.1432344192.1CFF687B@webmail.messagingengine.com> Message-ID: <1530904541.4059809.1432436192.5AD57914@webmail.messagingengine.com> On Fri, Jul 6, 2018, at 10:40 AM, Clark Boylan wrote: > As mentioned in this week's meeting, I offered to start a draft of what > the branding sets might look like for winterstack services, specifically > which would not be whitelabeled and which we'd expect to explicitly > whitelabel for projects. I've got a really early (and rough) list going > at https://etherpad.openstack.org/p/winterscale-service-branding along > with some simple criteria I used for putting things in one bucket or > another. > > Feel free to add your thoughts there or move things around. Once we get > a list we are reasonably happy with, I think we can share it more broadly > as an indicator for what projects should be able to rely on once we get > winterstack moving. > As noted on IRC, it should be "winterscale" not "winterstack". I got it right on the etherpad, but then my typing drivers failed and got it wrong when writing this email. Sorry for the confusion, Clark From daragh.bailey at gmail.com Fri Jul 6 22:54:37 2018 From: daragh.bailey at gmail.com (Darragh Bailey) Date: Fri, 6 Jul 2018 23:54:37 +0100 Subject: Re: [OpenStack-Infra] What's the future for git-review? In-Reply-To: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> Message-ID: Perhaps sending that email right before taking a few days off, meaning I couldn't reply straight away, wasn't the most helpful ;-) On Thu, 5 Jul 2018, 02:57 Jeremy Stanley, wrote: > On 2018-07-04 22:32:53 +0100 (+0100), Darragh Bailey wrote: > [...] > > Based on the comments at > > > https://git.openstack.org/cgit/openstack-infra/git-review/tree/CONTRIBUTING.rst#n5 > , > > git-review is considered feature complete, and as a consequence it > > seems that reviewers have mostly moved on to other projects, so it > > can take quite some time to get reviews. Perfectly understandable, > > everyone can only do so much and needs to pick something(s) to > > prioritise. However, this is such a useful tool for working with > > Gerrit from the command line, and seems to be the de facto git > > subcommand for interfacing with Gerrit, that it seems a shame to > > limit it. > > At one time git-review had started to suffer from scope creep and > was getting more random proposals for new features than actual > stability improvements. Its test coverage was, for quite a long > while, also sub-par so that some of the feature additions which did > get accepted introduced regressions that went unnoticed sometimes > for weeks or months before we'd discover they needed reverting. 
The > original authors intended for git-review to be primarily focused on > bootstrapping Gerrit connectivity from a cloned Git repository as > well as simplifying the basic Git commands for retrieving changes > from and pushing changes to a Gerrit. That update to the > CONTRIBUTING.rst file was intended to put the brakes on future scope > creep, especially in cases where an added feature would work just as > well as its own separate git subcommand. You've made one suggestion below along those lines, applying to the list behaviour. It made me think that an etherpad listing the current outstanding or abandoned changes and discussing which would be better off outside of git-review would be a useful starting place. > While I think there are a number of current reviews that would be > > beneficial to git-review, as well as some pieces that don't appear > > to be there currently, I'm reluctant to invest much time as it > > seems unlikely enhancements would be accepted given the current > > state of feature complete. Instead of putting together various > > changes to see if they might be reviewed and accepted, I'm hoping a > > chat about what paths might be available could save a bit of time. > > I've tried to go in and approve changes from time to time, but in > all honesty the negativity I've received in the past when attempting > to push back on feature additions has caused me to deprioritize > reviewing more changes for it. I should probably just buck up and go > in with a (very polite) machete anyway. I think it's all due to needing to prioritise time spent, and git-review has mostly done what's needed. > There are a couple of things that I would like to work towards: > > > > * Change the tests to use a single gerrit with separate projects > > instead of separate instances (faster testing) > > This seems reasonable if it doesn't introduce new races or odd test > interdependencies from the reduced fixture isolation. I really have > never been fond of the integration-testing-only model we ended up > with though. I originally recommended lower-level unit testing with > mocks for the Git and SSH interactions, but the one volunteer we got > to implement a testsuite chose to automate Gerrit installation so it > is what it is at the moment. More mocks around ssh/http should work well, but I've found that it's not necessarily beneficial doing the same around git, as with some simple fixtures it can be tested very quickly. The existing tests are still useful, and changing to a single gerrit instance per runner would require some thought on the logging side (adding markers should be sufficient, I think), but that's probably the main issue. > * Allow the tests to run against multiple versions of Gerrit (ensure > > compatibility) > > This seems reasonable. We should have been bumping the Gerrit > versions in the tests and/or running more jobs for different > releases of it but the way version selection was implemented would > need a bit of an overhaul to accommodate that. I'm also thinking compatibility across a few releases would be good as well. > * Fix and land many of the changes making it easier to download > > changes, list changes ordered with their dependencies, stashing > > when downloading, etc > > The change listing feature really seems increasingly out of place to > me, and most of the "fixes" I saw related to it were about > supporting more and more of Gerrit's query language and terminal > formatting. 
If we deprecated/removed that and recommended > interacting directly with Gerrit or alternative utilities for change > searches (there are a lot more options for this than there were back > when git-review was first written) all of those would become > unnecessary and the code would be simplified at the same time. That's interesting; I'd consider the ability to query what is available for review a step before downloading a change for review, and understanding that it might bring down multiple changes that aren't merged is useful. If this was moved out of git-review, I suspect it might still need to know a bit about git-review and be able to use some of its configuration. > * Have git-review auto configure refs/notes/review (assuming it's > > available) for fetching on setup (I find it very handy and I'm > > always forgetting to do this) > > I could see this being in scope, as it fits with the Gerrit > connectivity bootstrapping mission. I too find the notes refs handy > but have a global configuration in my ~/.gitconfig which seems to do > the trick already so I'm curious to find out how git-review might > improve on that. I didn't think that this could be done for the 'origin' remote and would be ignored by git for other projects where it doesn't exist? Or are you using the 'gerrit' remote? But the main advantage is that it opens this up to many people who might not have been aware it exists. It also opens up asking the user whether they'd like the review notes displayed along with the log by default for the repo by setting core.notesRef. > And potentially controversially; support other workflows and > > options outside of the OpenStack workflow. Although maybe not > > directly, and still keeping the OpenStack one as the default. > > I'd love to know what about git-review is focused on OpenStack's > workflow. We tried to make it as generic as possible. If there are > any OpenStack-specific features still lingering in there, we should > see about ripping them out as soon as is feasible. One that I'm > aware of is the default topic mangling based on commit message > parsing, which I've been wanting to eradicate for a while since > Gerrit now makes altering topics possible without needing to push a > new commit. For that matter, setting the topic based on the local > branch name could also get tossed while we're at it, and just keep > the -t option for directly specifying a change topic when people > really want to do it at time of upload. There may be others; I recall this coming up before around being able to set review scores for labels at the same time as uploading the change. I think it was around the 'Workflow' label. Maybe this is a case for a different command, but it seems likely to break the flow for anyone using git-review to submit. However, the labels for each project are customizable, so it seems likely this would need the correct set to be worked out at setup time if included. > I think there are a couple of ways that could be achieved, but I > can't see any of them working well without a decent amount of > refactoring. 
> > * Have git-review provide the APIs so that someone may define a git-review- that can add their workflow
> >
> > * Add support for additional behaviour to be defined with refs/meta/config of projects
> >
> > * Allow extensions to be installed that allow additional options to be added to the git-review CLI and config file
> >
> > That last one might require being able to specify the additional required plugins to be listed in .gitreview, and providing the documentation might be trickier?
> >
> > Basically make it easier to add custom behaviour without it being builtin to git-review, and without needing to reimplement a whole load of functionality elsewhere. But I'm pretty sure that all requires a substantial rewrite.

> I'd need some concrete use case examples. From my perspective, git-review is already a plugin (by way of being a git subcommand) so adding plugins to the plugin seems like a layer violation. The examples I've seen in the past for adding new behaviors were things which made more sense to me as new git subcommands. For a counterexample, James Blair created git-restack not too long ago... it could have been implemented as a git-review option, but was sanely made to be its own distinct git subcommand instead.

Maybe the list one is a candidate to live outside of git-review, though I suspect it'd still want to be connected to it in some way.

> > Thoughts? Is it worth putting a plan together around some of the initial changes? And then revisiting what would be needed to allow extensions around other workflows?

> I'm all for plans to improve git-review's stability, test coverage and, most of all, simplicity. Thanks for raising the topic!
> --
> Jeremy Stanley

Typing this all on a mobile so I might have missed a few things; I will follow up after the weekend.

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool" - unknown
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org Fri Jul 6 23:34:59 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 6 Jul 2018 23:34:59 +0000
Subject: [OpenStack-Infra] What's the future for git-review?
In-Reply-To:
References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org>
Message-ID: <20180706233459.whw32ovkx24du3sq@yuggoth.org>

On 2018-07-06 23:54:37 +0100 (+0100), Darragh Bailey wrote:
> On Thu, 5 Jul 2018, 02:57 Jeremy Stanley, wrote:
[...]
> > The change listing feature really seems increasingly out of place to me, and most of the "fixes" I saw related to it were about supporting more and more of Gerrit's query language and terminal formatting. If we deprecated/removed that and recommended interacting directly with Gerrit or alternative utilities for change searches (there are a lot more options for this than there were back when git-review was first written) all of those would become unnecessary and the code would be simplified at the same time.

> That's interesting. I'd consider the ability to query for what is available for review a step before downloading a change for review, and understanding that it might bring multiple changes down that aren't merged is useful.
>
> If this was moved out of git-review, I suspect it might still need to know a bit about git-review and be able to use some of its configuration.
[...]

I guess it comes down to whether we think "finding changes in Gerrit" is really within scope.
For me it hasn't been for a very long time, as there are other tools far better at this. If I use git-review to retrieve a change I pretty much already know the change number (either because I had it pulled up in gertty or the Gerrit WebUI, or saw it mentioned by an IRC bot or in an E-mail update notification, or because someone directly asked me about it).

> > I too find the notes refs handy but have a global configuration in my ~/.gitconfig which seems to do the trick already so I'm curious to find out how git-review might improve on that.

> I didn't think that this could be done for the 'origin' remote and would be ignored by git for other projects where it doesn't exist? Or are you using the 'gerrit' remote?

Yes, I do it globally for all remotes named "origin". At least the git versions I use and the non-Gerrit origins with which I interact don't seem to get confused when there's no refs/notes/review in a repo. And for Gerrit-hosted repos, Gerrit replication includes notes refs even if I'm cloning from the replica rather than directly from Gerrit. If there are corner cases this breaks, I'd appreciate knowing about them so I can help with a better implementation, but so far it's worked out great for me.

> The main advantage, though, is that it opens this up to many people that might not have been aware it exists. It also opens up asking the user if they'd like the review notes displayed along with the logs by default for this repo by setting core.notesRef.
[...]

Yes, this is one of the reasons I consider it to still be in scope. It's a bootstrapping situation.

> > I'd love to know what about git-review is focused on OpenStack's workflow. We tried to make it as generic as possible. If there are any OpenStack-specific features still lingering in there, we should see about ripping them out as soon as is feasible.
[...]

> There may be others; I recall this coming up before around being able to set review scores for labels at the same time as uploading the change. I think it was around the 'Workflow' label. Maybe this is a case for a different command, but it seems likely to break the flow for anyone using git-review to submit. However, the labels for each project are customizable, so it seems likely this would need the correct set to be worked out at setup time if included.
[...]

Some of that situation hails from the dark ages when the OpenStack community ran a fork of Gerrit with a "work in progress" feature we were unable to get upstreamed. There was pressure to get git-review to support it (a -w flag landed in the codebase at some point for this) but that was a very clear OpenStackism, and even long before the OpenStack community switched back to mainline Gerrit and started using a "Workflow" label to indicate work in progress status (among other things), the short-lived -w option was ripped out of git-review in a mass revert to put the brakes on the runaway train it was becoming:

https://review.openstack.org/13890

As mentioned in the more recent review(s), setting arbitrary labels at upload might be acceptable if we can come up with a clean solution for that, but encoding OpenStack community assumptions like "Workflow=-1 means work-in-progress" must be avoided. The problem is that each deployment's (even each repository's/ACL's) use of non-default labels would need some semantic policy description language in the .gitreview file, unless we really do want it to just be an insert-arbitrary-label-votes-here sort of thing.
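Returning to the notes refs for a moment: the global setup I described above amounts to roughly the following (from memory, so treat it as a sketch rather than the exact stanza; it assumes remotes named "origin"):

    git config --global --add remote.origin.fetch '+refs/notes/*:refs/notes/*'
    git config --global core.notesRef refs/notes/review

git-review doing the per-repository equivalent at setup time would mostly be a discoverability win, as noted above.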
An interesting aside: latest versions of Gerrit implement a "work in progress" feature to replace the terrible "draft" functionality, so the OpenStack community's use of Workflow=-1 to signal that may cease to be relevant soon after the next review.openstack.org upgrade.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From honjo.rikimaru at po.ntt-tx.co.jp Mon Jul 9 10:31:16 2018
From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo)
Date: Mon, 9 Jul 2018 19:31:16 +0900
Subject: [OpenStack-Infra] How do I add a third party CI by ZuulV3?
In-Reply-To: <1530893467.2987802.1432251312.2722314F@webmail.messagingengine.com>
References: <3eabf669-eb53-ef0b-d0e8-0c57cd2b28ac@po.ntt-tx.co.jp> <1530893467.2987802.1432251312.2722314F@webmail.messagingengine.com>
Message-ID: <713ab47c-fa1b-c57f-8b47-3d6ceab76df7@po.ntt-tx.co.jp>

Hello Clark,

Thank you for your information. But, sorry, I have some additional questions.

On 2018/07/07 1:11, Clark Boylan wrote:
> On Fri, Jul 6, 2018, at 1:49 AM, Rikimaru Honjo wrote:
>> Hello,
>>
>> I'd like to add a third party CI for the networking-spp project.[1] But I have some questions about it. I'd appreciate it if you could give me some information.
>>
>> My wishes are the following:
>>
>> * I'd like to run my tests in my own environment, because they require a special environment.
>> * I'm planning to check new patch-sets and run my tests with Zuul v3.
>>
>> So I built Zuul v3 and nodepool in my environment, and pushed .zuul.yaml to gerrit.[2]
>>
>> But the following error was returned. Should I add the settings of my third party CI to project-config in this case? If it is "Yes", is there documentation about the way?
>>
>> I confirmed , but there was no information for Zuul v3.
>>
>>> Zuul encountered a syntax error while parsing its configuration in the repo openstack/networking-spp on branch master. The error was:
>>>
>>> Pipelines may not be defined in untrusted repos, they may only be defined in config repos.
>>
>> [1] https://github.com/openstack/networking-spp
>> [2] https://review.openstack.org/#/c/580561/1
>
> The Zuul config in the projects that OpenStack Infra hosts applies to the OpenStack Zuul instance. Certain aspects of this config must be defined in a trusted repo to protect this instance from unintended (or even malicious) updates in the repos we host. The error you ran into is a case of this.
>
> In particular, pipelines define when and how zuul should run jobs, so we don't want anyone to be able to update that without review in central trusted config.
>
> As for how to do this for third party CI, your Zuul would need to have its own trusted config (for the same reasons as above, but protecting your Zuul instance, not ours). That config will have pipelines defined. If the project is comfortable with it, you can define the jobs and playbooks and roles for third party CI in the upstream project. Then you would select to run those jobs in your Zuul's local config and report the results back to Gerrit from there.

In this case, should I add a part of my settings to openstack-infra/project-config?

> Or if the upstream project wants to keep that data out of tree you can configure all of it in your Zuul config locally. One drawback to hosting the job config upstream would be that changes to the job config can be made without gating them and ensuring that they work (because third party CI can only vote +/-1).
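To check my understanding, such a local trusted config would contain something like the following; the connection, pipeline and job names here are only examples and depend on the deployment (Clark's caveat about ungated upstream job config continues below):

    - pipeline:
        name: third-party-check
        manager: independent
        trigger:
          gerrit:
            - event: patchset-created
        success:
          gerrit:
            Verified: 1
        failure:
          gerrit:
            Verified: -1

    - project:
        name: openstack/networking-spp
        third-party-check:
          jobs:
            - networking-spp-functional  # the job itself could be defined upstream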
> This problem is likely less of an issue if reviewers respect the third party CI results.

In this case, should I put the config-projects in a local environment and report the test results to review.openstack.org? But, in my understanding, the "config-projects" in tenant.yaml should be put under the source of the connection which is the target of reporting. If my understanding is correct, I think the config-projects cannot be prepared locally.

> I think to start I would mostly keep what you've done, but move the pipeline definitions and project config that says what jobs to run into your Zuul's config.
>
> Hope this helps,
> Clark
>
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikimaru at po.ntt-tx.co.jp

From mordred at inaugust.com Tue Jul 10 20:14:31 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 10 Jul 2018 15:14:31 -0500
Subject: [OpenStack-Infra] What's the future for git-review?
In-Reply-To: <2b4eaa71-9922-a506-826f-1c0822f52b15@linuxfoundation.org>
References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> <947a638f-95d6-a5fd-8abe-8b9f329ab606@linuxfoundation.org> <20180705161316.pnjmxdlo2lmnrvpq@yuggoth.org> <2b4eaa71-9922-a506-826f-1c0822f52b15@linuxfoundation.org>
Message-ID:

On 07/05/2018 12:46 PM, Andrew Grimberg wrote:
> On 07/05/2018 09:13 AM, Jeremy Stanley wrote:
>> On 2018-07-05 09:03:44 -0700 (-0700), Andrew Grimberg wrote:
>>> On 07/04/2018 06:57 PM, Jeremy Stanley wrote:
>> [...]
>>>> For that matter, setting the topic based on the local branch name could also get tossed while we're at it, and just keep the -t option for directly specifying a change topic when people really want to do it at time of upload.
>>>
>>> Personally I would find this a regression. We inform our communities to use local branches and git-review all the time and tell them it will take care of setting the topic as long as they do that. It's an extremely useful feature and I rely upon it daily! I would hate to have to add an extra flag to my review pushes.
>>
>> Very helpful feedback, thanks! I'm on the fence about that one simply because the only reason git-review cared to set review topics at all originally was that at the time Gerrit only allowed you to do that when pushing a new commit. They've since separated topic modification out into its own action which can be done from the WebUI or API on an existing change without altering anything else about it. I do find the topic-branch-sets-change-topic behavior sort of unclean from an idempotency standpoint, as `git-review -d` followed by `git review` will alter the topic of your existing change to be the change index number when I'd rather it just left the topic alone.
>
> Perhaps it shouldn't try setting / resetting the topic if the local branch is refs/review// ? That could definitely be cleaned up and is a very minor frustration to me, but is very rarely hit (that I'm aware of) in our communities.

I hit this all the time - largely because I'm lazy - and it drives me batty. I would definitely be in favor of detecting that we're in refs/review// and not auto-setting a topic in that case.
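Something like the following is the kind of check I mean; a sketch only, assuming the review/<owner>/<topic> local branch naming that `git review -d` creates (the real change would of course live in git-review's own code):

    branch=$(git rev-parse --abbrev-ref HEAD)
    case $branch in
        review/*) ;;           # a downloaded change: leave its topic alone
        *) topic=$branch ;;    # otherwise infer the topic from the branch name
    esac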
>> My bigger concern is that git-review attempts to autodetect possible topic names based on (at this point increasingly outmoded) OpenStack-community-specific commit message footer contents like Implements and Closes-Bug. These I see as a nuisance and a codification of OpenStackisms we should cleanse from the codebase.
>
> Oh, interesting, I didn't know that it tried to do that. We don't have those footer semantics in any of our projects at present so it's never been something that comes up.
>
> -Andy-
>
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

From mordred at inaugust.com Tue Jul 10 20:15:19 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 10 Jul 2018 15:15:19 -0500
Subject: [OpenStack-Infra] What's the future for git-review?
In-Reply-To: <1530819558-sup-8971@lrrr.local>
References: <20180705015717.6dwkrpzali5fkuuo@yuggoth.org> <947a638f-95d6-a5fd-8abe-8b9f329ab606@linuxfoundation.org> <20180705161316.pnjmxdlo2lmnrvpq@yuggoth.org> <1530811233-sup-9403@lrrr.local> <20180705173458.hfwjbjf3eiruhrcs@yuggoth.org> <1530819558-sup-8971@lrrr.local>
Message-ID: <4652f72b-9527-3817-beb4-600b470296a9@inaugust.com>

On 07/05/2018 02:39 PM, Doug Hellmann wrote:
> Excerpts from Jeremy Stanley's message of 2018-07-05 17:34:59 +0000:
>> On 2018-07-05 13:23:57 -0400 (-0400), Doug Hellmann wrote:
>> [...]
>>> I wonder if it would be useful to move that step of determining the topic out to a hook, so that project-specific logic could be applied as part of submitting a patch?
>>
>> In the way of spitballing some alternatives, we could have it refuse to update the topic if it sees that the change already has a topic set in Gerrit (unless -t is used). That would address most of my gripes about the autotopic functionality. I find it especially annoying if I've stacked two changes in series for procedural reasons but want to maintain separate change topics for them in Gerrit.
>>
>> We could of course also make it possible to disable topic inference with a configuration option.
>
> Both of those ideas seem reasonable.

++

From cboylan at sapwetik.org Wed Jul 11 15:25:22 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 11 Jul 2018 08:25:22 -0700
Subject: [OpenStack-Infra] How do I add a third party CI by ZuulV3?
In-Reply-To: <713ab47c-fa1b-c57f-8b47-3d6ceab76df7@po.ntt-tx.co.jp>
References: <3eabf669-eb53-ef0b-d0e8-0c57cd2b28ac@po.ntt-tx.co.jp> <1530893467.2987802.1432251312.2722314F@webmail.messagingengine.com> <713ab47c-fa1b-c57f-8b47-3d6ceab76df7@po.ntt-tx.co.jp>
Message-ID: <1531322722.3819768.1437398216.16E8FB31@webmail.messagingengine.com>

On Mon, Jul 9, 2018, at 3:31 AM, Rikimaru Honjo wrote:
> Hello Clark,
>
> Thank you for your information. But, sorry, I have some additional questions.
>
> On 2018/07/07 1:11, Clark Boylan wrote:
> > On Fri, Jul 6, 2018, at 1:49 AM, Rikimaru Honjo wrote:
> >> Hello,
> >>
> >> I'd like to add a third party CI for the networking-spp project.[1] But I have some questions about it. I'd appreciate it if you could give me some information.
> >>
> >> My wishes are the following:
> >>
> >> * I'd like to run my tests in my own environment, because they require a special environment.
> >> * I'm planning to check new patch-sets and run my tests with Zuul v3.
> >> So I built Zuul v3 and nodepool in my environment, and pushed .zuul.yaml to gerrit.[2]
> >>
> >> But the following error was returned. Should I add the settings of my third party CI to project-config in this case? If it is "Yes", is there documentation about the way?
> >>
> >> I confirmed , but there was no information for Zuul v3.
> >>
> >>> Zuul encountered a syntax error while parsing its configuration in the repo openstack/networking-spp on branch master. The error was:
> >>>
> >>> Pipelines may not be defined in untrusted repos, they may only be defined in config repos.
> >>
> >> [1] https://github.com/openstack/networking-spp
> >> [2] https://review.openstack.org/#/c/580561/1
> >
> > The Zuul config in the projects that OpenStack Infra hosts applies to the OpenStack Zuul instance. Certain aspects of this config must be defined in a trusted repo to protect this instance from unintended (or even malicious) updates in the repos we host. The error you ran into is a case of this.
> >
> > In particular, pipelines define when and how zuul should run jobs, so we don't want anyone to be able to update that without review in central trusted config.
> >
> > As for how to do this for third party CI, your Zuul would need to have its own trusted config (for the same reasons as above, but protecting your Zuul instance, not ours). That config will have pipelines defined. If the project is comfortable with it, you can define the jobs and playbooks and roles for third party CI in the upstream project. Then you would select to run those jobs in your Zuul's local config and report the results back to Gerrit from there.
>
> In this case, should I add a part of my settings to openstack-infra/project-config?

No, you will need to host your own trusted repo.

> > Or if the upstream project wants to keep that data out of tree you can configure all of it in your Zuul config locally. One drawback to hosting the job config upstream would be that changes to the job config can be made without gating them and ensuring that they work (because third party CI can only vote +/-1). This problem is likely less of an issue if reviewers respect the third party CI results.
>
> In this case, should I put the config-projects in a local environment and report the test results to review.openstack.org? But, in my understanding, the "config-projects" in tenant.yaml should be put under the source of the connection which is the target of reporting. If my understanding is correct, I think the config-projects cannot be prepared locally.

It can be prepared locally using the git driver, https://zuul-ci.org/docs/zuul/admin/drivers/git.html . This is probably the simplest way to start; then you can consider hosting your config on a Gerrit or Github at a later time. Chances are, if this is a simple setup, the git driver will work long term.

> > I think to start I would mostly keep what you've done, but move the pipeline definitions and project config that says what jobs to run into your Zuul's config.

> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

From honjo.rikimaru at po.ntt-tx.co.jp Thu Jul 12 00:40:39 2018
From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo)
Date: Thu, 12 Jul 2018 09:40:39 +0900
Subject: [OpenStack-Infra] How do I add a third party CI by ZuulV3?
In-Reply-To: <1531322722.3819768.1437398216.16E8FB31@webmail.messagingengine.com>
References: <3eabf669-eb53-ef0b-d0e8-0c57cd2b28ac@po.ntt-tx.co.jp> <1530893467.2987802.1432251312.2722314F@webmail.messagingengine.com> <713ab47c-fa1b-c57f-8b47-3d6ceab76df7@po.ntt-tx.co.jp> <1531322722.3819768.1437398216.16E8FB31@webmail.messagingengine.com>
Message-ID: <7de230b9-9481-6647-c18e-207dfcb071a0@po.ntt-tx.co.jp>

Hello Clark,

On 2018/07/12 0:25, Clark Boylan wrote:
> On Mon, Jul 9, 2018, at 3:31 AM, Rikimaru Honjo wrote:
>> Hello Clark,
>>
>> Thank you for your information. But, sorry, I have some additional questions.
>>
>> On 2018/07/07 1:11, Clark Boylan wrote:
>>> On Fri, Jul 6, 2018, at 1:49 AM, Rikimaru Honjo wrote:
>>>> Hello,
>>>>
>>>> I'd like to add a third party CI for the networking-spp project.[1] But I have some questions about it. I'd appreciate it if you could give me some information.
>>>>
>>>> My wishes are the following:
>>>>
>>>> * I'd like to run my tests in my own environment, because they require a special environment.
>>>> * I'm planning to check new patch-sets and run my tests with Zuul v3.
>>>>
>>>> So I built Zuul v3 and nodepool in my environment, and pushed .zuul.yaml to gerrit.[2]
>>>>
>>>> But the following error was returned. Should I add the settings of my third party CI to project-config in this case? If it is "Yes", is there documentation about the way?
>>>>
>>>> I confirmed , but there was no information for Zuul v3.
>>>>
>>>>> Zuul encountered a syntax error while parsing its configuration in the repo openstack/networking-spp on branch master. The error was:
>>>>>
>>>>> Pipelines may not be defined in untrusted repos, they may only be defined in config repos.
>>>>
>>>> [1] https://github.com/openstack/networking-spp
>>>> [2] https://review.openstack.org/#/c/580561/1
>>>
>>> The Zuul config in the projects that OpenStack Infra hosts applies to the OpenStack Zuul instance. Certain aspects of this config must be defined in a trusted repo to protect this instance from unintended (or even malicious) updates in the repos we host. The error you ran into is a case of this.
>>>
>>> In particular, pipelines define when and how zuul should run jobs, so we don't want anyone to be able to update that without review in central trusted config.
>>>
>>> As for how to do this for third party CI, your Zuul would need to have its own trusted config (for the same reasons as above, but protecting your Zuul instance, not ours). That config will have pipelines defined. If the project is comfortable with it, you can define the jobs and playbooks and roles for third party CI in the upstream project. Then you would select to run those jobs in your Zuul's local config and report the results back to Gerrit from there.
>>
>> In this case, should I add a part of my settings to openstack-infra/project-config?
>
> No, you will need to host your own trusted repo.

I got it.

>>> Or if the upstream project wants to keep that data out of tree you can configure all of it in your Zuul config locally. One drawback to hosting the job config upstream would be that changes to the job config can be made without gating them and ensuring that they work (because third party CI can only vote +/-1). This problem is likely less of an issue if reviewers respect the third party CI results.
>>
>> In this case, should I put the config-projects in a local environment and report the test results to review.openstack.org?
>> But, in my understanding, the "config-projects" in tenant.yaml should be put under the source of the connection which is the target of reporting. If my understanding is correct, I think the config-projects cannot be prepared locally.
>
> It can be prepared locally using the git driver, https://zuul-ci.org/docs/zuul/admin/drivers/git.html . This is probably the simplest way to start; then you can consider hosting your config on a Gerrit or Github at a later time. Chances are, if this is a simple setup, the git driver will work long term.

Thank you for the suggestion. I will readjust my settings according to this policy.

>>> I think to start I would mostly keep what you've done, but move the pipeline definitions and project config that says what jobs to run into your Zuul's config.

>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Best regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikimaru at po.ntt-tx.co.jp

From tdecacqu at redhat.com Thu Jul 12 04:54:38 2018
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Thu, 12 Jul 2018 04:54:38 +0000
Subject: [OpenStack-Infra] [all] log-classify project update (anomaly detection in CI/CD logs)
In-Reply-To: <1530780669.k1udih7bo7.tristanC@fedora>
References: <1530601298.luby16yqut.tristanC@fedora> <1530780669.k1udih7bo7.tristanC@fedora>
Message-ID: <1531370791.r4nn3973qm.tristanC@fedora>

On July 5, 2018 9:17 am, Tristan Cacqueray wrote:
> On July 3, 2018 7:39 am, Tristan Cacqueray wrote:
> [...]
>> There is a lot to do and it will be challenging. To that effect, I would like to propose an initial meeting with all interested parties. Please register your irc name and timezone in this etherpad:
>>
>> https://etherpad.openstack.org/p/log-classify
>>
> So far, the mean timezone is UTC+1.75; I've added date proposals from the 16th to the 20th of July. Please add a '+' to the ones you can attend. I'll follow up next week with an ical file for the most popular.

Wednesday 18 July at 12:00 UTC has the most votes. There is now a #log-classify channel on Freenode. And I also started an infra-spec draft here:

https://review.openstack.org/#/c/581214/1/specs/log_classify.rst

See you then.
-Tristan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: log-classify.ics
Type: text/calendar
Size: 302 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From corvus at inaugust.com Mon Jul 16 22:27:10 2018
From: corvus at inaugust.com (James E. Blair)
Date: Mon, 16 Jul 2018 15:27:10 -0700
Subject: [OpenStack-Infra] Moving logs into swift (redux)
Message-ID: <87h8kyn7ep.fsf@meyer.lemoncheese.net>

Hi,

As you may know, all of the logs from Zuul builds are currently uploaded to a single static fileserver with about 14TB of storage available in one large filesystem. This was easy to set up, but scales poorly, and we live in constant fear of filesystem corruption necessitating a lengthy outage for repair or loss of data (an event which happens, on average, once or twice a year and takes several days to resolve).

Our most promising approaches to solving this involve moving log storage to swift.
We (mostly Joshua) have done considerable work in the past but kept hitting blockers. I think the situation has changed enough that the issues we hit before won't be a problem now. I believe we can use this work as a foundation to, relatively quickly, move our log storage into swift. Once there, there are a number of possibilities to improve the experience around logs and artifacts in Zuul and in general.

This email is going to focus mostly on how OpenStack Infra can move our current log storage and hosting to swift. I will follow it up with an email to the zuul-discuss list about further work that we can do that's more generally applicable to all Zuul users.

This email is the result of a number of previous discussions, especially with Monty, and many of the ideas here are his. It also draws very heavily on Joshua's previous work. Here's the general idea:

Pre-generate any content for which we currently rely on middleware running on logs.openstack.org. Then upload all of that to swift. Return a direct link to swift for serving the content.

In more detail:

In addition to using swift as the storage backend, we would also like to avoid running a server as an intermediary. This is one of the obstacles we hit last time. We started to make os-loganalyze (OSLA) a smart proxy which could serve files from disk and swift. It threatened to become very complicated and tax the patience of OSLA's reviewers. OSLA's primary author and reviewer isn't really around anymore, so I suspect the appetite for major changes to OSLA is even less than it may have been in the past (we have merged 2 changes this year so far).

There are three kinds of automatically generated content on logs.o.o:

* Directory indexes
* OSLA HTMLification of logs
* ARA

If we pre-generate all of those, we don't need any of the live services on logs.o.o. Joshua's zuul_swift_upload script already generates indexes for us. OSLA can already be used to HTMLify files statically. And ARA has a mode to pre-generate its output as well (which we used previously, until we ran out of inodes). So today, we basically have what we need to pre-generate this data and store it in swift.

Another issue we ran into previously was the transition from filesystem storage to swift. This was because in Zuul v2, we could not dynamically change the log reporting URL. However, in Zuul v3, since the job itself reports the final log URL, we can handle the transition by creating new roles to perform the swift upload and return the swift URL. We can begin by using these roles in a new base job so that we can verify correct operation. Then, when we're ready, we can switch the default base job. All jobs which upload logs to swift will report the new swift URL; the existing logs.o.o URLs will continue to function until they age out.

The Zuul dashboard makes finding the location of logs for jobs (especially post jobs) simpler. So we no longer need logs.o.o to find the storage location (files or swift) for post jobs -- a user can just follow the link from the build history in the dashboard.

Finally, the apache config (and to some degree, OSLA middleware) handles compression. Ultimately, we don't actually care if the files are compressed in storage. That's an implementation detail (which we care about now because we operate the storage). But it's not a user requirement. In fact, what we want is for jobs to produce logs in whatever format they want (plain text, journal, etc). We want to store those. And we want to present them to the user in the original format.
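To make the transition mechanics concrete before moving on to compression, the new base job could look roughly like this; the job, playbook and secret names are placeholders rather than a final implementation, with the post playbook running the new upload role and handing the resulting swift location back via zuul_return (zuul.log_url):

    - job:
        name: base-swift
        parent: null
        description: Variant of the base job that stores logs in swift.
        pre-run: playbooks/base/pre.yaml
        post-run: playbooks/base-swift/post.yaml
        secrets:
          - swift_credential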
Right now we compress many of those logs before we upload them to the log server because, lacking a dedicated upload handler on the log server, there's no other way to cause them to be stored compressed.

If we're relieved of that burden, then the only thing we really care about is transfer efficiency. We should be able to upload files to swift with Content-Encoding: gzip, and, likewise, users should be able to download files with Accept-Encoding: gzip. We should be able to have efficient transfer without having to explicitly compress and rename files. Our first usability win.

The latest version of the zuul_swift_upload script uses the swift tempurl functionality to upload logs. This is because it was designed to run on untrusted nodes. A closer analog to our current Zuul v3 log upload system would be to run the uploader on the executor, giving it a real swift credential. It can then upload logs to swift in the normal manner, rather than via tempurl. It can also create containers as needed -- another consideration from our earlier work. By default, it could avoid creating containers, but we could configure it to create, say, containers for each first-level of our sharding scheme. This could be a general feature of the role that allows for per-site customization.

I think that's the approach we should start with, because it will be the easiest transition from our current scheme. However, in the future, we can move to having the uploads occur from the test nodes themselves (rather than, or in addition to, the executor), by having a two-part system. The first part runs on the executor in a trusted context and creates any containers needed, then generates a tempurl, and uses that to have the worker nodes upload to the container directly. I only mention this to show that we're not backing ourselves permanently into executor-only uploads. But we shouldn't consider this part of the first phase.

We have also discussed using multiple swifts. It may be easiest to start with one, but in a future where we have executor affinity in Zuul, we may want to upload to the nearest swift. In that case, we can modify the role to, rather than be configured with a single swift, support multiple swifts, and use the executor affinity information to determine if there is a swift colocated in the executor's cloud, and if not, use a fallback. This way we can use multiple swifts as they are available, but not require them.

To summarize: static generation combined with a new role to upload to swift using openstacksdk should allow us to migrate to swift fairly quickly. Once there, we can work on a number of enhancements which I will describe in a followup post to zuul-discuss.

-Jim

From corvus at inaugust.com Mon Jul 16 22:29:28 2018
From: corvus at inaugust.com (James E. Blair)
Date: Mon, 16 Jul 2018 15:29:28 -0700
Subject: [OpenStack-Infra] Moving logs into swift (redux)
In-Reply-To: <87h8kyn7ep.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Mon, 16 Jul 2018 15:27:10 -0700")
References: <87h8kyn7ep.fsf@meyer.lemoncheese.net>
Message-ID: <87zhyqlsqf.fsf@meyer.lemoncheese.net>

corvus at inaugust.com (James E. Blair) writes:

> To summarize: static generation combined with a new role to upload to swift using openstacksdk should allow us to migrate to swift fairly quickly. Once there, we can work on a number of enhancements which I will describe in a followup post to zuul-discuss.
The followup message is here:

http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-July/000501.html

-Jim

From cboylan at sapwetik.org Mon Jul 16 22:50:13 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 16 Jul 2018 15:50:13 -0700
Subject: [OpenStack-Infra] Moving logs into swift (redux)
In-Reply-To: <87h8kyn7ep.fsf@meyer.lemoncheese.net>
References: <87h8kyn7ep.fsf@meyer.lemoncheese.net>
Message-ID: <1531781413.660810.1442882552.119BBCEA@webmail.messagingengine.com>

On Mon, Jul 16, 2018, at 3:27 PM, James E. Blair wrote:
> Hi,
>
> As you may know, all of the logs from Zuul builds are currently uploaded to a single static fileserver with about 14TB of storage available in one large filesystem. This was easy to set up, but scales poorly, and we live in constant fear of filesystem corruption necessitating a lengthy outage for repair or loss of data (an event which happens, on average, once or twice a year and takes several days to resolve).
>
> Our most promising approaches to solving this involve moving log storage to swift. We (mostly Joshua) have done considerable work in the past but kept hitting blockers. I think the situation has changed enough that the issues we hit before won't be a problem now. I believe we can use this work as a foundation to, relatively quickly, move our log storage into swift. Once there, there are a number of possibilities to improve the experience around logs and artifacts in Zuul and in general.
>
> This email is going to focus mostly on how OpenStack Infra can move our current log storage and hosting to swift. I will follow it up with an email to the zuul-discuss list about further work that we can do that's more generally applicable to all Zuul users.
>
> This email is the result of a number of previous discussions, especially with Monty, and many of the ideas here are his. It also draws very heavily on Joshua's previous work. Here's the general idea:
>
> Pre-generate any content for which we currently rely on middleware running on logs.openstack.org. Then upload all of that to swift. Return a direct link to swift for serving the content.
>
> In more detail:
>
> In addition to using swift as the storage backend, we would also like to avoid running a server as an intermediary. This is one of the obstacles we hit last time. We started to make os-loganalyze (OSLA) a smart proxy which could serve files from disk and swift. It threatened to become very complicated and tax the patience of OSLA's reviewers. OSLA's primary author and reviewer isn't really around anymore, so I suspect the appetite for major changes to OSLA is even less than it may have been in the past (we have merged 2 changes this year so far).
>
> There are three kinds of automatically generated content on logs.o.o:
>
> * Directory indexes
> * OSLA HTMLification of logs
> * ARA
>
> If we pre-generate all of those, we don't need any of the live services on logs.o.o. Joshua's zuul_swift_upload script already generates indexes for us. OSLA can already be used to HTMLify files statically. And ARA has a mode to pre-generate its output as well (which we used previously, until we ran out of inodes). So today, we basically have what we need to pre-generate this data and store it in swift.

A couple of thoughts about this, and ARA specifically. ARA static generation easily produces tens of thousands of files.
Copying many small files to the log server with rsync was often quite slow (on the order of 10 minutes for some jobs; that is my fuzzy memory though). I am concerned that HTTP to $swift service will have similar problems with many small files. This is something we should test.

Also, while swift doesn't have inode problems the end user needs to worry about, it does apparently have limits on the practical number of objects per container. One of the issues we had in the past, particularly with the swift we had access to, was that each container was not directly accessible by default and you had to configure CDN distribution of each container to be publicly visible. This made creating many containers to shard the objects more complicated than we had hoped. All this to say we may still have to solve the "inode" problem just within the context of swift containers: creating containers, making them visible.

We should do our best to test both of these items and/or follow up with whichever cloud hosts the containers to make sure we aren't missing anything else (possible object creation rate limits for example).

> Another issue we ran into previously was the transition from filesystem storage to swift. This was because in Zuul v2, we could not dynamically change the log reporting URL. However, in Zuul v3, since the job itself reports the final log URL, we can handle the transition by creating new roles to perform the swift upload and return the swift URL. We can begin by using these roles in a new base job so that we can verify correct operation. Then, when we're ready, we can switch the default base job. All jobs which upload logs to swift will report the new swift URL; the existing logs.o.o URLs will continue to function until they age out.
>
> The Zuul dashboard makes finding the location of logs for jobs (especially post jobs) simpler. So we no longer need logs.o.o to find the storage location (files or swift) for post jobs -- a user can just follow the link from the build history in the dashboard.
>
> Finally, the apache config (and to some degree, OSLA middleware) handles compression. Ultimately, we don't actually care if the files are compressed in storage. That's an implementation detail (which we care about now because we operate the storage). But it's not a user requirement. In fact, what we want is for jobs to produce logs in whatever format they want (plain text, journal, etc). We want to store those. And we want to present them to the user in the original format. Right now we compress many of them before we upload them to the log server because, lacking a dedicated upload handler on the log server, there's no other way to cause them to be stored compressed.
>
> If we're relieved of that burden, then the only thing we really care about is transfer efficiency. We should be able to upload files to swift with Content-Encoding: gzip, and, likewise, users should be able to download files with Accept-Encoding: gzip. We should be able to have efficient transfer without having to explicitly compress and rename files. Our first usability win.
>
> The latest version of the zuul_swift_upload script uses the swift tempurl functionality to upload logs. This is because it was designed to run on untrusted nodes. A closer analog to our current Zuul v3 log upload system would be to run the uploader on the executor, giving it a real swift credential. It can then upload logs to swift in the normal manner, rather than via tempurl.
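On the Content-Encoding idea quoted above, the sort of upload we'd be testing looks roughly like this with python-swiftclient (the container name is made up, and whether every provider serves the encoding back correctly is exactly the kind of thing to verify):

    gzip -k job-output.txt
    swift upload logs-4a2 job-output.txt.gz \
        --object-name job-output.txt \
        --header 'Content-Type: text/plain' \
        --header 'Content-Encoding: gzip'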
> It can also create containers as needed -- another consideration from our earlier work. By default, it could avoid creating containers, but we could configure it to create, say, containers for each first-level of our sharding scheme. This could be a general feature of the role that allows for per-site customization.

Just a side note that creating containers doesn't necessarily make them publicly available in all deployments. This was an issue we ran into in the past. Rax containers could only be accessed publicly if distributed through their CDN.

> I think that's the approach we should start with, because it will be the easiest transition from our current scheme. However, in the future, we can move to having the uploads occur from the test nodes themselves (rather than, or in addition to, the executor), by having a two-part system. The first part runs on the executor in a trusted context and creates any containers needed, then generates a tempurl, and uses that to have the worker nodes upload to the container directly. I only mention this to show that we're not backing ourselves permanently into executor-only uploads. But we shouldn't consider this part of the first phase.
>
> We have also discussed using multiple swifts. It may be easiest to start with one, but in a future where we have executor affinity in Zuul, we may want to upload to the nearest swift. In that case, we can modify the role to, rather than be configured with a single swift, support multiple swifts, and use the executor affinity information to determine if there is a swift colocated in the executor's cloud, and if not, use a fallback. This way we can use multiple swifts as they are available, but not require them.
>
> To summarize: static generation combined with a new role to upload to swift using openstacksdk should allow us to migrate to swift fairly quickly. Once there, we can work on a number of enhancements which I will describe in a followup post to zuul-discuss.

This is exciting. I think that zuulv3 puts us in a much better position overall to make use of swift. Job secrets make managing credentials simpler, and the dashboard gives us historical browsing of logs. In return we should be able to care less about rotating logs (swift can automatically expire objects), available disk, available inodes, and the general reliability of the backing storage.

Finally, we will probably need to make changes to the logstash processing of logs to fetch the non-htmlified log contents, since they will be stored separately now. Easy enough, will just need to be done.

> -Jim

From corvus at inaugust.com Mon Jul 16 23:04:59 2018
From: corvus at inaugust.com (James E. Blair)
Date: Mon, 16 Jul 2018 16:04:59 -0700
Subject: [OpenStack-Infra] Moving logs into swift (redux)
In-Reply-To: <1531781413.660810.1442882552.119BBCEA@webmail.messagingengine.com> (Clark Boylan's message of "Mon, 16 Jul 2018 15:50:13 -0700")
References: <87h8kyn7ep.fsf@meyer.lemoncheese.net> <1531781413.660810.1442882552.119BBCEA@webmail.messagingengine.com>
Message-ID: <87pnzmkcis.fsf@meyer.lemoncheese.net>

Clark Boylan writes:

> A couple of thoughts about this, and ARA specifically. ARA static generation easily produces tens of thousands of files. Copying many small files to the log server with rsync was often quite slow (on the order of 10 minutes for some jobs; that is my fuzzy memory though). I am concerned that HTTP to $swift service will have similar problems with many small files. This is something we should test.
Yes. If we want to get out of the business of running a log proxy (I *very* much do), static generation is the only currently supported option with ARA. Despite the downsides, we were able to use ARA in static generation mode before and it worked. I'm hopeful that by uploading to swift in parallel, we can mitigate the upload cost.

> Also, while swift doesn't have inode problems the end user needs to worry about, it does apparently have limits on the practical number of objects per container. One of the issues we had in the past, particularly with the swift we had access to, was that each container was not directly accessible by default and you had to configure CDN distribution of each container to be publicly visible. This made creating many containers to shard the objects more complicated than we had hoped. All this to say we may still have to solve the "inode" problem just within the context of swift containers: creating containers, making them visible.
>
> We should do our best to test both of these items and/or follow up with whichever cloud hosts the containers to make sure we aren't missing anything else (possible object creation rate limits for example).

Yes, the object limit concern is why I think our swift role should create containers as necessary and shard storage.

The CDN behavior you describe, where public access to swift is not possible except by CDN and where the CDN uses unpredictable hostnames per container which must be determined via a non-standard API call, is not swift's standard behavior; it is a cloud-specific variant of swift. I believe we can write upload roles for the swift-variant you describe. We can also write upload roles for standard swift. I think it will be difficult to try to use both at the same time, so if we're serious about distributing logs to cloud-local swifts in the future, we may want to start by focusing on standard swift (and accept that clouds that either don't run swift, or don't run standard swift, will export their logs to another cloud. That's not so bad. Most of our clouds do something similar today).

-Jim

From doug at doughellmann.com Tue Jul 17 00:08:36 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 16 Jul 2018 20:08:36 -0400
Subject: [OpenStack-Infra] Moving logs into swift (redux)
In-Reply-To: <87h8kyn7ep.fsf@meyer.lemoncheese.net>
References: <87h8kyn7ep.fsf@meyer.lemoncheese.net>
Message-ID: <1531786013-sup-5976@lrrr.local>

Excerpts from corvus's message of 2018-07-16 15:27:10 -0700:

> The Zuul dashboard makes finding the location of logs for jobs (especially post jobs) simpler. So we no longer need logs.o.o to find the storage location (files or swift) for post jobs -- a user can just follow the link from the build history in the dashboard.

Is that information available through an API? I could update git-os-job to use the API to get the URL (it knows how to construct the URL from the commit ID today).

Doug

From joshua.hesketh at gmail.com Tue Jul 17 03:11:19 2018
From: joshua.hesketh at gmail.com (Joshua Hesketh)
Date: Tue, 17 Jul 2018 13:11:19 +1000
Subject: [OpenStack-Infra] Moving logs into swift (redux)
In-Reply-To: <87h8kyn7ep.fsf@meyer.lemoncheese.net>
References: <87h8kyn7ep.fsf@meyer.lemoncheese.net>
Message-ID:

Hey all,

I like this plan as a kind of next step for OpenStack-Infra. I have some thoughts on how zuul might better improve its logging story but will post those on the other thread.

I do, however, share both of Clark's concerns.
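One way to put rough numbers on the many-small-files concern would be something like the following (a sketch only; the container name is made up, and --object-threads is python-swiftclient's knob for parallel uploads):

    # Generate a static ARA report for a representative job, then time the upload:
    time swift upload --object-threads 20 logs-test ./ara-report/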
At the moment zuul_swift_upload makes a POST request for each individual file. I do believe we can group them up to a limit, but that limit is still small and complicated by things such as the total size of the data (which is probably why the script does them individually, but I don't recall). This is just to say that we need to test how it will go uploading a lot of files and the time that it may take.

I know the CDN was complicated with the cloud provider we were using at the time. However, I'm unsure what the CDN options are these days. Will there be an API we can use to turn the CDN on per container and get the public URL, for example?

If the above two items turn out sub-optimal, I'm personally not opposed to continuing to run our own middleware. We don't necessarily need that to be in os_loganalyze, as the returned URL could point to a new middleware. The middleware can then handle the ARA rendering and possibly even work as our own CDN, choosing the correct container as needed (if we can't get CDN details otherwise).

Cheers,
Josh

On Tue, Jul 17, 2018 at 8:27 AM, James E. Blair wrote:

> Hi,
>
> As you may know, all of the logs from Zuul builds are currently uploaded to a single static fileserver with about 14TB of storage available in one large filesystem. This was easy to set up, but scales poorly, and we live in constant fear of filesystem corruption necessitating a lengthy outage for repair or loss of data (an event which happens, on average, once or twice a year and takes several days to resolve).
>
> Our most promising approaches to solving this involve moving log storage to swift. We (mostly Joshua) have done considerable work in the past but kept hitting blockers. I think the situation has changed enough that the issues we hit before won't be a problem now. I believe we can use this work as a foundation to, relatively quickly, move our log storage into swift. Once there, there are a number of possibilities to improve the experience around logs and artifacts in Zuul and in general.
>
> This email is going to focus mostly on how OpenStack Infra can move our current log storage and hosting to swift. I will follow it up with an email to the zuul-discuss list about further work that we can do that's more generally applicable to all Zuul users.
>
> This email is the result of a number of previous discussions, especially with Monty, and many of the ideas here are his. It also draws very heavily on Joshua's previous work. Here's the general idea:
>
> Pre-generate any content for which we currently rely on middleware running on logs.openstack.org. Then upload all of that to swift. Return a direct link to swift for serving the content.
>
> In more detail:
>
> In addition to using swift as the storage backend, we would also like to avoid running a server as an intermediary. This is one of the obstacles we hit last time. We started to make os-loganalyze (OSLA) a smart proxy which could serve files from disk and swift. It threatened to become very complicated and tax the patience of OSLA's reviewers. OSLA's primary author and reviewer isn't really around anymore, so I suspect the appetite for major changes to OSLA is even less than it may have been in the past (we have merged 2 changes this year so far).
>
> There are three kinds of automatically generated content on logs.o.o:
>
> * Directory indexes
> * OSLA HTMLification of logs
> * ARA
>
> If we pre-generate all of those, we don't need any of the live services on logs.o.o.
> Joshua's zuul_swift_upload script already generates indexes for us. OSLA can already be used to HTMLify files statically. And ARA has a mode to pre-generate its output as well (which we used previously, until we ran out of inodes). So today, we basically have what we need to pre-generate this data and store it in swift.
>
> Another issue we ran into previously was the transition from filesystem storage to swift. This was because in Zuul v2, we could not dynamically change the log reporting URL. However, in Zuul v3, since the job itself reports the final log URL, we can handle the transition by creating new roles to perform the swift upload and return the swift URL. We can begin by using these roles in a new base job so that we can verify correct operation. Then, when we're ready, we can switch the default base job. All jobs which upload logs to swift will report the new swift URL; the existing logs.o.o URLs will continue to function until they age out.
>
> The Zuul dashboard makes finding the location of logs for jobs (especially post jobs) simpler. So we no longer need logs.o.o to find the storage location (files or swift) for post jobs -- a user can just follow the link from the build history in the dashboard.
>
> Finally, the apache config (and to some degree, OSLA middleware) handles compression. Ultimately, we don't actually care if the files are compressed in storage. That's an implementation detail (which we care about now because we operate the storage). But it's not a user requirement. In fact, what we want is for jobs to produce logs in whatever format they want (plain text, journal, etc). We want to store those. And we want to present them to the user in the original format. Right now we compress many of them before we upload them to the log server because, lacking a dedicated upload handler on the log server, there's no other way to cause them to be stored compressed.
>
> If we're relieved of that burden, then the only thing we really care about is transfer efficiency. We should be able to upload files to swift with Content-Encoding: gzip, and, likewise, users should be able to download files with Accept-Encoding: gzip. We should be able to have efficient transfer without having to explicitly compress and rename files. Our first usability win.
>
> The latest version of the zuul_swift_upload script uses the swift tempurl functionality to upload logs. This is because it was designed to run on untrusted nodes. A closer analog to our current Zuul v3 log upload system would be to run the uploader on the executor, giving it a real swift credential. It can then upload logs to swift in the normal manner, rather than via tempurl. It can also create containers as needed -- another consideration from our earlier work. By default, it could avoid creating containers, but we could configure it to create, say, containers for each first-level of our sharding scheme. This could be a general feature of the role that allows for per-site customization.
>
> I think that's the approach we should start with, because it will be the easiest transition from our current scheme. However, in the future, we can move to having the uploads occur from the test nodes themselves (rather than, or in addition to, the executor), by having a two-part system.
> The first part runs on the executor in a trusted context and creates any containers needed, then generates a tempurl, and uses that to have the worker nodes upload to the container directly. I only mention this to show that we're not backing ourselves permanently into executor-only uploads. But we shouldn't consider this part of the first phase.
>
> We have also discussed using multiple swifts. It may be easiest to start with one, but in a future where we have executor affinity in Zuul, we may want to upload to the nearest swift. In that case, we can modify the role to, rather than be configured with a single swift, support multiple swifts, and use the executor affinity information to determine if there is a swift colocated in the executor's cloud, and if not, use a fallback. This way we can use multiple swifts as they are available, but not require them.
>
> To summarize: static generation combined with a new role to upload to swift using openstacksdk should allow us to migrate to swift fairly quickly. Once there, we can work on a number of enhancements which I will describe in a followup post to zuul-discuss.
>
> -Jim
>
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From corvus at inaugust.com Tue Jul 17 15:30:52 2018
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 17 Jul 2018 08:30:52 -0700
Subject: [OpenStack-Infra] Moving logs into swift (redux)
In-Reply-To: <1531786013-sup-5976@lrrr.local> (Doug Hellmann's message of "Mon, 16 Jul 2018 20:08:36 -0400")
References: <87h8kyn7ep.fsf@meyer.lemoncheese.net> <1531786013-sup-5976@lrrr.local>
Message-ID: <87y3e9j2vn.fsf@meyer.lemoncheese.net>

Doug Hellmann writes:

> Excerpts from corvus's message of 2018-07-16 15:27:10 -0700:
>
>> The Zuul dashboard makes finding the location of logs for jobs (especially post jobs) simpler. So we no longer need logs.o.o to find the storage location (files or swift) for post jobs -- a user can just follow the link from the build history in the dashboard.
>
> Is that information available through an API? I could update git-os-job to use the API to get the URL (it knows how to construct the URL from the commit ID today).

Yes:

http://zuul.openstack.org/api/builds?project=openstack/python-monascaclient&newrev=f5b8831fbaf69d5c93776b166bd4915cf452ae27

But Zuul's API is in flux, undocumented, and comes with no stability promises yet. We want to change that, but we're still getting the basics down. I hesitate to suggest that folks write to the API too much at this point.

Having said that, this is a pretty lightweight use, and I'm sure we'll always have this functionality, even if we end up changing the details, so I think we should do it. If we have to change git-os-job again before everything is final, I'm sure it won't be much trouble.

-Jim

From corvus at inaugust.com Tue Jul 17 15:41:38 2018
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 17 Jul 2018 08:41:38 -0700
Subject: [OpenStack-Infra] Moving logs into swift (redux)
In-Reply-To: (Joshua Hesketh's message of "Tue, 17 Jul 2018 13:11:19 +1000")
References: <87h8kyn7ep.fsf@meyer.lemoncheese.net>
Message-ID: <87tvoxj2dp.fsf@meyer.lemoncheese.net>

Joshua Hesketh writes:

> I know the CDN was complicated with the cloud provider we were using at the time.

-Jim

From corvus at inaugust.com  Tue Jul 17 15:41:38 2018
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 17 Jul 2018 08:41:38 -0700
Subject: [OpenStack-Infra] Moving logs into swift (redux)
In-Reply-To: (Joshua Hesketh's message of "Tue, 17 Jul 2018 13:11:19 +1000")
References: <87h8kyn7ep.fsf@meyer.lemoncheese.net>
Message-ID: <87tvoxj2dp.fsf@meyer.lemoncheese.net>

Joshua Hesketh writes:

> I know the CDN was complicated with the cloud provider we were using at the time. However, I'm unsure what the CDN options are these days. Will there be an API we can use to turn the CDN on per container and get the public URL, for example?

A typical swift has the ability to allow public access to swift itself, so this shouldn't be an issue. We should survey our available swifts and make sure of this. I'm not currently advocating that we use non-standard swifts (i.e., ones which require non-standard API calls to retrieve CDN URLs, etc.).

> If the above two items turn out sub-optimal, I'm personally not opposed to continuing to run our own middleware. We don't necessarily need that to be in os_loganalyze, as the returned URL could be a new middleware. The middleware can then handle the ARA and possibly even work as our own CDN, choosing the correct container as needed (if we can't get CDN details otherwise).

I'd love to get out of the middleware business entirely if we can. It causes large, disruptive outages when it breaks.
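
For reference, the "public access" above is plain swift rather than a CDN feature: a container read ACL of ".r:*" makes objects world-readable. A rough sketch, with the endpoint, token, and container names made up:

    import requests

    token = "<keystone token>"  # placeholder credential

    # ".r:*" grants anonymous reads on every object in the container;
    # ".rlistings" additionally allows anonymous container listings.
    requests.post(
        "https://swift.example.com/v1/AUTH_acct/logs_01",
        headers={"X-Auth-Token": token,
                 "X-Container-Read": ".r:*,.rlistings"})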

-Jim

From fungi at yuggoth.org  Mon Jul 23 13:15:26 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 23 Jul 2018 13:15:26 +0000
Subject: [OpenStack-Infra] [Openstack] ask.openstack.org down?!
In-Reply-To: 
References: 
Message-ID: <20180723131526.omqi6vfrvhwfwfyf@yuggoth.org>

On 2018-07-23 11:28:50 +0200 (+0200), Erdősi Péter wrote:
> I got connection refused when trying to open ask.openstack.org. I've tried it from HBONE (AS 1955) and UPC (AS 6830) networks with no luck.

It wasn't a routing issue. The server's Web service was actually not running. Something seems to have occurred (perhaps hitting a race condition bug) around the time of the server's scheduled jobs (log rotation, et cetera) which raised a Django WSGI exception and killed the parent Apache process. Starting Apache again has restored the site to working order. Thanks for reporting!

> Can someone investigate or escalate this to the right place?

A more appropriate list would have been openstack-infra at lists.openstack.org (Cc'd on my reply).
-- 
Jeremy Stanley

From fazy at niif.hu  Mon Jul 23 13:28:56 2018
From: fazy at niif.hu (Erdősi Péter)
Date: Mon, 23 Jul 2018 15:28:56 +0200
Subject: [OpenStack-Infra] [Openstack] ask.openstack.org down?!
In-Reply-To: <20180723131526.omqi6vfrvhwfwfyf@yuggoth.org>
References: <20180723131526.omqi6vfrvhwfwfyf@yuggoth.org>
Message-ID: <8dfc06f8-a9fb-6f95-2b39-081c5b8ec1c4@niif.hu>

On 2018. 07. 23. 15:15, Jeremy Stanley wrote:
> On 2018-07-23 11:28:50 +0200 (+0200), Erdősi Péter wrote:
>> I got connection refused when trying to open ask.openstack.org. I've tried it from HBONE (AS 1955) and UPC (AS 6830) networks with no luck.
> It wasn't a routing issue. The server's Web service was actually not running. Something seems to have occurred (perhaps hitting a race condition bug) around the time of the server's scheduled jobs (log rotation, et cetera) which raised a Django WSGI exception and killed the parent Apache process. Starting Apache again has restored the site to working order. Thanks for reporting!

We thank you for the fast repair ;)

Regards:
 Peter ERDOSI

From pabelanger at redhat.com  Mon Jul 23 15:22:13 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Mon, 23 Jul 2018 11:22:13 -0400
Subject: [OpenStack-Infra] Reworking zuul base jobs
Message-ID: <20180723152213.GA5814@localhost.localdomain>

Greetings,

A few weeks ago, I sent an email to the zuul-discuss[1] ML about the idea of splitting a base job in project-config into trusted / untrusted parts. Since then we've actually implemented the idea in the rdoproject.org zuul, and it seems to be working very well. Basically, I'd like to do the same here with our base job, but first wanted to give a heads up.

Here is the basic idea:

project-config (trusted):

  - job:
      name: base-minimal
      parent: null
      description: top-level job

  - job:
      name: base-minimal-test
      parent: null
      description: top-level job for testing base-minimal

openstack-zuul-jobs (untrusted):

  - job:
      name: base
      parent: base-minimal

This then allows us to start moving tasks / roles like configure-mirrors from trusted into untrusted, since they don't really need a trusted context on the executor.

In rdoproject, our base-minimal job is much smaller than openstack-infra's today; it has really just become responsible for handling secrets (post-run playbooks) and zuul_stream (pre). Everything else has been moved into untrusted.

Here, we likely need a little more discussion around what we move from trusted into untrusted, but once we've done the dance to place base into openstack-zuul-jobs and parent it to base-minimal in project-config, we can start testing. We'd still need the base-minimal / base-minimal-test dance for the trusted context, but the set of things we need to test there should be much smaller.

As a working example, I believe the recent changes to pypi mirrors would have been much easier to test in this setup.

- Paul

[1] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-July/000508.html

From alifshit at redhat.com  Tue Jul 24 16:25:25 2018
From: alifshit at redhat.com (Artom Lifshitz)
Date: Tue, 24 Jul 2018 12:25:25 -0400
Subject: [OpenStack-Infra] [infra][nova] Running NFV tests in CI
In-Reply-To: 
References: 
Message-ID: 

> Hey all,
>
> tl;dr: Humbly requesting a handful of nodes to run NFV tests in CI.
>
> Intel has their NFV tests tempest plugin [1] and manages a third-party CI for Nova. Two of the cores on that project (Stephen Finucane and Sean Mooney) have now moved to Red Hat, but the point still stands that there's a need and a use case for testing things like NUMA topologies, CPU pinning and hugepages.
>
> At Red Hat, we also have a similar tempest plugin project [2] that we use for downstream whitebox testing. The scope is a bit bigger than just NFV, but the main use case is still testing NFV code in an automated way.
>
> Given that there's a clear need for this sort of whitebox testing, I would like to humbly request a handful of nodes (in the 3 to 5 range) from infra to run an "official" Nova NFV CI. The code doing the testing would initially be the current Intel plugin, but we could have a separate discussion about keeping "Intel" in the name or forking and/or renaming it to something more vendor-neutral.
>
> I won't be at PTG (conflict with personal travel), so I'm kindly asking Stephen and Sean to represent this idea in Denver.
>
> Cheers!
>
> [1] https://github.com/openstack/intel-nfv-ci-tests
> [2] https://review.rdoproject.org/r/#/admin/projects/openstack/whitebox-tempest-plugin

Forgot to actually include openstack-infra, apologies.