From graham.whaley at intel.com Mon Dec 3 17:12:44 2018 From: graham.whaley at intel.com (Whaley, Graham) Date: Mon, 3 Dec 2018 17:12:44 +0000 Subject: [OpenStack-Infra] Adding index and views/dashboards for Kata to ELK stack In-Reply-To: <1543360534.3763086.1591115768.53DF7B8A@webmail.messagingengine.com> References: <01D2D3EEA433C0419A12152B3A36557E204E9663@IRSMSX101.ger.corp.intel.com> <01D2D3EEA433C0419A12152B3A36557E204F18E7@IRSMSX101.ger.corp.intel.com> <1539879007.3697557.1546669152.55C51D64@webmail.messagingengine.com> <01D2D3EEA433C0419A12152B3A36557E204F3B6D@IRSMSX101.ger.corp.intel.com> <1540314157.1272773.1551970776.0C80340F@webmail.messagingengine.com> <01D2D3EEA433C0419A12152B3A36557E2051318D@IRSMSX101.ger.corp.intel.com> <1543360534.3763086.1591115768.53DF7B8A@webmail.messagingengine.com> Message-ID: <01D2D3EEA433C0419A12152B3A36557E20516DC4@IRSMSX101.ger.corp.intel.com> Hi Clark, > There is more to it than that. This service is part of the CI system we operate. > The way you consume it is through the use of Zuul jobs. If you want to inject > data into our Logstash/Elasticsearch system you do that by configuring your jobs > in Zuul to do so. We are not in the business of operating one off solutions to > problems. We support a large variety of users and projects and using generic > flexible systems like this one is how we make that viable. > > Additionally these systems are community managed so that we can work > together to solve these problems in a way that gives the infra team appropriate > administrative access while still allowing you and others to get specific work > done. Rather than avoid this tooling can we please attempt to use it when it has > preexisting solutions to problems like this? We will happily do our best to make > re-consumption of existing systems a success, but building one off solutions to > solve problems that are already solved does not scale. > Sure, OK, understood... [snip] > > I wasn't directly involved with the decision making at the time but back at the > beginning of the year my understanding was that Jenkins was chosen over Zuul > for expediency. This wasn't a bad choice as the Github support in Zuul was still > quite new (though having more users would likely have pushed it along more > quickly). It probably would be worthwhile to decide separately if Jenkins is the > permanent solution to the Kata CI tooling problem, or if we should continue to > push for Zuul. If we want to push for Zuul then I think we need to stop choosing > Jenkins as a default and start implementing new stuff in Zuul then move the > existing CI as Kata is able. > > As for who has Zuul access, the Infra team has administrative access to the > service. Zuul configuration for the existing Kata jobs is done through a repo > managed by the infra team, but anyone can push and propose changes to this > repo. The reason for this is Zuul wants to gate its config updates to prevent new > configs from being merged without being tested. Bypassing this testing does > allow you to break your Zuul configuration. Currently we aren't gating Kata with > Zuul so the configs live in the Infra repo. If we started gating Kata changes with > Zuul we could move the configs into Kata repos and Kata could self manage > them. > > Looking ahead Zuul is multitenant aware, and we could deploy a Kata tenant. > This would give Kata a bit more freedom to configure its Zuul pipeline behavior > as desired, though gating is still strongly recommended as that will prevent > broken configs from merging. 
I spoke with some of the other Kata folks - we agreed I'd try to move the
Kata metrics CI into Zuul utilizing the packet.net hardware, and let's see
how that pans out. I think that will help both sides understand the current
state of kata/zuul so we can move things forward there.

Wrt the packet.net slaves, I believe we can do that using some of the
packet.net/zuul integration work done by JohnStudarus - John and I had some
chats at the Summit in Berlin.
https://opensource.com/article/18/10/building-zuul-cicd-cloud

I'll do some Zuul reading and work out how I need to PR the additional
ansible/yaml items to the infra repos to add the metrics build/runs (I see
the repos and code, and a metrics run is very, very similar to a normal Kata
CI run - and to begin with we can do those runs in the VM builders to test
out the flows before moving to the packet.net hardware).

[move this down to the end...]
> > No, we would inject the data through the existing test node -> Zuul ->
> > Logstash -> Elasticsearch path.

This might be one bit we have to work out. The metrics runs generate raw
JSON results. The best method I found previously for landing that JSON
directly in logstash, and thus elasticsearch, was using the socket filebeat.
It is not yet clear in my head how that ties in with Zuul - will it fit in
with the infra?

Graham
---------------------------------------------------------------------
Intel Corporation (UK) Limited
Registered No. 1134945 (England)
Registered Office: Pipers Way, Swindon SN3 1RJ
VAT No: 860 2173 47

This e-mail and any attachments may contain confidential material for the
sole use of the intended recipient(s). Any review or distribution by others
is strictly prohibited. If you are not the intended recipient, please
contact the sender and delete all copies.
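For illustration, a minimal Zuul job definition for the metrics run Graham
describes above might look something like the sketch below. The job name,
playbook path and node label are hypothetical placeholders, not actual infra
repo content:

    - job:
        name: kata-metrics-run
        parent: base
        description: Run the Kata metrics suite and collect its JSON results.
        run: playbooks/kata-metrics/run.yaml
        nodeset:
          nodes:
            - name: metrics-node
              label: ubuntu-xenial

As suggested above, a generic VM label can stand in for testing the flow
until a dedicated nodepool label exists for the packet.net hardware.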
From cboylan at sapwetik.org Tue Dec 4 20:27:28 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 04 Dec 2018 12:27:28 -0800
Subject: [OpenStack-Infra] Adding index and views/dashboards for Kata to ELK stack
In-Reply-To: <01D2D3EEA433C0419A12152B3A36557E20516DC4@IRSMSX101.ger.corp.intel.com>
References: <01D2D3EEA433C0419A12152B3A36557E204E9663@IRSMSX101.ger.corp.intel.com> <01D2D3EEA433C0419A12152B3A36557E204F18E7@IRSMSX101.ger.corp.intel.com> <1539879007.3697557.1546669152.55C51D64@webmail.messagingengine.com> <01D2D3EEA433C0419A12152B3A36557E204F3B6D@IRSMSX101.ger.corp.intel.com> <1540314157.1272773.1551970776.0C80340F@webmail.messagingengine.com> <01D2D3EEA433C0419A12152B3A36557E2051318D@IRSMSX101.ger.corp.intel.com> <1543360534.3763086.1591115768.53DF7B8A@webmail.messagingengine.com> <01D2D3EEA433C0419A12152B3A36557E20516DC4@IRSMSX101.ger.corp.intel.com>
Message-ID: <1543955248.736878.1598871176.311395C2@webmail.messagingengine.com>

On Mon, Dec 3, 2018, at 9:12 AM, Whaley, Graham wrote:
> snip
>
> I spoke with some of the other Kata folks - we agreed I'd try to move
> the Kata metrics CI into Zuul utilizing the packet.net hardware, and
> let's see how that pans out. I think that will help both sides
> understand the current state of kata/zuul so we can move things forward
> there.
>
> Wrt the packet.net slaves, I believe we can do that using some of the
> packet.net/zuul integration work done by JohnStudarus - John and I had
> some chats at the Summit in Berlin.
> https://opensource.com/article/18/10/building-zuul-cicd-cloud
>
> I'll do some Zuul reading and work out how I need to PR the additional
> ansible/yaml items to the infra repos to add the metrics build/runs (I
> see the repos and code, and a metrics run is very, very similar to a
> normal Kata CI run - and to begin with we can do those runs in the VM
> builders to test out the flows before moving to the packet.net
> hardware).

I think running the jobs on VMs to get the workflow going first is a great
idea.

>
> [move this down to the end...]
> > > No, we would inject the data through the existing test node -> Zuul ->
> > > Logstash -> Elasticsearch path.
>
> This might be one bit we have to work out. The metrics runs generate raw
> JSON results. The best method I found previously for landing that JSON
> directly in logstash, and thus elasticsearch, was using the socket
> filebeat. It is not yet clear in my head how that ties in with Zuul -
> will it fit in with the infra?

The current setup has the Zuul job list out the files we want logstash to
process into elasticsearch, then submit gearman jobs to a tool sitting in
front of logstash which feeds it. We did this because way back when, the
"officially" documented way to feed logstash was via Redis, and we found
that we needed far more memory for that model than this one. The other
upside to the way we've set this up is that the Zuul job submits requests
for processing and that processing happens asynchronously, so we can report
results back to Gerrit (or Github, etc) without waiting for elasticsearch
things to happen.

For your use case the way I sort of envision the json data getting fed into
logstash is via this same mechanism. Your job would log the json data, Zuul
would submit a gearman job request to have that processed, then this data
would be fetched and fed into logstash. Since this is metric and not log
event data we may need a slightly different gearman job that knows how to
feed logstash the json data specifically rather than the log data. Then on
the logstash side a tcp input would ingest that data and send it down a
different filter path from the log inputs.

I'm happy to help with this as I've got far too much insider info on how
this all works. If you'd like to look at the internals, the Zuul job side
happens here [0] and this small script runs as a service in front of
logstash [1] to feed the logstashes.

If we can get the outline of a job running on a VM that stashes the json
data in its logs, that will give us a good start for figuring out how to
plug that into this system.

[0] https://git.openstack.org/cgit/openstack-infra/project-config/tree/roles/submit-log-processor-jobs/library/submit_log_processor_jobs.py
[1] https://git.openstack.org/cgit/openstack-infra/puppet-log_processor/tree/files/log-gearman-worker.py

Clark
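To make the logstash side of that concrete, a rough sketch of the kind of
input/filter pair Clark describes is below. It is illustrative only - the
port number, tag and field names are hypothetical, not the actual infra
configuration:

    # logstash.conf (sketch)
    input {
      tcp {
        port  => 9999
        codec => json_lines        # one JSON metrics document per line
        tags  => ["kata-metrics"]
      }
    }
    filter {
      # Keep metrics documents out of the log-event filter chain.
      if "kata-metrics" in [tags] {
        mutate { add_field => { "pipeline" => "metrics" } }
      }
    }
    output {
      elasticsearch { hosts => ["localhost:9200"] }
    }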
From francois.magimel at alumni.enseeiht.fr Wed Dec 5 18:40:16 2018
From: francois.magimel at alumni.enseeiht.fr (François Magimel)
Date: Wed, 5 Dec 2018 19:40:16 +0100
Subject: [OpenStack-Infra] Hosting OpenStack Summit (among other) videos
Message-ID: 

Hello,

My name is François and I would like to suggest a new service for the
OpenStack Foundation: video hosting. Today, videos are hosted on YouTube.
But as OpenStack promotes free and open source software, I would like to
help install an alternative for the Foundation: PeerTube [1], a
decentralized video hosting network based on libre software :).

For more information, PeerTube is promoted by Framasoft [2] (a French
non-profit organisation) and is under the AGPLv3 licence. The first version
is out and it is already used by some organisations :). You can find some
examples [3].

So, some questions have arisen on IRC #openstack-infra and in my mind:
- what is the licence of the summit videos?
- does the Foundation have some resources to host videos?

What do you think of that idea?
(Of course, I am ready to help with or do the installation, and to get the
metadata of all the videos and put them into the PeerTube instance :)).

[1] https://joinpeertube.org/en/
[2] https://framasoft.org/en/
[3] https://framatube.org/en-US
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From fungi at yuggoth.org Wed Dec 5 19:02:13 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 5 Dec 2018 19:02:13 +0000
Subject: [OpenStack-Infra] Hosting OpenStack Summit (among other) videos
In-Reply-To: 
References: 
Message-ID: <20181205190212.k2yh42o4b6g2q3d4@yuggoth.org>

On 2018-12-05 19:40:16 +0100 (+0100), François Magimel wrote:
> My name is François and I would like to suggest a new service for
> the OpenStack Foundation: video hosting. Today, videos are hosted
> on YouTube. But as OpenStack promotes free and open source
> software, I would like to help install an alternative for the
> Foundation: PeerTube [1], a decentralized video hosting network
> based on libre software :).
[...]

Thanks! I find the idea compelling of course because it's free/libre open
source software (unlike YouTube), but also because it turns out many of our
contributors in mainland China are blocked from accessing YouTube and need
us to host these videos somewhere else anyway. I'm curious, since PeerTube
basically relies on an in-browser bittorrent client, whether the bittorrent
protocol works through the government-imposed Internet filters in China and
would provide a good solution to that problem.

> - what is the licence of the summit videos?

Allison Price on the OSF staff confirmed for me that all the official
OpenStack Summit session recordings are supposed to be distributed under
CC-BY (unfortunately they're inconsistently labeled in YouTube descriptions,
where some indicate a License of "Creative Commons Attribution license
(reuse allowed)" and others don't currently mention any license). I think
this means we should have no legal problem at least serving copies of them.

> - does the Foundation have some resources to host videos?
[...]

It wouldn't be the OpenStack Foundation, but rather the community project
infrastructure, which would need to obtain those resources. (If you're
unfamiliar with the OpenStack Infrastructure project, which is in the
process of switching to the name OpenDev,
https://docs.openstack.org/infra/system-config/project.html provides an
overview of who we are and how we collaborate.) We have a fair amount of
"cloud" resources provided to us by many generous donor organizations, so we
ought to be able to come up with sufficient space to house these files and
cover the bandwidth of serving them if we decide that's something we want to
do.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From fungi at yuggoth.org Wed Dec 5 19:06:02 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 5 Dec 2018 19:06:02 +0000
Subject: [OpenStack-Infra] Hosting OpenStack Summit (among other) videos
In-Reply-To: <20181205190212.k2yh42o4b6g2q3d4@yuggoth.org>
References: <20181205190212.k2yh42o4b6g2q3d4@yuggoth.org>
Message-ID: <20181205190601.sqjnnzudzgftwdol@yuggoth.org>

On 2018-12-05 19:02:13 +0000 (+0000), Jeremy Stanley wrote:
> On 2018-12-05 19:40:16 +0100 (+0100), François Magimel wrote:
[...]
> > - what is the licence of the summit videos?
>
> Allison Price on the OSF staff confirmed for me that all the
> official OpenStack Summit session recordings are supposed to be
> distributed under CC-BY (unfortunately they're inconsistently
> labeled in YouTube descriptions, where some indicate a License of
> "Creative Commons Attribution license (reuse allowed)" and others
> don't currently mention any license).
[...]

And Allison has fixed them now, so hopefully the licenses should all be
consistently showing up. Thanks, Allison!
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From thierry at openstack.org Fri Dec 7 11:02:00 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 7 Dec 2018 12:02:00 +0100
Subject: [OpenStack-Infra] [Release-job-failures] Release of openstack-infra/jenkins-job-builder failed
In-Reply-To: 
References: 
Message-ID: <6ef85559-4ea7-64e0-b633-99236b752e82@openstack.org>

zuul at openstack.org wrote:
> Build failed.
>
> - trigger-readthedocs-webhook http://logs.openstack.org/22/22151762d1147da9bbbe9353fe52c6995ab8b658/release/trigger-readthedocs-webhook/48b0996/ : FAILURE in 1m 32s
> - release-openstack-python http://logs.openstack.org/22/22151762d1147da9bbbe9353fe52c6995ab8b658/release/release-openstack-python/6d983bb/ : SUCCESS in 3m 51s
> - announce-release http://logs.openstack.org/22/22151762d1147da9bbbe9353fe52c6995ab8b658/release/announce-release/a9ca9ee/ : SUCCESS in 3m 50s
> - propose-update-constraints http://logs.openstack.org/22/22151762d1147da9bbbe9353fe52c6995ab8b658/release/propose-update-constraints/453d5a1/ : SUCCESS in 3m 21s

Looks like the readthedocs integration for JJB is misconfigured, causing the
trigger-readthedocs-webhook to fail?
-- 
Thierry Carrez (ttx)

From mnaser at vexxhost.com Fri Dec 7 19:06:20 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Fri, 7 Dec 2018 14:06:20 -0500
Subject: [OpenStack-Infra] Access to CI
Message-ID: <99C52875-A3BB-4A74-8719-FCD6AAE91608@vexxhost.com>

Hi everyone,

We're in the process of moving a lot of our internally used Ansible roles to
ones which are public. As a result, we'd like to also add testing coverage
for them; however, there are two issues that we're seeing.

We'd love to take advantage of the existing infrastructure for CI rather
than use some other third party CI service for those roles. Also, the Gerrit
workflow is great and flows perfectly with everything that we do right now.
However, having said that...

1) There seem to be some namespace issues which could block us (for example,
openstack/ansible-role-container-registry seems to be fairly TripleO
opinionated, even running TripleO jobs). If we wanted to write a similar
role, we'd probably have to pick another name...
or $some_other_solution.

2) If we don't host the code in Gerrit and just use GitHub for now (until we
can get namespaced projects in Gerrit with OpenDev), then are we allowed to
use the current Zuul deployment and resources to run these jobs? Is there
any sort of infrastructure in place to get a 'yes' or 'no'?

I know that the OpenDev effort is moving forward but it might be a little
while before there's something concrete, and it'd be nice to be able to use
some infrastructure in the meantime, without having to rely on other
external services.

Thanks!
Mohammed

From cboylan at sapwetik.org Fri Dec 7 19:21:00 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Fri, 07 Dec 2018 11:21:00 -0800
Subject: [OpenStack-Infra] Access to CI
In-Reply-To: <99C52875-A3BB-4A74-8719-FCD6AAE91608@vexxhost.com>
References: <99C52875-A3BB-4A74-8719-FCD6AAE91608@vexxhost.com>
Message-ID: <1544210460.1505907.1602397800.6EF955E9@webmail.messagingengine.com>

On Fri, Dec 7, 2018, at 11:06 AM, Mohammed Naser wrote:
> Hi everyone,
>
> We're in the process of moving a lot of our internally used Ansible
> roles to ones which are public. As a result, we'd like to also add
> testing coverage for them; however, there are two issues that we're
> seeing.
>
> We'd love to take advantage of the existing infrastructure for CI rather
> than use some other third party CI service for those roles. Also, the
> Gerrit workflow is great and flows perfectly with everything that we do
> right now. However, having said that...
>
> 1) There seem to be some namespace issues which could block us (for
> example, openstack/ansible-role-container-registry seems to be fairly
> TripleO opinionated, even running TripleO jobs). If we wanted to write
> a similar role, we'd probably have to pick another name... or
> $some_other_solution

Long term it seems that the setup described in this spec,
https://review.openstack.org/#/c/623033/1/specs/opendev-gerrit.rst, would
cover your needs. Then you can host things in Gerrit with non-conflicting
namespaces and use Zuul.

>
> 2) If we don't host the code in Gerrit and just use GitHub for now
> (until we can get namespaced projects in Gerrit with OpenDev), then are
> we allowed to use the current Zuul deployment and resources to run these
> jobs? Is there any sort of infrastructure in place to get a 'yes' or
> 'no'?

The current OpenStack Infra Zuul + GitHub situation is all "third party CI"
like. Basically we test other projects where they intersect with OpenStack
(Kata is more a case of us running their tests, though, because the
intersection there is support from the OpenStack Foundation). One thing
we've learned from this is that it can be painful to fully utilize the power
of Zuul from the GitHub side if you don't buy into gating. Job configs can't
be loaded from these repos (as they can break easily without gating). I
think this experience may have influenced a perspective (at least for
myself) that I'd much rather not deal with GitHub as an opaque service day
to day. That said, perhaps a better evaluation would be through the hosting
of a project that would buy into gating from the start, so that we can
evaluate it under those circumstances.

>
> I know that the OpenDev effort is moving forward but it might be a
> little while before there's something concrete, and it'd be nice to be
> able to use some infrastructure in the meantime, without having to rely
> on other external services.

My preference here would be to host in Gerrit if we can.
Maybe it is feasible to start allowing new projects in new namespaces before
we make the OpenDev switch. (This likely needs more thought than I've just
given it, though.) I'm not completely opposed to using this as a one-off
"can we use GitHub in the way we'd like Zuul to work with GitHub" test,
especially if we have an agreement to move to Gerrit once the OpenDev switch
does happen. Mostly I don't want to have to support this long term if we
decide it isn't tenable, hence the Gerrit agreement, which we know works.

I do think it would be useful to get others' thoughts on this, particularly
those who may be running GitHub + Zuul with a gating workflow already (Paul
and Clint and Tobias?). Curious what the rest of the Infra team thinks.

Clark

From aschultz at redhat.com Fri Dec 7 21:05:23 2018
From: aschultz at redhat.com (Alex Schultz)
Date: Fri, 7 Dec 2018 14:05:23 -0700
Subject: [OpenStack-Infra] Access to CI
In-Reply-To: <99C52875-A3BB-4A74-8719-FCD6AAE91608@vexxhost.com>
References: <99C52875-A3BB-4A74-8719-FCD6AAE91608@vexxhost.com>
Message-ID: 

On Fri, Dec 7, 2018 at 12:08 PM Mohammed Naser wrote:
>
> Hi everyone,
>
> We're in the process of moving a lot of our internally used Ansible
> roles to ones which are public. As a result, we'd like to also add
> testing coverage for them; however, there are two issues that we're
> seeing.
>
> We'd love to take advantage of the existing infrastructure for CI rather
> than use some other third party CI service for those roles. Also, the
> Gerrit workflow is great and flows perfectly with everything that we do
> right now. However, having said that...
>
> 1) There seem to be some namespace issues which could block us (for
> example, openstack/ansible-role-container-registry seems to be fairly
> TripleO opinionated, even running TripleO jobs). If we wanted to write
> a similar role, we'd probably have to pick another name... or
> $some_other_solution
>

Why? If you don't already have one, can you not start with the existing role
and maybe improve it to fit all needs? This is partially why, in TripleO,
we've started making independent roles, so that we can all benefit from them
rather than keeping them within a single project repo. I don't think TripleO
would have any issues with adding new functionality or tweaking things so
long as it doesn't break backwards compatibility. This is what I was pushing
for when starting the collaboration around the OSA os_tempest role, so I
don't think namespace collisions are a problem unless you plan on importing
an existing code base.

> 2) If we don't host the code in Gerrit and just use GitHub for now
> (until we can get namespaced projects in Gerrit with OpenDev), then are
> we allowed to use the current Zuul deployment and resources to run these
> jobs? Is there any sort of infrastructure in place to get a 'yes' or
> 'no'?
>
> I know that the OpenDev effort is moving forward but it might be a
> little while before there's something concrete, and it'd be nice to be
> able to use some infrastructure in the meantime, without having to rely
> on other external services.
>
> Thanks!
> Mohammed
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

From fungi at yuggoth.org Mon Dec 10 17:33:19 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 10 Dec 2018 17:33:19 +0000
Subject: [OpenStack-Infra] [infra] OpenDev feedback forum session summary
Message-ID: <20181210173319.lnqn2ydshtmpwkjj@yuggoth.org>

Wednesday afternoon at the OpenStack Summit we met to discuss the plan for
the upcoming transition of the OpenStack Infrastructure team to an
independent effort named OpenDev. Notes were recorded at
https://etherpad.openstack.org/p/BER-opendev-feedback-and-missing-features
and form the basis of the summary which follows.

For those unfamiliar with this topic, the announcement at
http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html
provides some background and context. Much of what follows may be a
reiteration of things also covered there, so please excuse any redundancy on
my part.

To start out, we (re)announced that we have chosen a name (OpenDev) and a
domain (opendev.org), so we can more effectively plan for DNS changes for
most of the services we currently host under the "legacy" (for us)
openstack.org domain. It was also pointed out that while we expect to
maintain convenience redirects and aliases from old hostnames for all
services we reasonably can so as to minimize disruption, there will still be
some unavoidable discontinuities for users from time to time as we work
through this.

We talked for a bit about options for decentralizing GitHub repository
mirroring so that the current team no longer needs to maintain it, and how
to put it in control of people who want to manage those organizations there
for themselves instead. Doing this with a job in Zuul's post pipeline (using
encrypted secrets for authentication) was suggested as one possible means to
avoid users all maintaining their own separate automation to accomplish the
same thing.

Interest in bare metal CI nodes in nodepool was brought up again. To
reiterate, there's not really any technical reason we can't use them, more
that prior offers to donate access to Nova/Ironic-managed nodes for this
purpose never panned out. If you work for an organization which maintains a
"bare metal cloud" we could reach over the open Internet and you'd consider
carving out some of your capacity for our CI system, please do get in touch
with us!

We spent a bit of time covering user concerns about the transition to
OpenDev and what reassurances we ought to provide. For starters, our regular
contributors and root systems administrators will continue to be just as
reachable and responsive as ever via IRC and mailing lists, even if the
names of the channels and MLs may change as part of this transition.
Similarly, our operations will remain as open and transparent as they are
today... really nothing about how we maintain our systems is changing
substantively as a part of the OpenDev effort, though certainly the ways in
which we maintain our systems do still change and evolve over time as we
seek to improve them so that will of course continue to be the case.

Paul Belanger raised concerns that announcing OpenDev could result in a
flood of new requests to host more projects. Well, really, I think that's
what we hope for.
I (apparently) pointed out that even when StackForge was first created back at the beginning of 2012, there wasn't much restriction as to what we would be willing to host. As interest in OpenDev spreads to new audiences, interest in participating in its maintenance and development should too grow. That said, we acknowledge that there are some scalability bottlenecks and manual/human steps in certain aspects of new project onboarding for now, so should be very up-front with any new projects about that fact. We're also not planning for any big marketing push to seek out additional projects at this point, but are happy to talk to any who discover us and are interested in the services we offer. Next, Paul Belanger brought up the possibility of "bring your own cloud" options for projects providing CI resources themselves. While we expect nodepool to have support for tenant-specific resources in the not-too-distant future, Jim Blair and Clark Boylan agreed the large pool of generic resources we operate with now is really where we see a lot of benefit and ability to drive efficiencies of scale. Then Monty Taylor talked for a while, according to the notes in the pad, and said things about earmarked resources potentially requiring a sort of "commons tax" or... something. Jim Rollenhagen asked whether we would potentially start to test and gate projects on GitHub too rather than just our Gerrit. Clark Boylan and Jim Blair noted that the current situation where we're testing pull requests for Kata's repositories is a bit of an experiment in that direction today and the challenges we've faced suggest that, while we'll likely continue to act as a third-party CI system for some projects hosted on GitHub (we're doing that with Ansible for example), we've discovered that trying to enforce gating in code review platforms we don't also control is not likely something we'll want to continue in the long term. It came up that our earlier ideas for flattening our Git namespace to reduce confusion and minimize future repository renames is now not looking as attractive. Instead we're probably going to need to embrace an explosion of new namespaces and find better ways to cope with the pain of renames in Gerrit as projects want to move between them over time. We're planning to only run one Gerrit for simplicity, so artificially creating "tenants" in it through prefixes in repository names is really the simplest solution we have to avoid different projects stepping on one another's toes with their name choices. Then we got into some policy discussions about namespace creation. Jim Blair talked about the potential to map Git/Gerrit repository namespaces to distinct Zuul tenants, and someone (might have been me? I was fairly jet-lagged and so don't really remember) asked about who decides what the requirements are for projects to create repositories in a particular namespace. In the case of OpenStack, the answer is almost certainly the OpenStack Technical Committee or at least some group to whom they delegate that responsibility. The OpenStack TC needs to discuss fairly early what its policies are for the "openstack" namespace (whether existing unofficial projects will be allowed to remain, whether addition of new unofficial projects will be allowed there) as well as whether it wants to allow creation of multiple separate namespaces for official OpenStack projects. The suggestion of nested "deep" namespaces like openstack/nova/nova came up at this point too. 
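To put the namespace-to-tenant idea in concrete terms: a Zuul tenant entry
along those lines might look something like the sketch below. The tenant and
repository names here are hypothetical placeholders, not a proposed
configuration:

    # main.yaml (sketch)
    - tenant:
        name: kata-containers
        source:
          gerrit:
            config-projects:
              - kata-containers/project-config    # trusted job config repo
            untrusted-projects:
              - kata-containers/runtime
              - kata-containers/tests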
We also resolved that we need to look back into the project rename plugin
for Gerrit. The last time we evaluated it, there wasn't much there. We've
heard it's improved with newer Gerrit releases, but if it's still lacking we
might want to contribute to making it more effective so we can handle the
inevitable renames more easily in the future.

And finally, as happens with most forum sessions, we stopped abruptly
because we ran over and it was Kendall Nelson's turn to start getting ops
feedback for the Contributor Guide. ;)
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From cboylan at sapwetik.org Mon Dec 10 19:13:03 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 10 Dec 2018 11:13:03 -0800
Subject: [OpenStack-Infra] Infra Team Meeting Agenda for December 11, 2018
Message-ID: <1544469183.2215168.1604948576.7D3D3B3E@webmail.messagingengine.com>

Hello everyone,

My phone has reminded me this time around that I am supposed to send out the
meeting agenda a day in advance. Here it is, capturing the edits as of a few
minutes before this is sent out. I'm using the mediawiki markup; hopefully
that is readable in a text email.

* Announcements
** Holiday season for many is upon us:
*** Last meeting of 2018: December 18
*** First meeting of 2019: January 8
* Actions from last meeting
** Clarkb started OpenDev website content draft: https://review.openstack.org/622624
** ianw and dmsimard still looking for reviews on https://review.openstack.org/#/q/topic:inner-ara-results
** ianw looking for reviews on glean + networkmanager + fedora29 support: https://review.openstack.org/#/q/status:open+topic:fedora29
* Specs
** https://review.openstack.org/623033 OpenDev Repo Hosting Rework Spec
** https://review.openstack.org/607377 Storyboard Attachments
** https://review.openstack.org/581214 Anomaly Detection in CI Logs
* Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.)
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management]
*** topic:puppet-4 and topic:update-cfg-mgmt
*** Zuul as CD engine
* General topics
** OpenDev update (corvus)
*** Website content draft: https://review.openstack.org/622624
** Getting back to doing some Trusty server upgrades (clarkb 20181210)
*** https://etherpad.openstack.org/p/201808-infra-server-upgrades-and-cleanup
*** If you can grab a host off the list and work on it, that is greatly appreciated.
** Infra root Github user and auth practice review (clarkb 20181210)
*** Should we enforce the second account rule, and should the account we use be required to use 2fa?
*** https://review.openstack.org/#/c/620702/ and https://review.openstack.org/#/c/620703/
* Open discussion

From jonathan at openstack.org Tue Dec 11 00:39:44 2018
From: jonathan at openstack.org (Jonathan Bryce)
Date: Mon, 10 Dec 2018 18:39:44 -0600
Subject: [OpenStack-Infra] Cross-community/generic mailing lists
Message-ID: 

Hi everyone,

I was having a conversation with some people who are working across
multiple communities involved in virtualization and container security and
they were interested in having a higher level mailing list for open
discussions.
It doesn't necessarily make sense to tie it to any particular project
mailing list, and I was wondering how others on the OpenDev team felt about
creating discussion lists along these lines on lists.opendev.org. This isn't
the first time we've seen this use case, and it seems like it could be a
nice service to a number of communities.

Thoughts?

Jonathan

From iwienand at redhat.com Tue Dec 11 01:16:26 2018
From: iwienand at redhat.com (Ian Wienand)
Date: Tue, 11 Dec 2018 12:16:26 +1100
Subject: [OpenStack-Infra] [Release-job-failures] Release of openstack-infra/jenkins-job-builder failed
In-Reply-To: <6ef85559-4ea7-64e0-b633-99236b752e82@openstack.org>
References: <6ef85559-4ea7-64e0-b633-99236b752e82@openstack.org>
Message-ID: <20181211011626.GA4286@fedora19.localdomain>

On Fri, Dec 07, 2018 at 12:02:00PM +0100, Thierry Carrez wrote:
> Looks like the readthedocs integration for JJB is misconfigured, causing
> the trigger-readthedocs-webhook to fail?

Thanks for pointing this out. After investigation it doesn't appear to be
misconfigured in any way, but it seems that RTD have started enforcing the
need for csrf tokens for the POST we use to notify it to build. This
appears to be new behaviour, and possibly incorrectly applied upstream (I'm
struggling to think why it's necessary here). I've filed
https://github.com/rtfd/readthedocs.org/issues/4986 which hopefully can
open a conversation about this. Let's see what comes of that...

*If* we have no choice but to move to token-based authentication, I did
write the role to handle that. But it involves every project maintaining
its own secrets, and us having to rework the jobs, which is not difficult
but also not trivial. So let's hope it doesn't come to that ...

-i

From joshua.hesketh at gmail.com Tue Dec 11 01:54:45 2018
From: joshua.hesketh at gmail.com (Joshua Hesketh)
Date: Tue, 11 Dec 2018 12:54:45 +1100
Subject: [OpenStack-Infra] [infra] OpenDev feedback forum session summary
In-Reply-To: <20181210173319.lnqn2ydshtmpwkjj@yuggoth.org>
References: <20181210173319.lnqn2ydshtmpwkjj@yuggoth.org>
Message-ID: 

Thank you for the update, it's much appreciated by those who couldn't make
it :-)

On Tue, Dec 11, 2018 at 4:34 AM Jeremy Stanley wrote:
> Wednesday afternoon at the OpenStack Summit we met to discuss the
> plan for the upcoming transition of the OpenStack Infrastructure
> team to an independent effort named OpenDev. Notes were recorded at
> https://etherpad.openstack.org/p/BER-opendev-feedback-and-missing-features
> and form the basis of the summary which follows.
>
> For those unfamiliar with this topic, the announcement at
> http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html
> provides some background and context. Much of what follows may be a
> reiteration of things also covered there, so please excuse any
> redundancy on my part.
>
> To start out, we (re)announced that we have chosen a name (OpenDev)
> and a domain (opendev.org), so we can more effectively plan for DNS
> changes for most of the services we currently host under the
> "legacy" (for us) openstack.org domain. It was also pointed out that
> while we expect to maintain convenience redirects and aliases from
> old hostnames for all services we reasonably can so as to minimize
> disruption, there will still be some unavoidable discontinuities for
> users from time to time as we work through this.
> > We talked for a bit about options for decentralizing GitHub > repository mirroring so that the current team no longer needs to > maintain it, and how to put it in control of people who want to > manage those organizations there for themselves instead. Doing this > with a job in Zuul's post pipeline (using encrypted secrets for > authentication) was suggested as one possible means to avoid users > all maintaining their own separate automation to accomplish the same > thing. > > Interest in bare metal CI nodes in nodepool was brought up again. To > reiterate, there's not really any technical reason we can't use > them, more that prior offers to donate access to Nova/Ironic-managed > nodes for this purpose never panned out. If you work for an > organization which maintains a "bare metal cloud" we could reach > over the open Internet and you'd consider carving out some of your > capacity for our CI system, please do get in touch with us! > > We spent a bit of time covering user concerns about the transition > to OpenDev and what reassurances we ought to provide. For starters, > our regular contributors and root systems administrators will > continue to be just as reachable and responsive as ever via IRC and > mailing lists, even if the names of the channels and MLs may change > as part of this transition. Similarly, our operations will remain as > open and transparent as they are today... really nothing about how > we maintain our systems is changing substantively as a part of the > OpenDev effort, though certainly the ways in which we maintain our > systems do still change and evolve over time as we seek to improve > them so that will of course continue to be the case. > > Paul Belanger raised concerns that announcing OpenDev could result > in a flood of new requests to host more projects. Well, really, I > think that's what we hope for. I (apparently) pointed out that even > when StackForge was first created back at the beginning of 2012, > there wasn't much restriction as to what we would be willing to > host. As interest in OpenDev spreads to new audiences, interest in > participating in its maintenance and development should too grow. > That said, we acknowledge that there are some scalability > bottlenecks and manual/human steps in certain aspects of new project > onboarding for now, so should be very up-front with any new projects > about that fact. We're also not planning for any big marketing push > to seek out additional projects at this point, but are happy to talk > to any who discover us and are interested in the services we offer. > > Next, Paul Belanger brought up the possibility of "bring your own > cloud" options for projects providing CI resources themselves. While > we expect nodepool to have support for tenant-specific resources in > the not-too-distant future, Jim Blair and Clark Boylan agreed the > large pool of generic resources we operate with now is really where > we see a lot of benefit and ability to drive efficiencies of scale. > Then Monty Taylor talked for a while, according to the notes in the > pad, and said things about earmarked resources potentially requiring > a sort of "commons tax" or... something. > > Jim Rollenhagen asked whether we would potentially start to test and > gate projects on GitHub too rather than just our Gerrit. 
Clark > Boylan and Jim Blair noted that the current situation where we're > testing pull requests for Kata's repositories is a bit of an > experiment in that direction today and the challenges we've faced > suggest that, while we'll likely continue to act as a third-party CI > system for some projects hosted on GitHub (we're doing that with > Ansible for example), we've discovered that trying to enforce gating > in code review platforms we don't also control is not likely > something we'll want to continue in the long term. > > It came up that our earlier ideas for flattening our Git namespace > to reduce confusion and minimize future repository renames is now > not looking as attractive. Instead we're probably going to need to > embrace an explosion of new namespaces and find better ways to cope > with the pain of renames in Gerrit as projects want to move between > them over time. We're planning to only run one Gerrit for > simplicity, so artificially creating "tenants" in it through > prefixes in repository names is really the simplest solution we have > to avoid different projects stepping on one another's toes with > their name choices. > > Then we got into some policy discussions about namespace creation. > Jim Blair talked about the potential to map Git/Gerrit repository > namespaces to distinct Zuul tenants, and someone (might have been > me? I was fairly jet-lagged and so don't really remember) asked > about who decides what the requirements are for projects to create > repositories in a particular namespace. In the case of OpenStack, > the answer is almost certainly the OpenStack Technical Committee or > at least some group to whom they delegate that responsibility. The > OpenStack TC needs to discuss fairly early what its policies are for > the "openstack" namespace (whether existing unofficial projects will > be allowed to remain, whether addition of new unofficial projects > will be allowed there) as well as whether it wants to allow creation > of multiple separate namespaces for official OpenStack projects. The > suggestion of nested "deep" namespaces like openstack/nova/nova came > up at this point too. > > We also resolved that we need to look back into the project rename > plugin for Gerrit. The last time we evaluated it, there wasn't much > there. We've heard it's improved with newer Gerrit releases, but if > it's still lacking we might want to contribute to making it more > effective so we can handle the inevitable renames more easily in the > future. > > And finally, as happens with most forum sessions, we stopped > abruptly because we ran over and it was Kendall Nelson's turn to > start getting ops feedback for the Contributor Guide. ;) > -- > Jeremy Stanley > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Dec 11 02:16:36 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 10 Dec 2018 21:16:36 -0500 Subject: [OpenStack-Infra] Cross-community/generic mailing lists In-Reply-To: <85B25A98-EE59-44D8-A394-A5CD498FC1FF@vexxhost.com> References: <85B25A98-EE59-44D8-A394-A5CD498FC1FF@vexxhost.com> Message-ID: <5E57B029-C7C9-4678-87F2-93C998076E5A@vexxhost.com> Resending because I’m silly and I didn’t hit reply all. 
> On Dec 10, 2018, at 9:15 PM, Mohammed Naser wrote:
>
>> On Dec 10, 2018, at 7:39 PM, Jonathan Bryce wrote:
>>
>> Hi everyone,
>>
>> I was having a conversation with some people who are working across
>> multiple communities involved in virtualization and container security
>> and they were interested in having a higher level mailing list for open
>> discussions. It doesn't necessarily make sense to tie it to any
>> particular project mailing list, and I was wondering how others on the
>> OpenDev team felt about creating discussion lists along these lines on
>> lists.opendev.org. This isn't the first time we've seen this use case,
>> and it seems like it could be a nice service to a number of communities.
>
> I think this is actually really useful. At the Denver PTG (round 2),
> Clark brought up the idea of having public clouds (as an example of an
> operator; this happened during the public cloud WG sessions) find a way
> to discuss things involving core virtualization infrastructure together
> with upstream.
>
> One of the issues that were discussed at the time was the ability to
> bring stable nested virtualization. Kashyap at the time mentioned that he
> was also happy to work together to help improve that. At the time, we
> just had no real place to house that discussion, but this seems *perfect*
> for OpenDev.
>
>> Thoughts?
>
> I'm for it.
>
>> Jonathan
>> _______________________________________________
>> OpenStack-Infra mailing list
>> OpenStack-Infra at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>

From shivashree.vaidhyamsubramanian at mail.mcgill.ca Tue Dec 11 04:56:29 2018
From: shivashree.vaidhyamsubramanian at mail.mcgill.ca (Vysali Vaidhyam Subramanian)
Date: Tue, 11 Dec 2018 04:56:29 +0000
Subject: [OpenStack-Infra] [CI] Gathering Results for Research
Message-ID: 

Hello,

I am a grad student and I am currently studying flaky tests. As a part of my
study, I've been examining the check and the gate jobs in the OpenStack
projects. I have been trying to identify why developers run rechecks and how
often a developer running a recheck helps in the identification of a flaky
test.

To identify how often a recheck points to a flaky test, I need the test
results of each of the rechecks. However, I have not been able to get this
information from Gerrit for each recheck comment. I was wondering if the
history of the jobs run against a recheck comment is available and if it can
be retrieved.

It would be great if I could get some pointers :)

Thanks,
Shivashree.

From cboylan at sapwetik.org Tue Dec 11 18:00:18 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 11 Dec 2018 10:00:18 -0800
Subject: [OpenStack-Infra] [CI] Gathering Results for Research
In-Reply-To: 
References: 
Message-ID: <1544551218.1450561.1606104728.34EEEAB8@webmail.messagingengine.com>

On Mon, Dec 10, 2018, at 8:56 PM, Vysali Vaidhyam Subramanian wrote:
> Hello,
>
> I am a grad student and I am currently studying flaky tests. As a part
> of my study, I've been examining the check and the gate jobs in the
> OpenStack projects. I have been trying to identify why developers run
> rechecks and how often a developer running a recheck helps in the
> identification of a flaky test.
>
> To identify how often a recheck points to a flaky test, I need the test
> results of each of the rechecks. However, I have not been able to get
> this information from Gerrit for each recheck comment.
> I was wondering if the history of the jobs run against a recheck
> comment is available and if it can be retrieved.
>
> It would be great if I could get some pointers :)

This information is known to Zuul, but I don't think Zuul currently records
a flag to indicate that results are due to some human-triggered retry
mechanism. One approach could be to add this functionality to Zuul and rely
on the Zuul builds db for that data.

Another approach that doesn't require updates to Zuul is to parse Gerrit
comments and flag things yourself. Check jobs only run when a new patchset
is pushed or when rechecked. This means the first results for a patchset are
the initial set. Any subsequent results for that patchset from the check
pipeline (indicated in the comment itself) are the result of rechecks. The
gate is a bit more complicated because shared gate queues can cause a
change's tests to be rerun if a related change is rechecked. You can
probably infer whether the recheck was on this particular change by looking
for previous recheck comments without results.

Unfortunately I don't know how clean the data is. I believe the Zuul
comments have been very consistent over time, but don't know that for sure.
You may want to start with both things: the first to make future data easier
to consume, and the second to have a way to get at the preexisting data.

Some other thoughts: our job log retention time is quite short due to disk
space constraints (~4 weeks?). While the Gerrit comments go back many years,
if you want to know which specific test case a tempest job failed on, you'll
only be able to get that data for the last month or so.

We also try to index our job logs in elasticsearch and expose them via a
kibana web ui and a subset of the elasticsearch API at
http://logstash.openstack.org. More details at
https://docs.openstack.org/infra/system-config/logstash.html. We are happy
for people to use that for additional insight. Just please try to be nice to
our cluster, and we'd love it if you shared insights/results with us too.

Finally, we do some tracking of what we think are reasons for rechecks with
our "elastic-recheck" tool. It builds on top of the elasticsearch cluster
above, using bug fingerprint queries to track the occurrence of known
issues. http://status.openstack.org/elastic-recheck/ renders graphs, and the
source repo for elastic-recheck has all the query fingerprints. Again, feel
free to use this tool if it is helpful, but we'd love insights/feedback/etc
if you end up learning anything interesting with it.

Hope this was useful,
Clark
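As a rough illustration of the comment-parsing approach Clark outlines, a
short Python sketch follows. It assumes you have already fetched a change's
messages from the Gerrit REST API in chronological order, and the result
pattern is a guess at Zuul's comment format rather than a tested one:

    import re

    # Heuristic: the first check-pipeline result on a patchset is the
    # "initial" run; any later check result on the same patchset follows
    # a recheck. The pattern below is an assumed format, not verified.
    RESULT_RE = re.compile(r'^Patch Set \d+:.*\bcheck pipeline\b', re.DOTALL)

    def classify_check_results(messages):
        """messages: ChangeMessageInfo dicts from the Gerrit REST API,
        each with 'message' and '_revision_number' keys."""
        seen = set()
        labels = []
        for msg in messages:
            if not RESULT_RE.match(msg['message']):
                continue
            ps = msg['_revision_number']
            labels.append((ps, 'recheck' if ps in seen else 'initial'))
            seen.add(ps)
        return labels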
From shivashree.vaidhyamsubramanian at mail.mcgill.ca Wed Dec 12 17:53:34 2018
From: shivashree.vaidhyamsubramanian at mail.mcgill.ca (Vysali Vaidhyam Subramanian)
Date: Wed, 12 Dec 2018 17:53:34 +0000
Subject: [OpenStack-Infra] [CI] Gathering Results for Research
Message-ID: <50320422-7600-4345-8F78-6BF889AB7F8A@mail.mcgill.ca>

Hey Clark,

Thank you very much for taking the time out to send a detailed reply. This
is definitely helpful and confirms what we know so far about the OpenStack
CI infrastructure. We'll start with the data from Gerrit and build upon that
as we go.

Thank you once again!

-Shivashree.

> Clark wrote:
> This information is known to Zuul, but I don't think Zuul currently
> records a flag to indicate that results are due to some human-triggered
> retry mechanism. One approach could be to add this functionality to Zuul
> and rely on the Zuul builds db for that data.
>
> Another approach that doesn't require updates to Zuul is to parse Gerrit
> comments and flag things yourself. Check jobs only run when a new
> patchset is pushed or when rechecked. This means the first results for a
> patchset are the initial set. Any subsequent results for that patchset
> from the check pipeline (indicated in the comment itself) are the result
> of rechecks.
> The gate is a bit more complicated because shared gate queues can cause
> a change's tests to be rerun if a related change is rechecked. You can
> probably infer whether the recheck was on this particular change by
> looking for previous recheck comments without results.
> Unfortunately I don't know how clean the data is. I believe the Zuul
> comments have been very consistent over time, but don't know that for
> sure. You may want to start with both things: the first to make future
> data easier to consume, and the second to have a way to get at the
> preexisting data.
[…]
> Hope this was useful,
> Clark

> On Mon, Dec 10, 2018, at 8:56 PM, Vysali Vaidhyam Subramanian wrote:
> > Hello,
> >
> > I am a grad student and I am currently studying flaky tests. As a part
> > of my study, I've been examining the check and the gate jobs in the
> > OpenStack projects. I have been trying to identify why developers run
> > rechecks and how often a developer running a recheck helps in the
> > identification of a flaky test.
> >
> > To identify how often a recheck points to a flaky test, I need the
> > test results of each of the rechecks. However, I have not been able to
> > get this information from Gerrit for each recheck comment. I was
> > wondering if the history of the jobs run against a recheck comment is
> > available and if it can be retrieved.
> >
> > It would be great if I could get some pointers :)

From fungi at yuggoth.org Thu Dec 13 20:39:15 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 13 Dec 2018 20:39:15 +0000
Subject: [OpenStack-Infra] Cross-community/generic mailing lists
In-Reply-To: 
References: 
Message-ID: <20181213203915.hlfemc2c4wisl4lb@yuggoth.org>

On 2018-12-10 18:39:44 -0600 (-0600), Jonathan Bryce wrote:
> I was having a conversation with some people who are working
> across multiple communities involved in virtualization and
> container security and they were interested in having a higher
> level mailing list for open discussions. It doesn't necessarily
> make sense to tie it to any particular project mailing list, and I
> was wondering how others on the OpenDev team felt about creating
> discussion lists along these lines on lists.opendev.org. This
> isn't the first time we've seen this use case, and it seems like
> it could be a nice service to a number of communities.
>
> Thoughts?

As a straw man, I've proposed https://review.openstack.org/625096 to add a
lists.opendev.org Mailman site to our existing mailing list server.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From francois.magimel at alumni.enseeiht.fr Sun Dec 16 22:10:26 2018
From: francois.magimel at alumni.enseeiht.fr (François Magimel)
Date: Sun, 16 Dec 2018 23:10:26 +0100
Subject: Re: [OpenStack-Infra] Hosting OpenStack Summit (among other) videos
In-Reply-To: 
References: 
Message-ID: <127a7c27-5730-f763-7552-f9c75360a649@alumni.enseeiht.fr>
On Wed, 5 Dec 2018 at 19:39, François Magimel wrote:
> Hello,
>
> My name is François and I would like to suggest a new service for the
> OpenStack Foundation: video hosting. Today, videos are hosted on
> YouTube. But as OpenStack promotes free and open source software, I
> would like to help install an alternative for the Foundation: PeerTube
> [1], a decentralized video hosting network based on libre software :).
>
> For more information, PeerTube is promoted by Framasoft [2] (a French
> non-profit organisation) and is under the AGPLv3 licence. The first
> version is out and it is already used by some organisations :). You can
> find some examples [3].
>
> So, some questions have arisen on IRC #openstack-infra and in my mind:
> - what is the licence of the summit videos?
> - does the Foundation have some resources to host videos?
>
> What do you think of that idea?
> (Of course, I am ready to help with or do the installation, and to get
> the metadata of all the videos and put them into the PeerTube instance
> :)).
>
> [1] https://joinpeertube.org/en/
> [2] https://framasoft.org/en/
> [3] https://framatube.org/en-US

Hello,

I've just proposed a spec for this service:
https://review.openstack.org/#/c/625450/. Feel free to review it :).

François
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From cboylan at sapwetik.org Mon Dec 17 19:11:40 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 17 Dec 2018 11:11:40 -0800
Subject: [OpenStack-Infra] Team meeting agenda for December 18, 2018
Message-ID: <1545073900.1758800.1611707400.6F3F50B4@webmail.messagingengine.com>

Here is tomorrow's team meeting agenda. Hope to see you there. This will be
our last meeting of the year and we'll be back January 8, 2019.

* Announcements
** Holiday season for many is upon us:
*** Last meeting of 2018: December 18
*** First meeting of 2019: January 8
* Actions from last meeting
** ianw and dmsimard still looking for reviews on https://review.openstack.org/#/q/topic:inner-ara-results
** ianw looking for reviews on glean + networkmanager + fedora29 support: https://review.openstack.org/#/q/status:open+topic:fedora29
* Specs approval
** https://review.openstack.org/623033 OpenDev Repo Hosting Rework Spec
** https://review.openstack.org/607377 Storyboard Attachments
** https://review.openstack.org/581214 Anomaly Detection in CI Logs
* Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.)
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management]
*** topic:puppet-4 and topic:update-cfg-mgmt
*** Zuul as CD engine
** OpenDev
*** Clarkb has changes up to publish opendev website: https://review.openstack.org/#/c/625671/
* General topics
** Shared Github admin account (with 2fa) (clarkb 20181218)
*** https://review.openstack.org/#/c/624531/
** Pre holiday bug fixing (clarkb 20181218)
*** https://review.openstack.org/625350 Fix base-server play. Related ansible bug: https://github.com/ansible/ansible/issues/49969
*** https://review.openstack.org/625095 Change pypi cache proxy behavior to cache indexes
** Docker failing when mirror contains a path element (frickler/mgoddard 20181218)
*** Breaks our zuul-quick-start job: http://logs.openstack.org/55/624855/3/check/zuul-quick-start/00d956c/job-output.txt.gz#_2018-12-17_12_36_58_996620
*** http://paste.openstack.org/show/737483/
*** Upstream issue doesn't look like it is going to be fixed any time soon: https://github.com/moby/moby/issues/36598
*** kolla-ansible is also suffering from this: http://logs.openstack.org/89/568289/2/check/kolla-ansible-ubuntu-source-ceph/6983d0d/primary/logs/system_logs/docker.txt.gz
*** Proposed workaround: Switch to using the pathless :8082 variant of our mirror unconditionally - https://review.openstack.org/625596
* Open discussion
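A note on the Docker mirror workaround in the agenda above: as the linked
moby issue describes, Docker's registry-mirrors setting does not support
URLs with a path component, which is why the pathless :8082 form is needed.
With a hypothetical mirror hostname, the daemon-side configuration would
look roughly like:

    # /etc/docker/daemon.json (sketch)
    {
      "registry-mirrors": ["http://mirror.regionone.example.opendev.org:8082"]
    }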
From wuchongyao at outlook.com Wed Dec 19 05:11:43 2018
From: wuchongyao at outlook.com (wu chongyao)
Date: Wed, 19 Dec 2018 05:11:43 +0000
Subject: [OpenStack-Infra] MacroSAN storage CI
Message-ID: 

I am setting up an external OpenStack testing system. The attachment is my
public key.

The following is my account information:
Username: MacroSAN
Fullname: MacroSAN storage CI
Email Address: wuchongyao at macrosan.com

________________________________
wuchongyao at outlook.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: id_rsa
Type: application/octet-stream
Size: 1675 bytes
Desc: id_rsa
URL: 

From fungi at yuggoth.org Sun Dec 23 20:00:59 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sun, 23 Dec 2018 20:00:59 +0000
Subject: [OpenStack-Infra] [infra][storyboard] Project Update/Some New Things
In-Reply-To: <20180910143449.ycbttjla2tn4ysql@yuggoth.org>
References: <1536586998.2089.20.camel@sotk.co.uk> <20180910143449.ycbttjla2tn4ysql@yuggoth.org>
Message-ID: <20181223200059.lco557g4atqy2hn5@yuggoth.org>

On 2018-09-10 14:34:49 +0000 (+0000), Jeremy Stanley wrote:
> On 2018-09-10 14:43:18 +0100 (+0100), Adam Coldrick wrote:
[...]
> > # Finding stories from a task ID
> >
> > It is now possible to navigate to a story given just a task ID,
> > if for whatever reason that's all the information you have
> > available. A link like
> >
> > https://storyboard.openstack.org/#!/task/12389
> >
> > will work. This will redirect to the story containing the task,
> > and is the first part of work to support linking directly to an
> > individual task in a story.
[...]
>
> As an aside, I think this makes it possible now for us to start
> hyperlinking Task footers in commit messages within the Gerrit
> change view. I'll try and figure out what we need to adjust in our
> Gerrit commentlink and its-storyboard plugin configs to make that
> happen.

As of Friday (2018-12-21) our Gerrit instance at
https://review.openstack.org/ has started hyperlinking task footers in
commit messages, leveraging the above feature (the configuration change for
it merged months ago, but Gerrit has been so stable lately that we've not
gotten around to restarting it for that to take effect until now). At this
point you can omit the story footer if you have a task footer, since all
the story footer has been providing is a hyperlink anyway.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 
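For reference, the kind of Gerrit commentlink stanza that produces this
hyperlinking looks roughly like the sketch below. The section name and match
pattern here are illustrative guesses, not the actual gerrit.config
contents:

    # gerrit.config (sketch)
    [commentlink "storyboard-task"]
      match = "[Tt]ask:\\s*#?(\\d+)"
      link = "https://storyboard.openstack.org/#!/task/$1"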