From roman.gorshunov at att.com Fri Jan 3 16:29:53 2020 From: roman.gorshunov at att.com (Gorshunov, Roman) Date: Fri, 3 Jan 2020 16:29:53 +0000 Subject: [OpenStack-Infra] Touching base; Airship CI cluster In-Reply-To: References: Message-ID: <8bdf26696d2a407fa7f58fc47bd9c6fa@att.com> Hello Christopher, Clark, OpenStack-Infra, Thank you for your help. Moving the conversation to the OpenStack-Infra mailing list. Answers: The OS images available are OK for us. We primarily target Ubuntu 18.04 as a base image. > For sizing our current test VMs are 8vcpu, 8GB RAM, 80GB of disk. This is the VM we would be running: 64 GB RAM, 200 GB block storage, 16 vCPUs, plus nested virtualization. > Is it possible that AirShip could run these test jobs on 3 node multinode jobs using our existing nested-virt flavors? No, we can’t. We really need to run nested VMs inside a very big VM to simulate end-to-end baremetal deployments (with PXE booting, etc.). Nested VMs would be up to 16GB RAM. > Airship could choose to accept the risk of having a single provider, but I would caution against this. We currently have one provider and plan to have two providers soon. > The other policy thing we should consider is if these resources would be available to the large pool of users or if they should be considered airship specific. We would prefer them to be Airship-specific, primarily because only we would be running those huge VMs, and other providers do not currently have this option for us to utilize their hardware. Please let’s set up a call at a time you find convenient. -- Roman Gorshunov Principal, Systems Engineering From: Christopher Price Sent: Thursday, December 12, 2019 11:00 AM To: Gorshunov, Roman Cc: paye600 at gmail.com Subject: Re: Touching base Hi Roman, I have been in touch with a few people and it seems there is a way to solve this across the groups; however, there is some thinking and effort involved in making it happen.
We should set up a call with the AirShip team and some of the infra team to discuss and align on objectives; however, I think it’s first worth having a conversation in the AirShip community to understand what is needed. Here is a basic rundown from Clark; sorry for the long mail with Q&A included: On 12/6/19 4:55 AM, Christopher Price wrote: > Hey Clark, > > Reaching out to chat about getting some of the more unique AirShip gating use-cases set up in openinfra. > > Some text below in my question to Thierry for context, but to summarize: > - Airship needs to do "huge VM" based testing with nested virtualization enabled for gating in opendev > - A few companies are willing to put up some hardware to support that - maintenance and support from the companies > - I'd like to bring the "what does infra expect" topic forward to see what needs to happen and what we need to ensure to get the gating in place for these items > > I think the plan is to use something like airskiff or airship-in-a-bottle as a method of doing deploy testing for gating. > Requires an Ubuntu 16.04 VM (minimum 4vCPU/20GB RAM/32GB disk) to run. > https://airship-treasuremap.readthedocs.io/en/latest/airskiff.html > https://github.com/airshipit/airship-in-a-bottle We currently build images for Ubuntu Xenial, Ubuntu Bionic, CentOS 7, CentOS 8, Fedora 29, Fedora 30, opensuse 15, Debian Stretch, Debian Buster, and more. The image here shouldn't be an issue. For sizing, our current test VMs are 8vcpu, 8GB RAM, 80GB of disk. Zuul supports native multinode testing.
What we've tried to do is push towards running tests on distributed multinode setups rather than singular large VMs. A major advantage of this is that in many cases what we are producing is distributed software that needs to operate in a distributed manner, and we are able to test that effectively. Nested virt is likely the biggest hurdle to sort out. Despite it finally being enabled by default on very recent Linux kernels, what we see in production is much older and flakier. The typical nested virt experience from our existing cloud providers is that it will work for some time, then a kernel update will get pushed out. The "middle" VMs will get this update first and start crashing until our cloud providers update the base hypervisor kernels as well. What we have done to try and improve nested virt reliability is add nested-virt-specific labels to Nodepool. Teams running tests that trigger these nested virt crashes can use these labels to actively work with our cloud providers to debug and fix these issues. Is it possible that AirShip could run these test jobs on 3 node multinode jobs using our existing nested-virt flavors? > > I'd like to set up a call amongst stakeholders or on an AirShip dev call in the not too distant future to outline: >             What do hardware hosting companies need to provide Our "contributing resources" document, https://docs.openstack.org/infra/system-config/contribute-cloud.html, should give a good overview of what is required. The short version is an OpenStack cloud where we can run some longer-lived cloud resources as well as the test nodes themselves.
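As a concrete illustration of the labels-plus-multinode approach described above, a Zuul job can request several nodes that all come from a nested-virt flavor instead of one very large VM. The following is only a sketch: the label, nodeset, and job names are invented for illustration and are not the actual opendev configuration.

```yaml
# Hypothetical Zuul configuration (names are illustrative, not the real
# opendev ones): a three-node multinode job where every node is requested
# via a nested-virt label rather than one oversized VM.
- nodeset:
    name: airship-three-node-nested-virt
    nodes:
      - name: primary
        label: nested-virt-ubuntu-bionic
      - name: secondary-1
        label: nested-virt-ubuntu-bionic
      - name: secondary-2
        label: nested-virt-ubuntu-bionic

- job:
    name: airship-deploy-multinode
    parent: multinode
    nodeset: airship-three-node-nested-virt
```

Because the label is the only coupling point, a provider that fixes its nested-virt kernels can serve the same label without any job changes.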
>             Are there any specific needs on the infra team (policies or changes) that we should be aware of We require publicly addressable IP addresses for each node because we run a globally distributed control plane. These addresses can be IPv6 addresses, but we do need at least one IPv4 address for the in-cloud mirror node. Policy-wise, for the OpenStack project we've maintained that, for gating, resources need to come from at least two different providers. From our experience clouds come and go (due to planned and unplanned outages), and being able to fall back on redundant resources is particularly important for gating. Airship could choose to accept the risk of having a single provider, but I would caution against this. The other policy thing we should consider is whether these resources would be available to the large pool of users or if they should be considered airship specific. In the past we've tried to do project-specific resources and they tended to be less reliable. This doesn't necessarily make it a bad idea, but I think having "pressure" from the greater whole helps ensure things run smoothly and problems are caught quickly. Assuming AirShip is able to test today as described above, we could add these new resources to the pool to expand quotas and provide more resources to Airship (and possibly the whole). >             What is required by the AirShip devs to ensure they can direct their jobs to the right machines etc. Nodepool would provide node labels that identify the resources and Zuul job configuration would consume those labels. Configuration is how we express this. > > Hopefully we can have something in place soonish, as the AirShip team wants to be in Beta for their 2.0 release early next year. > > Any help you can provide would be appreciated. I think it would be helpful to get the discussion onto the infra mailing list, openstack-infra at lists.openstack.org, as other team members may have input as well.
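The Nodepool side of the label mechanism mentioned above would look roughly like the following. This is only a sketch: the provider, cloud, flavor, and label names are invented for illustration and do not reflect any real opendev configuration.

```yaml
# Hypothetical Nodepool snippet (all names invented): the donor cloud's
# large nested-virt flavor is exposed as a label, which Zuul jobs can
# then request by name.
labels:
  - name: airship-nested-virt-xxlarge
    min-ready: 0

providers:
  - name: airship-donor-cloud
    cloud: airship-donor
    diskimages:
      - name: ubuntu-bionic
    pools:
      - name: main
        max-servers: 4
        labels:
          - name: airship-nested-virt-xxlarge
            diskimage: ubuntu-bionic
            flavor-name: nested-virt-16vcpu-64gb
```

In this scheme the donated hardware is "airship specific" only in the sense that the label maps to a flavor no other cloud offers; nothing would prevent other tenants from requesting it if policy allowed.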
Happy to do a call as well (perhaps we can coordinate that on the mailing list?). I'll be around the next two weeks pre holidays. Major time conflicts are Mondays 1600-1700UTC and Tuesdays 1600-1700UTC and 1900-2000UTC, and my working hours are generally 1600-0100UTC, but I could do a one-off 1500UTC call if that helps. I have not been able to follow up as my weeks have become strangled with end of year activities. Your help on determining what to do next would be appreciated, and if you want to move forward with this information feel free to do so; this doesn’t have to pivot on me. 😊 / Chris ... From cboylan at sapwetik.org Mon Jan 6 17:36:10 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 06 Jan 2020 09:36:10 -0800 Subject: [OpenStack-Infra] Touching base; Airship CI cluster In-Reply-To: <8bdf26696d2a407fa7f58fc47bd9c6fa@att.com> References: <8bdf26696d2a407fa7f58fc47bd9c6fa@att.com> Message-ID: On Fri, Jan 3, 2020, at 8:29 AM, Gorshunov, Roman wrote: > Hello Christopher, Clark, OpenStack-Infra, > > Thank you for your help. > > Moving conversation to OpenStack-Infra mailing list. > > Answers: > OS images available are OK for us. We primarily target Ubuntu 18.04 as > a base image. > > > For sizing our current test VMs are 8vcpu, 8GB RAM, 80GB of disk. > This is the VM we would be running: 64 GB RAM, 200 GB block storage, 16 > vCPUs, + nested virtualization > > > Is it possible that AirShip could run these test jobs on 3 > node multinode jobs using our existing nested-virt flavors? > No, we can’t. We really need to run nested VMs inside a very big VM to > simulate end-to-end baremetal deployments (with PXE booting, etc). > Nested VMs would be up to 16GB RAM. As mentioned in the earlier email, nested virt is one of my big concerns with this, as it hasn't been very stable for us in the past. Coupled with potentially having a single provider of CI resources, this could lead to very flaky testing with no alternatives.
Thinking out loud here: could you test the PXE provisioning independently of the configuration management and workload of the PXE-provisioned resources? Then you could avoid nested virt when PXE booting and use qemu emulation, which should be more reliable. I believe this is how Ironic does its testing. Then we can take advantage of our existing ability to run multinode testing to check configuration management and workloads, assuming the PXE boots succeeded (because this is tested separately). I expect this would give you more reliable testing over time, as you avoid the nested virt problem entirely. This may also make the test environment fit into existing resources, allowing you to take advantage of multi-cloud availability should problems arise in any single test node provider cloud. > > > Airship could choose to accept the risk of having a single provider, but I would caution against this. > We currently have one provider, planning to have two providers soon. > > > The other policy thing we should consider is if these resources would > be available to the large pool of users or if they should be > considered airship specific. > We would prefer them to be Airship-specific, primarily because only we > would be running those huge VMs, and other providers do not have this > option at the moment for us to be able to utilize their hardware. > > Please, let’s set up a call for us at the time you would find convenient. Does 16:00UTC Thursday January 9, 2020 work for you? If so, we should be able to use our asterisk server, https://wiki.openstack.org/wiki/Infrastructure/Conferencing, room 6001. Let me know if you cannot connect to that easily and we can set up a jitsi meet (https://meet.jit.si/) instead.
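The qemu-emulation fallback suggested above boils down to not hard-coding KVM in the test jobs. A minimal sketch of how a job might do this at runtime (an assumption for illustration, not Airship's or Ironic's actual tooling) is to probe for /dev/kvm and fall back to pure emulation:

```python
import os


def pick_libvirt_domain_type():
    """Choose a libvirt domain type for the VMs a PXE-boot test creates.

    Hypothetical helper (not Airship's or Ironic's actual code): use KVM
    acceleration when /dev/kvm is present and writable, and fall back to
    pure QEMU emulation otherwise. Emulation is slower, but it sidesteps
    the nested-virt kernel crashes described earlier in the thread.
    """
    if os.path.exists("/dev/kvm") and os.access("/dev/kvm", os.W_OK):
        return "kvm"
    return "qemu"


# The result could be passed along when defining the VMs to be PXE
# booted, e.g. as a --virt-type style option to the provisioning tool.
print(pick_libvirt_domain_type())
```

Run on a standard cloud VM without nested virt enabled, this prints `qemu` and the PXE test still proceeds, just more slowly.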
> > -- > Roman Gorshunov > Principal, Systems Engineering > > From: Christopher Price > Sent: Thursday, December 12, 2019 11:00 AM > To: Gorshunov, Roman > Cc: paye600 at gmail.com > Subject: Re: Touching base > > Hi Roman, > > I have been in touch with a few people and it seems there is a way to > solve this across the groups, however there is some thinking and effort > involved in making it happen. > We should set up a call with the AirShip team and some of the infra team to > discuss and align on objectives, however I think first it’s worth > having a conversation in the AirShip community to understand what is > needed. Here is a basic rundown from Clark, sorry for the long mail > with Q&A included: > > On 12/6/19 4:55 AM, Christopher Price wrote: > > Hey Clark, > > > > Reaching out to chat about getting some of the more unique AirShip gating use-cases set up in openinfra. > > > > Some text below in my question to Thierry for context, but to summarize: > > - Airship needs to do "huge VM" based testing with nested virtualization enabled for gating in opendev > > - A few companies are willing to put up some hardware to support that - maintenance and support from the companies > > - I'd like to bring the "what does infra expect" topic forward to see what needs to happen and what we need to ensure to get the gating in place for these items > > > > I think the plan is to use something like airskiff or airship-in-a-bottle as a method of doing deploy testing for gating. > > Requires an Ubuntu 16.04 VM (minimum 4vCPU/20GB RAM/32GB disk) to run.
> > https://airship-treasuremap.readthedocs.io/en/latest/airskiff.html > > https://github.com/airshipit/airship-in-a-bottle > > We currently build images for Ubuntu Xenial, Ubuntu Bionic, CentOS 7, > CentOS 8, Fedora 29, Fedora 30, opensuse 15, Debian Stretch, Debian > Buster, and more. The image here shouldn't be an issue. > > For sizing our current test VMs are 8vcpu, 8GB RAM, 80GB of disk. Zuul > supports native multinode testing. What we've tried to do is push > towards running tests on distributed multinode setups rather than > singular large VMs. A major advantage of this is in many cases what we > are producing is distributed software that needs to operate in a > distributed manner and we are able to test that effectively. > > Nested virt is likely the biggest hurdle to sort out. Despite it finally > being enabled by default on very recent Linux kernels what we see in > production is much older and flakier. Typical nested virt experience > from our existing cloud providers is that it will work for some time > then a kernel update will get pushed out. The "middle" VMs will get this > update first and start crashing until our cloud providers update the > base hypervisor kernels as well. > > What we have done to try and improve nested virt reliability is add > nested-virt-specific labels to Nodepool. Teams running tests that > trigger these nested virt crashes can use these labels to actively work > with our cloud providers to debug and fix these issues.
> > Is it possible that AirShip could run these test jobs on 3 node > multinode jobs using our existing nested-virt flavors? > > > > I'd like to set up a call amongst stakeholders or on an AirShip dev call in the not too distant future to outline: > >             What do hardware hosting companies need to provide > > Our "contributing resources" document, > https://docs.openstack.org/infra/system-config/contribute-cloud.html, > should give a good overview of what is required. Short version is an > OpenStack cloud where we can run some longer lived cloud resources as > well as the test nodes themselves. > > >             Are there any specific needs on the infra team (policies or changes) that we should be aware of > > We require publicly addressable IP addresses for each node because we > run a globally distributed control plane. These addresses can be IPv6 > addresses, but we do need at least one IPv4 address for the in cloud > mirror node. > > Policy wise for the OpenStack project we've maintained that, for gating, > resources need to come from at least two different providers. From our > experience clouds come and go (due to planned and unplanned outages) and > being able to fall back on redundant resources is particularly important > for gating. Airship could choose to accept the risk of having a single > provider, but I would caution against this. > > The other policy thing we should consider is if these resources would be > available to the large pool of users or if they should be considered > airship specific. In the past we've tried to do project specific > resources and they tended to be less reliable.
This doesn't necessarily > make it a bad idea, but I think having "pressure" from the greater whole > helps ensure things run smoothly and problems are caught quickly. > > Assuming AirShip is able to test today as described above we could add > in these new resources to the pool to expand quotas and provide more > resources to Airship (and possibly the whole). > > >             What is required by the AirShip devs to ensure they can direct their jobs to the right machines etc.. > > Nodepool would provide node labels that identify the resources and Zuul > job configuration would consume those labels. Configuration is how we > express this. > > > > Hopefully we can have something in place soonish, as the AirShip team wants to be in Beta for their 2.0 release early next year. > > > > Any help you can provide would be appreciated. > > I think it would be helpful to get the discussion onto the infra mailing > list, openstack-infra at lists.openstack.org, as other team members may > have input as well. > > Happy to do a call as well (perhaps we can coordinate that on the > mailing list?). I'll be around the next two weeks pre holidays. Major > time conflicts are Mondays 1600-1700UTC and Tuesdays > 1600-1700UTC and 1900-2000UTC, and my working hours are generally > 1600-0100UTC, but I could do a one-off 1500UTC call if that helps. > > I have not been able to follow up as my weeks have become strangled > with end of year activities. Your help on determining what to do next > would be appreciated, and if you want to move forward with this > information feel free to do so, this doesn’t have to pivot on me. 😊 > > / Chris > > ...
>   > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From cboylan at sapwetik.org Mon Jan 6 21:10:28 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 06 Jan 2020 13:10:28 -0800 Subject: [OpenStack-Infra] Infra Meeting Agenda for January 7, 2020 Message-ID: <4a16abd5-6aaa-40b0-8acd-4e61e7eee26e@www.fastmail.com> We will meet at 19:00UTC in #openstack-meeting with this agenda: == Agenda for next meeting == * Announcements * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev *** Possible gitea/go-git bug in current version of gitea we are running https://storyboard.openstack.org/#!/story/2006849 *** Bug seems to remain present after Gitea 1.10 upgrade * General topics ** Trusty Upgrade Progress (clarkb 20200107) *** Wiki updates ** static.openstack.org (ianw,corvus,mnaser,fungi 20200107) *** Need reviews on https://review.opendev.org/#/q/status:open+topic:static.opendev.org ** Project renames (clarkb 20200107) * Open discussion Welcome back from the holidays! Clark From paye600 at gmail.com Tue Jan 7 09:45:40 2020 From: paye600 at gmail.com (Roman Gorshunov) Date: Tue, 7 Jan 2020 10:45:40 +0100 Subject: [OpenStack-Infra] Touching base; Airship CI cluster In-Reply-To: References: <8bdf26696d2a407fa7f58fc47bd9c6fa@att.com> Message-ID: Hello Clark, Thank you for your reply. Meeting time is OK for me. I have forwarded invitation to Pete Birley and Matt McEuen, they would hopefully join us. Best regards, -- Roman Gorshunov On Mon, Jan 6, 2020 at 6:40 PM Clark Boylan wrote: > > Does 16:00UTC Thursday January 9, 2020 work for you? 
If so we should be able to use our asterisk server, https://wiki.openstack.org/wiki/Infrastructure/Conferencing, room 6001. > From cboylan at sapwetik.org Mon Jan 13 23:23:35 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 13 Jan 2020 15:23:35 -0800 Subject: [OpenStack-Infra] Infra Meeting agenda for January 14, 2020 Message-ID: <4c982347-2df1-416e-8462-f509ffa2b46e@www.fastmail.com> We will meet tomorrow, January 14 at 19:00UTC in #openstack-meeting with this agenda: == Agenda for next meeting == * Announcements ** OpenStack Foundation Individual Board Member Election happening now. Ends January 17 17:00UTC * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev *** Gitea restarts for upgrading appear to be able to miss gerrit replication events. No errors logged on either side. *** Possible gitea/go-git bug in current version of gitea we are running https://storyboard.openstack.org/#!/story/2006849 * General topics ** Trusty Upgrade Progress (clarkb 20200107) *** Wiki updates ** static.openstack.org (ianw,corvus,mnaser,fungi 20200107) *** Need reviews on https://review.opendev.org/#/q/status:open+topic:static.opendev.org * Open discussion From ssbarnea at redhat.com Tue Jan 14 11:41:37 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Tue, 14 Jan 2020 11:41:37 +0000 Subject: [OpenStack-Infra] proposal: making gerrit/zuul message subject shorter Message-ID: I find the default Gerrit message subjects rather impractical on narrow screens (not only mobile). The information is stacked with too much repetitive boilerplate at the start. Most of the time only a very few letters from the interesting part (like the change name) are visible. Am I the only one observing this?
Does it make sense to look into changing the templates to make them more compact? A few observations:
- "Change in" is mostly boilerplate; I would be happy to use a brief prefix like `x: ` or an emoji, but that may be controversial
- Project name with organization suffix: how often do we have the same project under two different orgs?
- Is master really needed? I would rather not mention it for master, or even drop the branch entirely if it is not possible to hide it for master only
- Advanced filtering of messages can be done using message headers; nobody forces us to rely on the subject line alone
- Even the message content could use some improvements, mainly about making it more compact
What do you think? https://sbarnea.com/ss/Screen-Shot-2020-01-14-11-27-32.png (example screenshot) From sgw at linux.intel.com Tue Jan 14 21:49:07 2020 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 14 Jan 2020 13:49:07 -0800 Subject: [OpenStack-Infra] Fwd: CVE References in LPs are messed up after centos feature branch rebase In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0C16044F8@ALA-MBD.corp.ad.wrs.com> <10e92750-32f7-5065-2f91-d17d9009723b@linux.intel.com> <80983118-a157-48df-92a5-73f84cbda718@www.fastmail.com> <816d506c-1d2c-edd2-d0b5-98fc458477c3@linux.intel.com> Message-ID: <670109cf-449f-24ce-8382-648dc52539b2@linux.intel.com> Clark: Happy New Year! Circling back on this after the holidays: where do we stand? I am about to do another merge for the feature branch, since the work we are doing across repos is best done via a feature branch rather than Depends-On, as discussed below. We talked about only reporting an abbreviated commit message when doing a merge commit, and I asked if a storyboard or launchpad was needed; did I lose a reply? Remember not all of us are on the Infra mailing list. Thanks Sau!
On 12/18/19 10:19 AM, Saul Wold wrote: > > > On 12/17/19 10:36 AM, Clark Boylan wrote: >> On Tue, Dec 17, 2019, at 10:16 AM, Saul Wold wrote: >>> >>> >>> On 12/16/19 8:22 AM, Clark Boylan wrote: >>>> On Mon, Dec 16, 2019, at 7:46 AM, Saul Wold wrote: >>>>> >>>>> Hi Clark, >>>>> >>>>> Sorry, I only get the archive of Infra and Ghada is not on the >>>>> list; if >>>>> you can please reply to us and the list, that would be great. >>>>>> >>>>>> I think what happened here is you merged bug fixes (in this case >>>>>> CVE bug fixes) from master into a feature branch. Then when you >>>>>> pushed that merge commit and merged it, the bot noticed that those >>>>>> bug fixes had merged to the feature branch and commented with >>>>>> those details on the bug. I believe this is "correct" behavior >>>>>> from the bot. >>>>> >>>>> Is there a different way to do the merge activity? >>>>> >>>>>> >>>>>> Is the issue the existence of comments like >>>>>> https://bugs.launchpad.net/starlingx/+bug/1844579/comments/18 on >>>>>> the bugs? Or is there some other metadata that is being added that >>>>>> I am missing? >>>>>> >>>>> Yes, that comment does not belong with that bug, and because the >>>>> comment >>>>> includes CVE-2019-XXXXX formatting it adds the CVE References >>>>> metadata also. >>>> >>>> Can you expand on this? Why does the comment not belong with the >>>> bug? The bug was fixed on the f/centos8 branch and that is what the >>>> comment is telling you. Where is the CVE References metadata? >>>> >>> The "merge commit" message contains all the commits that are part of the >>> merge commit. I guess the hook sees the merge commit with the Closes: >>> tag and adds the complete commit message to the associated launchpad >>> bugs (in the case of multiple Closes: tags due to multiple commit messages in >>> the merge commit). >>> >>> Since that larger "merge commit" message contains CVE references they get >>> added to the Closes: tagged bugs.
Look again at >>> https://bugs.launchpad.net/starlingx/+bug/1844579 >>> Below the description is the CVE Reference with links to the CVEs >>> mentioned. This launchpad has nothing to do with the CVEs in question. >>> I guess this is done inside launchpad, not in the opendev bugtask. >>> >>> Does that make more sense? >> >> Yes, that helps. And yes, I believe launchpad is doing its own string >> scraping and deciding to list those CVEs. I don't believe we are >> triggering that explicitly. >> > Yes, I realized that after looking at the code you shared with me > earlier. This is part of why we might want to consider a simpler merge > message to avoid this scraping problem. > >>> >>>>> >>>>>> If we don't want comments like that to appear you'd need to modify >>>>>> your merged trees so that bug fixes don't go from master into the >>>>>> feature branch. Or we'd need to come up with some rule set we can >>>>>> apply to the bot to filter bugs out in certain circumstances. >>>>> >>>>> Modifying the merge trees would defeat the purpose of doing the >>>>> merge, I >>>>> think. Does this issue not affect other projects, or are we yet again >>>>> doing strange operations in StarlingX ;-)! Not sure how hard it would >>>>> be to filter for feature branches. >>>> >>>> Yes, you probably don't want to change the merge trees as the idea >>>> here is to bring the feature branch up to date, and probably the >>>> most important aspect of that is ensuring you've merged security fixes. >>>> >>>> Use of feature branches at all may qualify as "strange". Most >>>> projects tend to develop against their target branch. You'll see >>>> large change series from nova, for example, rather than creating >>>> feature branches for that work. This means most projects are never >>>> in a situation to potentially hit this problem. One major historical >>>> exception to this has been the swift project. It is possible they >>>> have run into this problem but ignored it? Or not seen it as >>>> problematic?
>>>> >>> I think we chose to use feature branches since there are multiple repos >>> in StarlingX and we need a way to coordinate work across them. >> >> Note, Zuul's depends-on functionality is designed to address the need >> for coordinating work between repos without needing to drastically >> change workflow. >> > We are aware of that, but it's more about the StarlingX workflow, and > enabling changes like moving to a new base OS needs to be done > outside of master. So we still need to use the feature branch to enable > and test new functionality outside of master. >>> >>> They might not have as many CVE references either, since StarlingX has many >>> references to Linux userspace, which can contain more CVEs. >>> >>>> I did double-check that the change merge hook code doesn't handle >>>> feature branches as a special case already (openstack uses the >>>> feature/ prefix, not f/, so I thought maybe there was a difference in >>>> matchers?) but I found nothing. >>>> https://opendev.org/opendev/jeepyb/src/branch/master/jeepyb/cmd/update_bug.py#L222-L252 >>>> is the code in question and what we'd end up updating if we wanted >>>> to apply some rule set to the bot around feature branches. >>>> >>> Yeah, I agree a check here might be the right place for this. >> >> Based on the above description of the problem, what we'd need to do >> here is remove CVE references from merge commit comments on Launchpad? >> The tricky bit is knowing when that is appropriate or not, and "this is >> a merge commit" might be the answer. One way to do that would be to >> report only the merge commit message to the bug. You'd potentially >> lose launchpad synchronization if you wanted updates in the child >> commits though. >> > Yes, as I mentioned above, having an abbreviated commit message posted > via the hooks is likely the best approach so we can at least track that the > merge happened for those bugs. > > Do you need some kind of storyboard or launchpad for this kind of change?
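The abbreviated-message idea discussed above could be prototyped as a small helper in the bot's update path. This is only a sketch of the suggestion, with an invented function name and signature (the real code lives in jeepyb/cmd/update_bug.py): for merge commits landing on feature branches, report just the merge commit's own subject line, so CVE identifiers in the merged child commits never reach Launchpad's string scraper.

```python
def launchpad_comment(commit_message, is_merge_commit, branch):
    """Build the comment the bot would post to a Launchpad bug.

    Hypothetical sketch (invented names, not the actual jeepyb code):
    for merge commits on feature branches (f/ or feature/ prefixes),
    keep only the merge commit's subject line so CVE identifiers from
    the merged child commits are not scraped into unrelated bugs.
    """
    if is_merge_commit and branch.startswith(("f/", "feature/")):
        subject = commit_message.splitlines()[0]
        return "Merged to %s: %s" % (branch, subject)
    # Non-merge commits keep today's behavior: the full message.
    return commit_message


# A master -> f/centos8 sync whose child commits mention CVEs:
merge_msg = (
    "Merge remote-tracking branch 'origin/master' into f/centos8\n"
    "\n"
    "Fixes CVE-2019-0001 and CVE-2019-0002\n"
    "Closes-Bug: #1844579\n"
)
print(launchpad_comment(merge_msg, True, "f/centos8"))
# -> Merged to f/centos8: Merge remote-tracking branch 'origin/master' into f/centos8
```

The trade-off Clark notes still applies: the bug loses the per-child-commit detail, keeping only a record that the merge happened on that branch.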
> > Thanks >    Sau! > >>> >>>> Something else to keep in mind, there has been some discussion of >>>> replacing these existing bots with Zuul jobs similar to how github >>>> replication is done. That could possibly give different repos far >>>> more flexibility through Zuul configuration specific to that repo. >>>> This may be another approach worth taking if we find we end up doing >>>> something StarlingX specific. >>>> >>> Something to consider down the road. >>> >>> Sau! >>> >>>>> >>>>> Thanks >>>>> Sau! >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On 12/13/19 8:48 AM, Saul Wold wrote: >>>>>> >>>>>> Hello Infra team: >>>>>> >>>>>> Apparently something got messed up with Launchpad and updating a >>>>>> number >>>>>> of starlingx repos with a feature branch. >>>>>> >>>>>> I was following the methodology of updating a feature branch with >>>>>> changes from master via merges and I guess when I pushed that to >>>>>> gerrit >>>>>> and it merged, it caused some Launchpad ugliness. See email below. >>>>>> >>>>>> Thoughts? >>>>>> >>>>>> Thanks >>>>>> Sau! >>>>>> >>>>>> >>>>>> >>>>>> -------- Forwarded Message -------- >>>>>> Subject:     CVE References in LPs are messed up after centos feature >>>>>> branch rebase >>>>>> Date:     Fri, 13 Dec 2019 00:30:26 +0000 >>>>>> From:     Khalil, Ghada >>>>>> To:     Saul Wold >>>>>> >>>>>> >>>>>> >>>>>> Hi Saul, >>>>>> >>>>>> The CVE References in about 15 LPs are now messed up after the >>>>>> rebase of >>>>>> the f-centos8 feature branch. The rebase updated a large # of >>>>>> launchpads >>>>>> and somehow automatically added CVE references (from a subset of >>>>>> bugs) >>>>>> to all of them. Any idea what is going on here? >>>>>> >>>>>> Here are some examples: >>>>>> >>>>>> https://bugs.launchpad.net/starlingx/+bug/1844579 >>>>>> >>>>>> Originally had no CVE References. Now it has 3 references. >>>>>> >>>>>> https://bugs.launchpad.net/starlingx/+bug/1849200 >>>>>> >>>>>> Originally only had CVE-2018-15686 as a CVE Reference. 
Now it has all >>>>>> the recently fixed CVEs linked to this bug. >>>>>> >>>>>> Snapshot from the full activity log: >>>>>> >>>>>> Here is the query that shows that all the bugs that were picked up in >>>>>> the rebase now have CVE links: >>>>>> >>>>>> https://bugs.launchpad.net/starlingx/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=OPINION&field.status%3Alist=INVALID&field.status%3Alist=WONTFIX&field.status%3Alist=EXPIRED&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=FIXRELEASED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=in-f-centos8&field.tags_combinator=ANY&field.has_cve.used=&field.has_cve=on&field.omit_dupes.used=&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search >>>>>> >>>>>> >>>>>> >>>>>> *Ghada Khalil*, Manager, Titanium Cloud, *Wind River* >>>>>> direct 613.270.2273  skype ghada.khalil.ottawa >>>>>> >>>>>> 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 >>>>>> >>> From cboylan at sapwetik.org Tue Jan 14 22:08:43 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 14 Jan 2020 14:08:43 -0800 Subject: [OpenStack-Infra] =?utf-8?q?Fwd=3A_CVE_References_in_LPs_are_mess?= =?utf-8?q?ed_up_after_centos_feature_branch_rebase?= In-Reply-To: <670109cf-449f-24ce-8382-648dc52539b2@linux.intel.com> References: <151EE31B9FCCA54397A757BC674650F0C16044F8@ALA-MBD.corp.ad.wrs.com> <10e92750-32f7-5065-2f91-d17d9009723b@linux.intel.com> <80983118-a157-48df-92a5-73f84cbda718@www.fastmail.com> 
<816d506c-1d2c-edd2-d0b5-98fc458477c3@linux.intel.com> <670109cf-449f-24ce-8382-648dc52539b2@linux.intel.com> Message-ID: <211acdbb-b272-4100-9a58-b31582ce166f@www.fastmail.com> On Tue, Jan 14, 2020, at 1:49 PM, Saul Wold wrote: > Clark: > > Happy New Year! > > Circling back on this after the holidays, where do we stand, I am about > to do another merge for the feature branch, since for the work we are > doing across repos is best via feature branch vs Depends-On as talked > about below. > > We talked about only reporting an abbreviated commit message when doing > a merge commit, and I asked if a storyboard or launchpad was needed, did > I lose a reply? I don't think so, it has been quiet. You can file a bug if you like, but I expect that this is one of those cases where the quickest path to resolution will be for someone on the StarlingX side to push up a change with the edits that were suggested. > > Remember not all of us are on the Infra mailing list. > > Thanks > Sau! From cboylan at sapwetik.org Tue Jan 14 22:45:02 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 14 Jan 2020 14:45:02 -0800 Subject: [OpenStack-Infra] =?utf-8?q?proposal=3A_making_gerrit/zuul_messag?= =?utf-8?q?e_subject=09shorter?= In-Reply-To: References: Message-ID: <08ce8451-fcbd-4469-8976-ae0f7d7e4d54@www.fastmail.com> On Tue, Jan 14, 2020, at 3:41 AM, Sorin Sbarnea wrote: > I kinda find the default gerrit message subjects quite not practical > for narrow screens (not only mobile). > > The information is stacked with too much repetitive boilerplate at the > start. Most of the time only very few letters from the interesting part > (like change name) are visible. > > Am I the only one observing this? > > Does it make sense to look into changing the templates to make them > more compact? https://review.opendev.org/Documentation/config-mail.html Docs for changing the templates. 
> > Few observations: > - "Change in" is mostly boilerplate, I would be happy to use a brief > prefix like `x: ` or an emoji but that may be controversial We should avoid emoji as people may consume email in their terminal and readability shouldn't be dependent on special font glyphs. > - Project name with organization suffix, how often do we have the same > project under two different orgs? With zuul configs we have a bit more chance of name collisions as some patterns are emerging. "project-config" and "base-jobs" in particular I expect will collide over time. > - Is master really needed? I would rather avoid mentioning it for > master or even avoiding it completely if not possible to hide it for > master only > - Advanced filtering of messages can be done using message headers, > nobody forces us to rely on subject line alone I don't think Gerrit's email templating system allows us to edit headers. Looking at an email from Gerrit today the only headers it seems to set are X-Gerrit-Change-Id, X-Gerrit-ChangeURL, and X-Gerrit-Commit. I think keeping the branch info in the subject is important as a result. > - Even the message content could use some improvements, mainly about > making it more compact > > What do you think? I think we should get switched over to the container based Gerrit deployment before we try to make changes like this. This will help keep the container transition simpler as we won't have to double account changes to templates in multiple places. Personally I don't use Gerrit email too much so I'll have to defer to those that do. Whatever changes we make should be communicated as people will likely need to update their existing filter rules. And we shouldn't remove information as that may prevent updating those rules to something that works. 
> > https://sbarnea.com/ss/Screen-Shot-2020-01-14-11-27-32.png (example screenshot) From sgw at linux.intel.com Tue Jan 14 22:46:18 2020 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 14 Jan 2020 14:46:18 -0800 Subject: [OpenStack-Infra] Fwd: CVE References in LPs are messed up after centos feature branch rebase In-Reply-To: <211acdbb-b272-4100-9a58-b31582ce166f@www.fastmail.com> References: <151EE31B9FCCA54397A757BC674650F0C16044F8@ALA-MBD.corp.ad.wrs.com> <10e92750-32f7-5065-2f91-d17d9009723b@linux.intel.com> <80983118-a157-48df-92a5-73f84cbda718@www.fastmail.com> <816d506c-1d2c-edd2-d0b5-98fc458477c3@linux.intel.com> <670109cf-449f-24ce-8382-648dc52539b2@linux.intel.com> <211acdbb-b272-4100-9a58-b31582ce166f@www.fastmail.com> Message-ID: <7827e709-5a62-5fc1-8b97-ebd34a9dcf9a@linux.intel.com> On 1/14/20 2:08 PM, Clark Boylan wrote: > On Tue, Jan 14, 2020, at 1:49 PM, Saul Wold wrote: >> Clark: >> >> Happy New Year! >> >> Circling back on this after the holidays, where do we stand, I am about >> to do another merge for the feature branch, since for the work we are >> doing across repos is best via feature branch vs Depends-On as talked >> about below. >> >> We talked about only reporting an abbreviated commit message when doing >> a merge commit, and I asked if a storyboard or launchpad was needed, did >> I lose a reply? > > I don't think so, it has been quiet. You can file a bug if you like, but I expect that this is one of those cases where the quickest path to resolution will be for someone on the StarlingX side to push up a change with the edits that were suggested. OK, I guess that falls to me! I need to set up appropriate testing, unless there is some existing test area I can play with. Sau! > >> >> Remember not all of us are on the Infra mailing list. >> >> Thanks >> Sau! 
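The abbreviated-commit-message approach discussed in the thread above can be sketched roughly as follows. These helper names are hypothetical, not the actual jeepyb `update_bug.py` code (which is where such a check would live), but the merge detection is the standard git test: a raw commit object header carries one "parent" line per parent, so a merge commit has two or more.

```python
def is_merge_commit(raw_commit_header: str) -> bool:
    """True if the raw commit object header lists two or more parents.

    The header is what `git cat-file -p <sha>` prints before the
    commit message: tree/parent/author/committer lines.
    """
    parents = [line for line in raw_commit_header.splitlines()
               if line.startswith("parent ")]
    return len(parents) >= 2


def abbreviated_message(commit_message: str) -> str:
    """Keep only the subject line of a commit message, so a
    feature-branch merge does not replay every child commit's
    CVE/bug footers into the Launchpad comment."""
    lines = commit_message.splitlines()
    return lines[0] if lines else ""
```

The hook would apply `abbreviated_message` only when `is_merge_commit` is true, leaving ordinary commits (and their existing Launchpad synchronization) untouched.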
From cboylan at sapwetik.org Tue Jan 14 22:55:16 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 14 Jan 2020 14:55:16 -0800 Subject: [OpenStack-Infra] =?utf-8?q?Fwd=3A_CVE_References_in_LPs_are_mess?= =?utf-8?q?ed_up_after_centos_feature_branch_rebase?= In-Reply-To: <7827e709-5a62-5fc1-8b97-ebd34a9dcf9a@linux.intel.com> References: <151EE31B9FCCA54397A757BC674650F0C16044F8@ALA-MBD.corp.ad.wrs.com> <10e92750-32f7-5065-2f91-d17d9009723b@linux.intel.com> <80983118-a157-48df-92a5-73f84cbda718@www.fastmail.com> <816d506c-1d2c-edd2-d0b5-98fc458477c3@linux.intel.com> <670109cf-449f-24ce-8382-648dc52539b2@linux.intel.com> <211acdbb-b272-4100-9a58-b31582ce166f@www.fastmail.com> <7827e709-5a62-5fc1-8b97-ebd34a9dcf9a@linux.intel.com> Message-ID: On Tue, Jan 14, 2020, at 2:46 PM, Saul Wold wrote: > > > On 1/14/20 2:08 PM, Clark Boylan wrote: > > On Tue, Jan 14, 2020, at 1:49 PM, Saul Wold wrote: > >> Clark: > >> > >> Happy New Year! > >> > >> Circling back on this after the holidays, where do we stand, I am about > >> to do another merge for the feature branch, since for the work we are > >> doing across repos is best via feature branch vs Depends-On as talked > >> about below. > >> > >> We talked about only reporting an abbreviated commit message when doing > >> a merge commit, and I asked if a storyboard or launchpad was needed, did > >> I lose a reply? > > > > I don't think so, it has been quiet. You can file a bug if you like, but I expect that this is one of those cases where the quickest path to resolution will be for someone on the StarlingX side to push up a change with the edits that were suggested. > > OK, I guess that falls to me! I need to set up appropriate testing, > unless there is some existing test area I can play with. We do have a review-dev.openstack.org we can deploy it to. I believe that server is setup to comment on launchpad for at least one of its projects? 
Monty would probably know since he has been converting our Gerrit config management recently. If you'd prefer to do it locally all Gerrit does on those events is call the hook scripts with a specific set of parameters. Those parameters are documented here: https://gerrit.googlesource.com/plugins/hooks/+/refs/heads/stable-2.13/src/main/resources/Documentation/hooks.md#change_merged. Then you can execute the script locally as if you were Gerrit. This is probably reasonably straightforward though you will want to set up a throw away LP bug to edit. > > Sau! From cboylan at sapwetik.org Mon Jan 20 23:51:29 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 20 Jan 2020 15:51:29 -0800 Subject: [OpenStack-Infra] Meeting Agenda for January 21, 2020 Message-ID: <36d8e698-ba39-44a6-8288-026dd129a6bc@www.fastmail.com> We will meet in #openstack-meeting on January 21, 2020 at 19:00UTC with this agenda. == Agenda for next meeting == * Announcements * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev *** Progress on Governance changes **** https://review.opendev.org/#/c/703134/ Split OpenDev out of OpenStack governance. 
**** https://review.opendev.org/#/c/703488 Updates to our project documentation with governance info *** Possible gitea/go-git bug in current version of gitea we are running https://storyboard.openstack.org/#!/story/2006849 * General topics ** Trusty Upgrade Progress (clarkb 20200107) *** Wiki updates ** static.openstack.org (ianw,corvus,mnaser,fungi 20200107) *** Need reviews on https://review.opendev.org/#/q/status:open+topic:static.opendev.org ** New Arm64 cloud * Open discussion From cboylan at sapwetik.org Wed Jan 22 22:12:11 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 22 Jan 2020 14:12:11 -0800 Subject: [OpenStack-Infra] Touching base; Airship CI cluster In-Reply-To: References: <8bdf26696d2a407fa7f58fc47bd9c6fa@att.com> Message-ID: On Tue, Jan 7, 2020, at 1:45 AM, Roman Gorshunov wrote: > Hello Clark, > > Thank you for your reply. Meeting time is OK for me. I have forwarded > invitation to Pete Birley and Matt McEuen, they would hopefully join > us. I wanted to make sure we got a summary of this meeting sent out. Notes were kept at https://etherpad.openstack.org/p/Wqxwce1UDq Airship needs to test their All in One deployment tool. This tool deploys their entire bootstrapping system into a single VM which is then used to deploy other software which may be clustered. Because the production usage of this tool is via a VM it is actually important to be able to test the contents of that VM in CI and that is what creates the memory requirements for Airship CI. We explained the benefits of being able to run Airship CI on less special hardware. Airship gains redundancy as more than one provider can supply these resources, reliability should be improved as nested virt has been known to be flaky, and better familiarity within the community with global resources means that debugging and working together is easier. However, we recognize that Airship has specific constraints today that require more specialized testing. 
The proposed plan is to move forward with adding a new cloud, but have it provide specialized and generic resources. The intent is to address Airship's needs of today with the expectation that they will work towards running on the generic resources. Having generic resources ensures that the infra team has exposure to this new cloud outside the context of Airship. This improves familiarity and debuggability of the system. It is also more equitable as other donations are globally used. Also, Nodepool doesn't actually allow us to prevent consumption of resources exposed to the system; however, we would ask that specialized resources only be used when necessary to test specific cases as with Airship. This is similar to our existing high memory, multi numa node, nested virt enabled test flavors. For next steps we'll work to add the new cloud with the two sets of flavors, and Airship will begin investigating what a modified test setup looks like to run on our generic resources. We'll see where that takes us. Let me know if this summary needs editing or updating. Finally, we'll be meeting again Wednesday January 29, 2020 at 1600UTC to follow up on any questions now that things should be moving. I recently used jitsi meet and it worked really well so I want to give that a try for this. Let's meet at https://meet.jit.si/AirshipCICloudFun. Fungi says you can click the circled "i" icon at that url to get dial-in info if necessary. If for some reason jitsi doesn't work we'll fall back to the method used last time: https://wiki.openstack.org/wiki/Infrastructure/Conferencing room 6001. 
See you there, Clark From pabelanger at redhat.com Wed Jan 22 23:46:10 2020 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 22 Jan 2020 18:46:10 -0500 Subject: [OpenStack-Infra] Touching base; Airship CI cluster In-Reply-To: References: <8bdf26696d2a407fa7f58fc47bd9c6fa@att.com> Message-ID: <20200122234610.GB291742@localhost.localdomain> On Wed, Jan 22, 2020 at 02:12:11PM -0800, Clark Boylan wrote: > On Tue, Jan 7, 2020, at 1:45 AM, Roman Gorshunov wrote: > > Hello Clark, > > > > Thank you for your reply. Meeting time is OK for me. I have forwarded > > invitation to Pete Birley and Matt McEuen, they would hopefully join > > us. > > I wanted to make sure we got a summary of this meeting sent out. Notes were kept at https://etherpad.openstack.org/p/Wqxwce1UDq > > Airship needs to test their All in One deployment tool. This tool deploys their entire bootstrapping system into a single VM which is then used to deploy other software which may be clustered. Because the production usage of this tool is via a VM it is actually important to be able to test the contents of that VM in CI and that is what creates the memory requirements for Airship CI. > > We explained the benefits of being able to run Airship CI on less special hardware. Airship gains redundancy as more than one provider can supply these resources, reliability should be improved as nested virt has been known to be flaky, and better familiarity within the community with global resources means that debugging and working together is easier. > > However, we recognize that Airship has specific constraints today that require more specialized testing. The proposed plan is to move forward with adding a new cloud, but have it provide specialized and generic resources. The intent is to address Airship's needs of today with the expectation that they will work towards running on the generic resources. 
Having generic resources ensures that the infra team has exposure to this new cloud outside the context of Airship. This improves familiarity and debuggability of the system. It is also more equitable as other donations are globally used. Also, Nodepool doesn't actually allow us to prevent consumption of resources exposed to the system; however, we would ask that specialized resources only be used when necessary to test specific cases as with Airship. This is similar to our existing high memory, multi numa node, nested virt enabled test flavors. > > For next steps we'll work to add the new cloud with the two sets of flavors, and Airship will begin investigating what a modified test setup looks like to run on our generic resources. We'll see where that takes us. > > Let me know if this summary needs editing or updating. > > Finally, we'll be meeting again Wednesday January 29, 2020 at 1600UTC to follow up on any questions now that things should be moving. I recently used jitsi meet and it worked really well so I want to give that a try for this. Let's meet at https://meet.jit.si/AirshipCICloudFun. Fungi says you can click the circled "i" icon at that url to get dial-in info if necessary. > > If for some reason jitsi doesn't work we'll fall back to the method used last time: https://wiki.openstack.org/wiki/Infrastructure/Conferencing room 6001. > Regarding the dedicated cloud, it might be an interesting discussion point to talk with some of the TripleO folks from when the tripleo-test-cloud-rh1 cloud was still a thing. As most infra people know, this was a cloud dedicated to running tripleo-specific jobs. There was an effort to make their jobs more generic, to run on any cloud infrastructure, which resulted, IMO, in a large increase in testing (as there was much more capacity). While it took a bit of effort, I believe overall it was a big improvement for CI. 
Paul From ssbarnea at redhat.com Fri Jan 24 09:32:00 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Fri, 24 Jan 2020 09:32:00 +0000 Subject: [OpenStack-Infra] proposal: custom favicon for review.o.o Message-ID: <356178F8-BFA8-4232-9E32-369F64BDD4E1@redhat.com> We are currently using the default Gerrit favicon on https://review.opendev.org and I would like to propose changing it in order to ease differentiation between it and other gerrit servers we may work with. How hard would it be to override it? (where) If others find it useful I can propose a small alteration that makes it a bit different from the vanilla one, while not making it too different, considering reusing the OpenDev "pink". Thanks Sorin From Tommy.Chaoping.Li at ibm.com Thu Jan 23 23:10:06 2020 From: Tommy.Chaoping.Li at ibm.com (Tommy Chaoping Li) Date: Thu, 23 Jan 2020 23:10:06 +0000 Subject: [OpenStack-Infra] Question regarding access to OpenDev Message-ID: An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Jan 25 15:46:50 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 25 Jan 2020 15:46:50 +0000 Subject: [OpenStack-Infra] Question regarding access to OpenDev In-Reply-To: References: Message-ID: <20200125154650.zv6mbzo7hjzk5um4@yuggoth.org> On 2020-01-23 23:10:06 +0000 (+0000), Tommy Chaoping Li wrote: > We are from IBM and we contributed to several open source projects > such as KubeFlow. We found out that the OpenStack team had > developed a great dashboard for capturing the GitHub metrics for > various Open Source projects. We found it very helpful and we want > to propose a few changes on the list of projects on stackalytics; > however, we weren't able to fork or propose new changes to > the stackalytics project at https://opendev.org/x/stackalytics. Do > we have to apply for certain permission in order to create an > account on OpenDev? Thank you very much in advance. [Apologies for the direct Cc... 
I approved this message out of the moderation queue because you weren't a subscriber to the mailing list, and therefore suspect you may not see any replies which are only on-list; if you choose not to subscribe, please at least continue to include the mailing list's address in your replies so others have a chance to see them as well.] OpenDev's code contribution workflow can be found in the OpenStack Infrastructure Developer's Guide here (this probably merits adding to our FAQ on the main page for the opendev.org site): As evidenced by the name of the mailing list on which we're communicating and the domain name where that document is hosted, OpenDev is still in the midst of an identity shift. Some of the information you'll find in that guide is likely to be OpenStack-specific and less applicable to projects which aren't in the "openstack" Git namespace, but the lines are a bit blurry at times (for example, it looks like the x/stackalytics project is set to require contributors to agree to the OSF ICLA before they'll be able to push patches). Regardless, the basic workflow documented there using the `git-review` tool to push proposed patches to Gerrit is correct for all Git repositories hosted in OpenDev. The x/stackalytics maintainers would probably benefit from incorporating a standard CONTRIBUTING.rst document in the root of their Git tree. There's a template for a very basic one OpenStack uses here if you wanted to propose one for them to review along with the other changes you're considering, though it warrants further editing to make it less OpenStack-specific: As always, let us know if you have questions or encounter any roadblocks and we're happy to try to help either here on this mailing list or in the #openstack-infra channel on the Freenode IRC network. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Tue Jan 28 01:12:49 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 27 Jan 2020 17:12:49 -0800 Subject: [OpenStack-Infra] Meeting Agenda for January 28, 2020 Message-ID: We will meet in #openstack-meeting at 19:00 UTC January 28, 2020 with this agenda. == Agenda for next meeting == * Announcements * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev *** Progress on Governance changes **** https://review.opendev.org/#/c/703134/ Split OpenDev out of OpenStack governance. **** https://review.opendev.org/#/c/703488 Updates to our project documentation with governance info *** Possible gitea/go-git bug in current version of gitea we are running https://storyboard.openstack.org/#!/story/2006849 * General topics ** Trusty Upgrade Progress (clarkb 20200128) *** status.o.o in progress. Need gerritlib updates and release. *** Wiki updates ** static.openstack.org (ianw,corvus,mnaser,fungi 20200128) ** New Arm64 cloud (clarkb 20200128) *** nb03 unable to talk to https://us.linaro.cloud:5000 ** Airship CI cloud meeting Wednesday at 1600UTC (clarkb 20200128) * Open discussion From iwienand at redhat.com Wed Jan 29 04:56:24 2020 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 29 Jan 2020 15:56:24 +1100 Subject: [OpenStack-Infra] tarballs.openstack.org to AFS publishing gameplan Message-ID: <20200129045624.GA520891@fedora19.localdomain> Hello, We're at the point of implementing the tarballs.openstack.org publishing changes from [1], and I would just like to propose some low-level plans for feedback that exceeded the detail in the spec. 
We currently have tarballs.opendev.org which publishes content from /afs/openstack.org/project/opendev.org/tarballs. This is hosted on the physical server files02.openstack.org and managed by puppet [2]. 1) I propose we move tarballs.opendev.org to be served by static01.opendev.org and configured via ansible Because: * it's one less thing running on a Xenial host with puppet we don't want to maintain. * it will be alongside tarballs.openstack.org per below The /afs/openstack.org/project/opendev.org directory is currently a single AFS volume "project.opendev" and contains subdirectories: docs tarballs www opendev.org jobs currently write their tarball content into the AFS location, which is periodically "vos released" by [3]. 2) I propose we make a separate volume, with separate quota, and mount it at /afs/openstack.org/project/tarballs.opendev.org. We copy the current data to that location and modify the opendev.org tarball publishing jobs to use that location, and set up the same periodic release. Because: * Although currently the volume is tiny (<100MB), it will become quite large when combined with ~140GB of openstack repos * this seems distinctly separate from docs and www data * we have content for other hosts at /afs/openstack.org/project like this, it fits logically. The next steps are described in the spec; with this in place, we copy the current openstack tarballs from static.openstack.org:/srv/static/tarballs to /afs/openstack.org/project/tarballs.opendev.org/openstack/ We then update the openstack tarball publishing jobs to publish to this new location via AFS (we should be able to make this happen in parallel, initially). Finally, we need to serve these files. 3) I propose we make tarballs.openstack.org a vhost on static.opendev.org that serves the /afs/openstack.org/project/tarballs.opendev.org/openstack/ directory. Because * This is transparent for tarballs.openstack.org; all URLs work with no redirection, etc. 
* anyone hitting tarballs.opendev.org will see top-level project directories (openstack, zuul, airship, etc.) which makes sense. I think this will get us where we want to be. Any feedback welcome, thanks. We will keep track of things in [4]. [1] https://docs.opendev.org/opendev/infra-specs/latest/specs/retire-static.html [2] https://opendev.org/opendev/system-config/src/branch/master/manifests/site.pp#L441 [3] https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/files/openafs/release-volumes.py [4] https://storyboard.openstack.org/#!/story/2006598 From iwienand at redhat.com Wed Jan 29 05:15:39 2020 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 29 Jan 2020 16:15:39 +1100 Subject: [OpenStack-Infra] proposal: custom favicon for review.o.o In-Reply-To: <356178F8-BFA8-4232-9E32-369F64BDD4E1@redhat.com> References: <356178F8-BFA8-4232-9E32-369F64BDD4E1@redhat.com> Message-ID: <20200129051539.GB520891@fedora19.localdomain> On Fri, Jan 24, 2020 at 09:32:00AM +0000, Sorin Sbarnea wrote: > We are currently using default Gerrit favicon on > https://review.opendev.org and I would like to propose changing it > in order to ease differentiation between it and other gerrit servers > we may work with. I did notice google started putting this next to search results recently too, but then maybe reverted the change. > How hard it would be to override it? (where) I'm 99% sure it's built-in from [2] and there's no way to runtime override it. It looks like for robots.txt we tell the apache that fronts gerrit to look elsewhere [3]; I imagine the same would need to be done for favicon.ico. ... also be aware that upcoming containerisation of gerrit probably invalidates all that. 
-i [1] https://www.theverge.com/2020/1/24/21080424/google-search-result-ads-desktop-favicon-redesign-backtrack-controversial-experiment [2] https://opendev.org/opendev/gerrit/src/branch/openstack/2.13.12/gerrit-war/src/main/webapp [3] https://opendev.org/opendev/puppet-gerrit/src/branch/master/templates/gerrit.vhost.erb#L71 From fungi at yuggoth.org Wed Jan 29 05:18:55 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 29 Jan 2020 05:18:55 +0000 Subject: [OpenStack-Infra] tarballs.openstack.org to AFS publishing gameplan In-Reply-To: <20200129045624.GA520891@fedora19.localdomain> References: <20200129045624.GA520891@fedora19.localdomain> Message-ID: <20200129051855.76gvtqto3nijp5qh@yuggoth.org> On 2020-01-29 15:56:24 +1100 (+1100), Ian Wienand wrote: [...] > 3) I propose we make tarballs.openstack.org a vhost on > static.opendev.org that serves the > /afs/openstack.org/project/tarballs.opendev.org/openstack/ > directory. > > Because > > * This is transparent for tarballs.openstack.org; all URLs work with > no redirection, etc. > * anyone hitting tarballs.opendev.org will see top-level project > directories (openstack, zuul, airship, etc.) which makes sense. [...] While it could be a later step, if the OpenStack project leadership agrees I think I'd rather see the tarballs.openstack.org just host a permanent redirect from /(.*) to tarballs.opendev.org/$1 so that we might eventually be able to consider dropping the tarballs.openstack.org hostname entirely. Also continuing to host that tree as a separate white-labeled site may encourage other projects to request similar vanity tarball sites. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Jan 29 05:21:49 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 29 Jan 2020 05:21:49 +0000 Subject: [OpenStack-Infra] tarballs.openstack.org to AFS publishing gameplan In-Reply-To: <20200129051855.76gvtqto3nijp5qh@yuggoth.org> References: <20200129045624.GA520891@fedora19.localdomain> <20200129051855.76gvtqto3nijp5qh@yuggoth.org> Message-ID: <20200129052149.item27qruosqzicf@yuggoth.org> On 2020-01-29 05:18:55 +0000 (+0000), Jeremy Stanley wrote: > On 2020-01-29 15:56:24 +1100 (+1100), Ian Wienand wrote: > [...] > > 3) I propose we make tarballs.openstack.org a vhost on > > static.opendev.org that serves the > > /afs/openstack.org/project/tarballs.opendev.org/openstack/ > > directory. > > > > Because > > > > * This is transparent for tarballs.openstack.org; all URLs work with > > no redirection, etc. > > * anyone hitting tarballs.opendev.org will see top-level project > > directories (openstack, zuul, airship, etc.) which makes sense. > [...] > > While it could be a later step, if the OpenStack project leadership > agrees I think I'd rather see the tarballs.openstack.org just host a > permanent redirect from /(.*) to tarballs.opendev.org/$1 so that we > might eventually be able to consider dropping the > tarballs.openstack.org hostname entirely. Also continuing to host > that tree as a separate white-labeled site may encourage other > projects to request similar vanity tarball sites. Of course I meant from /(.*) to tarballs.opendev.org/openstack/$1 so that clients actually get directed to the correct files. ;) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ssbarnea at redhat.com Wed Jan 29 06:35:28 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Wed, 29 Jan 2020 06:35:28 +0000 Subject: [OpenStack-Infra] proposal: custom favicon for review.o.o In-Reply-To: <20200129051539.GB520891@fedora19.localdomain> References: <356178F8-BFA8-4232-9E32-369F64BDD4E1@redhat.com> <20200129051539.GB520891@fedora19.localdomain> Message-ID: I guess that means that you are not against the idea. Clearly I will wait for the container to be ready in that case. In fact it would be easier to override the file during container building. On Wed, 29 Jan 2020 at 05:15, Ian Wienand wrote: > On Fri, Jan 24, 2020 at 09:32:00AM +0000, Sorin Sbarnea wrote: > > We are currently using default Gerrit favicon on > > https://review.opendev.org and I would like to propose changing it > > in order to ease differentiation between it and other gerrit servers > > we may work with. > > I did notice google started putting this next to search results > recently too, but then maybe reverted the change. > > > How hard it would be to override it? (where) > > I'm 99% sure it's built-in from [2] and there's no way to runtime > override it. It looks like for robots.txt we tell the apache that > fronts gerrit to look elsewhere [3]; I imagine the same would need to > be done for favicon.ico. > > ... also be aware that upcoming containerisation of gerrit probably > invalidates all that. > > -i > > [1] > https://www.theverge.com/2020/1/24/21080424/google-search-result-ads-desktop-favicon-redesign-backtrack-controversial-experiment > [2] > https://opendev.org/opendev/gerrit/src/branch/openstack/2.13.12/gerrit-war/src/main/webapp > [3] > https://opendev.org/opendev/puppet-gerrit/src/branch/master/templates/gerrit.vhost.erb#L71 > > -- -- /sorin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iwienand at redhat.com Wed Jan 29 08:25:14 2020 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 29 Jan 2020 19:25:14 +1100 Subject: [OpenStack-Infra] tarballs.openstack.org to AFS publishing gameplan In-Reply-To: <20200129052149.item27qruosqzicf@yuggoth.org> References: <20200129045624.GA520891@fedora19.localdomain> <20200129051855.76gvtqto3nijp5qh@yuggoth.org> <20200129052149.item27qruosqzicf@yuggoth.org> Message-ID: <20200129082514.GC520891@fedora19.localdomain> On Wed, Jan 29, 2020 at 05:21:49AM +0000, Jeremy Stanley wrote: > Of course I meant from /(.*) to tarballs.opendev.org/openstack/$1 so > that clients actually get directed to the correct files. ;) Ahh yes, sorry you mentioned that in IRC and I should have incorporated that. I'm happy with that; we can also have that in-place and test it by overriding our hosts files before any cut-over. -i From iwienand at redhat.com Wed Jan 29 10:38:03 2020 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 29 Jan 2020 21:38:03 +1100 Subject: [OpenStack-Infra] proposal: custom favicon for review.o.o In-Reply-To: References: <356178F8-BFA8-4232-9E32-369F64BDD4E1@redhat.com> <20200129051539.GB520891@fedora19.localdomain> Message-ID: <20200129103803.GD520891@fedora19.localdomain> On Wed, Jan 29, 2020 at 06:35:28AM +0000, Sorin Sbarnea wrote: > I guess that means that you are not against the idea. I know it's probably not what you want to hear, but as it seems favicons are becoming a component of branding like a logo I think you'd do well to run your proposed work by someone with the expertise to evaluate it with-respect-to whatever branding standards we have (I imagine someone on the TC would have such contacts from the Foundation or whoever does marketing). If you just make something up and send it, you're probably going to get review questions like "how can we know this meets the branding standards to be the logo on our most popular website" or "is this the right size, format etc. 
for browsers in 2020" which are things upstream marketing and web people could sign off on. So, personally, I'd suggest a bit of pre-coordination there would mean any resulting technical changes would be very non-controversial. -i From corvus at inaugust.com Wed Jan 29 14:31:17 2020 From: corvus at inaugust.com (James E. Blair) Date: Wed, 29 Jan 2020 06:31:17 -0800 Subject: [OpenStack-Infra] proposal: custom favicon for review.o.o In-Reply-To: <20200129103803.GD520891@fedora19.localdomain> (Ian Wienand's message of "Wed, 29 Jan 2020 21:38:03 +1100") References: <356178F8-BFA8-4232-9E32-369F64BDD4E1@redhat.com> <20200129051539.GB520891@fedora19.localdomain> <20200129103803.GD520891@fedora19.localdomain> Message-ID: <87wo9axrka.fsf@meyer.lemoncheese.net> Ian Wienand writes: > On Wed, Jan 29, 2020 at 06:35:28AM +0000, Sorin Sbarnea wrote: >> I guess that means that you are not against the idea. > > I know it's probably not what you want to hear, but as it seems > favicons are becoming a component of branding like a logo I think > you'd do well to run your proposed work by someone with the expertise > to evaluate it with-respect-to whatever branding standards we have (I > imagine someone on the TC would have such contacts from the Foundation > or whoever does marketing). > > If you just make something up and send it, you're probably going to > get review questions like "how can we know this meets the branding > standards to be the logo on our most popular website" or "is this the > right size, format etc. for browsers in 2020" which are things > upstream marketing and web people could sign off on. So, personally, > I'd suggest a bit of pre-coordination there would mean any resulting > technical changes would be very non-controversial. That bridge has been crossed for opendev.org, which has a well-thought-out favicon. I think adding the same one to review.opendev.org is a technical exercise at this point. 
-Jim From corvus at inaugust.com Wed Jan 29 14:33:39 2020 From: corvus at inaugust.com (James E. Blair) Date: Wed, 29 Jan 2020 06:33:39 -0800 Subject: [OpenStack-Infra] tarballs.openstack.org to AFS publishing gameplan In-Reply-To: <20200129082514.GC520891@fedora19.localdomain> (Ian Wienand's message of "Wed, 29 Jan 2020 19:25:14 +1100") References: <20200129045624.GA520891@fedora19.localdomain> <20200129051855.76gvtqto3nijp5qh@yuggoth.org> <20200129052149.item27qruosqzicf@yuggoth.org> <20200129082514.GC520891@fedora19.localdomain> Message-ID: <87sgjyxrgc.fsf@meyer.lemoncheese.net> Ian Wienand writes: > On Wed, Jan 29, 2020 at 05:21:49AM +0000, Jeremy Stanley wrote: >> Of course I meant from /(.*) to tarballs.opendev.org/openstack/$1 so >> that clients actually get directed to the correct files. ;) > > Ahh yes, sorry you mentioned that in IRC and I should have > incorporated that. I'm happy with that; we can also have that > in-place and test it by overriding our hosts files before any > cut-over. The overall plan sounds good to me, as does the follow-up. I'm ambivalent about when we put the redirects in place (during or after the host move). Whichever is easiest (but my guess is that due to the additional testing we would be able to do, *during* might be easiest). -Jim From Greg.Waines at windriver.com Thu Jan 30 13:21:55 2020 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 30 Jan 2020 13:21:55 +0000 Subject: [OpenStack-Infra] Help / Pointers on mirroring https://opendev.org/starlingx in github.com Message-ID: <4D82380F-4A28-4106-BEC7-CEAEDA761B9E@windriver.com> Hello, I am working in the OpenStack StarlingX team. We are working on getting StarlingX certified through the CNCF conformance program, https://www.cncf.io/certification/software-conformance/ . ( in the same way that the OpenStack Magnum project got certified with CNCF ... 
you can search for them on the above page ) In order for the logo to be shown as based on open-source, CNCF requires that the code be mirrored on github.com . e.g. OpenStack Magnum has done this: https://github.com/openstack/magnum In fact, not sure why, but almost all the core OpenStack projects have github mirrors: nova, neutron, keystone, cinder, glance, etc. Since so many of the OpenStack projects currently mirror to github.com, I was hoping there was a well-documented recipe for setting this up. ??? Is there a recipe for mirroring an openstack project’s opendev repositories in github.com ??? Or someone that can help out in doing this ? Let me know, Greg. p.s. I cc’d Thierry because I noticed that there are repos for all the StarlingX subprojects in github.com now e.g. https://github.com/openstack/stx-metal But with a final commit/comment from Thierry that these repos have been retired and moved to opendev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Jan 30 18:12:38 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 30 Jan 2020 10:12:38 -0800 Subject: [OpenStack-Infra] =?utf-8?q?Help_/_Pointers_on_mirroring_https=3A?= =?utf-8?q?//opendev=2Eorg/starlingx_in_github=2Ecom?= In-Reply-To: <4D82380F-4A28-4106-BEC7-CEAEDA761B9E@windriver.com> References: <4D82380F-4A28-4106-BEC7-CEAEDA761B9E@windriver.com> Message-ID: On Thu, Jan 30, 2020, at 5:21 AM, Waines, Greg wrote: > > Hello, > > > I am working in the OpenStack StarlingX team. > > We are working on getting StarlingX certified through the CNCF > conformance program, > https://www.cncf.io/certification/software-conformance/ . > > > ( in the same way that the OpenStack Magnum project got certified with > CNCF ... you can search for them on the above page ) > > > In order for the logo to be shown as based on open-source, CNCF > requires that the code be mirrored on github.com . > > > e.g. 
OpenStack Magnum has done this: https://github.com/openstack/magnum > > In fact, not sure why, but almost all the core OpenStack projects have > github mirrors: nova, neutron, keystone, cinder, glance, etc. > > > Since so many of the OpenStack projects currently mirror to github.com, > I was hoping there was a well-documented recipe for setting this up. > > > **??? Is there a recipe for mirroring an openstack project’s opendev > repositories in github.com ???** > > **Or someone that can help out in doing this ?** Roman provided good feedback[0] on the thread you started on openstack-discuss. I suggest we keep further discussion there as that serves as a good starting point. Note that magnum uses the legacy replication tooling, but Airship uses the modern process that Roman describes (this is what should be used). [0] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012268.html > > > Let me know, > > Greg. > > > > p.s. I cc’d Thierry because I noticed that there are repos for all the > StarlingX subprojects in github.com now > > e.g. https://github.com/openstack/stx-metal > > But with a final commit/comment from Thierry that these repos have been > retired and moved to opendev
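[Editorial aside, not part of the thread: this is not the opendev replication process Clark refers to, but for readers curious about the underlying mechanics, a one-way mirror is essentially a bare clone plus a mirror push. All repository names and paths below are placeholders, with a local bare repository standing in for github.com.]

```shell
set -e
# Create a throwaway source repo with one commit (placeholder for the
# canonical opendev repository).
git init -q -b main source-repo
git -C source-repo -c user.email=ci@example.org -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# Bare clone carrying every ref, then mirror-push all refs (branches and
# tags) to the stand-in for the github.com remote.
git clone -q --mirror source-repo mirror.git
git init -q --bare -b main github-stand-in.git
git -C mirror.git push -q --mirror "$PWD/github-stand-in.git"
```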