From emilien at redhat.com Fri May 1 00:37:28 2020 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 30 Apr 2020 20:37:28 -0400 Subject: [tripleo] CI In-Reply-To: References: Message-ID: Thanks a lot Wes + team for the hard work every day to keep CI stable. On Thu, Apr 30, 2020 at 7:02 PM Wesley Hayutin wrote: > Greetings, > > Status update... > Master: GREEN > > Stable Branches impacted by: > https://bugs.launchpad.net/tripleo/+bug/1875890 fixed > Now we are trying to promote each branch to level out pacemaker on the > node and containers. Queens is promoting now. > > Train: GREEN > > Queens: RED ( current priority ) > In addition to the pacemaker issue, which has been resolved in our periodic > testing jobs, we're hitting issues w/ instances timing out in tempest > https://bugs.launchpad.net/tripleo/+bug/1876087 > > Stein: RED > Also seems to have the same issue as Queens > https://bugs.launchpad.net/tripleo/+bug/1876087 ( under investigation for > Stein ) > > Rocky: RED > Also seems to have the same issue as Queens > https://bugs.launchpad.net/tripleo/+bug/1876087 ( under investigation for > Rocky ) > I will be promoting Rocky to level out pacemaker next. > > In order to get the voting jobs in queens back to green, I'm disabling > tempest on containers-multinode. https://review.opendev.org/#/c/724703/ > > Additional notes may be found: > https://hackmd.io/1pY-KQB_QwOe-a-5oEXTRg > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed...
URL: From masayuki.igawa at gmail.com Fri May 1 00:57:31 2020 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Fri, 01 May 2020 09:57:31 +0900 Subject: [qa][ptg] Virtual PTG planning In-Reply-To: <1c716736-b496-4f3a-b011-5ee4df7b5ace@www.fastmail.com> References: <036b07ba-2978-489c-b1b4-3d8f93a8cd30@www.fastmail.com> <1c716736-b496-4f3a-b011-5ee4df7b5ace@www.fastmail.com> Message-ID: Hi team, I've changed the QA slots to the following since the original slots were conflicting with some of the QA topic owners' slots. * Monday, June 1: 13-14, 14-15 UTC * Tuesday, June 2: 13-14, 14-15 UTC If you have any comments or additional topics, please let us know on this ML or the IRC. Best Regards, -- Masayuki Igawa On Tue, Apr 28, 2020, at 23:16, Masayuki Igawa wrote: > Hi team, > > Based on the doodle voting, I've signed up QA for the following > four one-hour slots in the "Icehouse" room for the Victoria virtual PTG[1] > > Wednesday, June 3: 13-14, 14-15 UTC > Thursday, June 4: 13-14, 14-15 UTC > > If you have any comments or additional topics, please let us know > on this ML or the IRC. > > [1] https://etherpad.opendev.org/p/qa-victoria-ptg > > Best Regards, > -- Masayuki Igawa > > On Thu, Apr 23, 2020, at 11:31, Masayuki Igawa wrote: > > Hi team, > > > > Short summary: > > 1. Please take a look at the QA PTG etherpad[2] and put your name and > > topics there > > 2. Please vote on the doodle poll[3] > > > > Details: > > As we discussed in the office hours, we're now collecting topics[3] for > > the PTG[1]. > > And, we need to fix our slots for the PTG. We now have 5 topics[2], and > > we'll have > > some more, possibly. So, we'll need 2- or 3-hour slots for 2 days, or > > more if we > > have more topics. > > > > Anyway, please fill out the doodle poll[3] if you'd like to participate > > in the PTG. And, > > please add your name and topics to the etherpad so that we can discuss > > them during > > the following office hours.
> > > > > > [1] > > http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014126.html > > [2] https://etherpad.opendev.org/p/qa-victoria-ptg > > [3] https://doodle.com/poll/5awstyshtquuwcq3 > > > > -- Masayuki > > > > > > From gmann at ghanshyammann.com Fri May 1 02:53:59 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 30 Apr 2020 21:53:59 -0500 Subject: [goals][Drop Python 2.7 Support] Status Report: COMPLETED \o/ Message-ID: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com> Hello Everyone, Please find the final status for the py2.7 drop goal. Finally, it is COMPLETED!! With this (after three continuous cycles), I am taking a break from community goals (at least for the V cycle :))!!. Wiki page is updated: https://wiki.openstack.org/wiki/Python3 I have added the 'Python3 Only' column here https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects Completion Report: =============== * Swift & Storlets are the two projects keeping py2.7 support. * All other projects have completed the goal and are now python3-only. * It has been another roller coaster :) and I appreciate everyone who helped in this work and all the projects reviewing the patches on time, with few exceptions :). * A lot of gate issues occurred and were fixed during this transition, especially in making branchless tooling test mixed python versions ( stable branches on py2 and others on py3). * I finished the audit for all the projects. The requirements repo will clean up the py2 caps in the Victoria cycle[1]. Some ongoing work on deployment project repos or unmaintained repos: ------------------------------------------------------------------------------------------- These are dependent on other work for testing frameworks etc. This can be continued and need not be tracked under this goal. * python-barbicanclient: https://review.opendev.org/#/c/699096/ ** This repo seems unmaintained for the last 6 months and the gate is already broken.
* Openstack Charms - Most of them have merged; a few are waiting for the team to migrate to the Python3 Zaza functional test framework. * Openstackansible - This will finish once centos jobs are migrated to CentOS 8. * Openstack-Helm - No conclusion yet from the helm team on what else to do for the py2 drop. -gmann From gouthampravi at gmail.com Fri May 1 04:58:16 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 30 Apr 2020 21:58:16 -0700 Subject: [manila][stable] Additions and Removals from stable-core In-Reply-To: References: Message-ID: Hello stable-maint-core members, Just bumping this thread up in case one of you can help me add these two individuals to the manila-stable-maint team! Thanks a ton :) Goutham On Fri, Apr 24, 2020 at 10:57 AM Goutham Pacha Ravi wrote: > > Hello Stackers, > > I'd like to add a couple of maintainers to manila-stable-maint: > https://review.opendev.org/#/admin/groups/1099,members > > - dviroel (Douglas Viroel) > - silvacarlos (Carlos Eduardo) > > Both Douglas and Carlos understand the stable branch policy [1] and > adhere to it. > > Thanks! > Goutham > > > [1] https://docs.openstack.org/project-team-guide/stable-branches.html From sean.mcginnis at gmx.com Fri May 1 08:55:05 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 1 May 2020 03:55:05 -0500 Subject: [manila][stable] Additions and Removals from stable-core In-Reply-To: References: Message-ID: <618425dd-d138-5bdc-54d3-6c030a1f1900@gmx.com> >> Hello Stackers, >> >> I'd like to add a couple of maintainers to manila-stable-maint: >> https://review.opendev.org/#/admin/groups/1099,members >> >> - dviroel (Douglas Viroel) >> - silvacarlos (Carlos Eduardo) >> >> Both Douglas and Carlos understand the stable branch policy [1] and >> adhere to it. >> >> Thanks! >> Goutham >> >> >> [1] https://docs.openstack.org/project-team-guide/stable-branches.html I took a look at the stable branch reviews.
I think normally we like to see a few more reviews having been done for stable branches, but there were at least several done for the last few stable branches, and they all looked good. I have now added Douglas and Carlos to manila-stable-maint. The overall page is linked above, but just for completeness, please pay particular attention to the Review Guidelines section: https://docs.openstack.org/project-team-guide/stable-branches.html#review-guidelines Sean From ruslanas at lpic.lt Fri May 1 08:57:42 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Fri, 1 May 2020 10:57:42 +0200 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: Hi, I just wanted to ask: will it be sufficient to run: ansible-playbook -vvvv -i inventory.yaml common_deploy_steps_tasks_step_1.yaml ? Testing takes a long time when running the full openstack undercloud install. I am also trying with a downgraded version of containernetworking-plugins.x86_64 0.8.1-2.el7.centos @extras My next thought is to install the packages at the same versions as I have on the older box, where everything is running. On Thu, 30 Apr 2020 at 17:02, Ruslanas Gžibovskis wrote: > By the way, > > I have shrunk ansible.log a bit; you can find it here: > http://paste.debian.net/hidden/ffe95b54/ > > It looks like containers are going up, and I see them being up. > > but when I exec: > [root at remote-u stack]# podman run -it d7b8f19cc5ed /bin/bash > Error: error configuring network namespace for container > 0f5d43e509f3b673180a677e192dd5f2498f6061680cb0db9ae15d739c2337e3: Missing > CNI default network > [root at remote-u stack]# > > I get this interesting line; should I be getting it?
> > also, here is my undercloud.conf: > http://paste.debian.net/hidden/09feaefa/ > > also, as I mentioned previously, paunch.log is empty; I have added +w to > it but it is still empty. > > [root at remote-u stack]# cat /var/log/paunch.log > [root at remote-u stack]# ls -la /var/log/paunch.log > -rw-rw-rw-. 1 root root 0 Apr 28 10:59 /var/log/paunch.log > [root at remote-u stack]# > > And luckily I do not have a proxy, but on another site, where we have a proxy, > we face similar issues, and it fails at the same place. > > On Wed, 29 Apr 2020 at 12:02, Wu, Heng wrote: > >> > I am new to containers, can I somehow transfer these images between them? >> As I understand it, that might help? >> >> >> >> The image will be downloaded automatically with the installation, you >> don't need to do it manually. >> >> The problem seems to be that the container failed to start, not >> errors in the images themselves. >> >> You may need to check the configuration of undercloud.conf and proxy >> settings (if they exist). >> >> >> > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri May 1 09:48:57 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 1 May 2020 11:48:57 +0200 Subject: [Reno][Neutron] Release notes for stable/ussuri Message-ID: <20200501094857.og7cceclwnj63qgy@skaplons-mac> Hi, I wanted to add a prelude section to Neutron's release notes for stable/ussuri, but I screwed up and forgot to do it before the stable/ussuri branch was created. So now I have patch [1], but the notes aren't going to the stable/ussuri branch at all. I found a way to exclude it from the current (V) release notes, but is there any way to force it into stable/ussuri's release notes now?
[1] https://review.opendev.org/#/c/724809 -- Slawek Kaplonski Senior software engineer Red Hat From sean.mcginnis at gmx.com Fri May 1 10:16:01 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 1 May 2020 05:16:01 -0500 Subject: [Reno][Neutron] Release notes for stable/ussuri In-Reply-To: <20200501094857.og7cceclwnj63qgy@skaplons-mac> References: <20200501094857.og7cceclwnj63qgy@skaplons-mac> Message-ID: On 5/1/20 4:48 AM, Slawek Kaplonski wrote: > Hi, > > I wanted to add some prelude section to Neutron's release notes for > stable/ussuri but I screw it up and forgot to do it before stable/ussuri branch > was created. > So now I have patch [1] but notes aren't going to stable/ussuri branch at all. > I found the way how to exclude it from current (V) release notes but is there > any way to somehow force to add it to stable/ussuri's release notes now? > > [1] https://review.opendev.org/#/c/724809 > In a case like this, I think the easiest approach is to just propose the release note directly to the stable/ussuri branch. That way you don't need to tell reno to ignore it on the master branch so it doesn't show up in what will be the victoria release notes. From sean.mcginnis at gmx.com Fri May 1 10:32:45 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 1 May 2020 05:32:45 -0500 Subject: [release] Release countdown for week R-1, May 4 - 8 Message-ID: <20200501103245.GA401127@sm-workstation> Development Focus ----------------- We are on the final mile of the Ussuri development cycle! Remember that the Ussuri final release will include the latest release candidate (for cycle-with-rc deliverables) or the latest intermediary release (for cycle-with-intermediary deliverables) available. May 7 is the deadline for final Ussuri release candidates as well as any last cycle-with-intermediary deliverables. We will then enter a quiet period until we tag the final release on May 13. 
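As Sean notes above, proposing the release note directly to the stable/ussuri branch is an ordinary Gerrit workflow against that branch. A minimal local sketch of the branching involved (repo, branch, and file names here are illustrative; in a real tree you would use `reno new <slug>` and `git review` instead of the plain git calls below):

```shell
# Simulate proposing a reno note directly against a stable branch.
# Everything is local and illustrative; no Gerrit interaction happens here.
rm -rf /tmp/reno-demo
git init -q /tmp/reno-demo
cd /tmp/reno-demo
git config user.email "demo@example.com"
git config user.name "demo"
git commit -q --allow-empty -m "initial"
# Pretend this is where stable/ussuri was cut.
git branch stable/ussuri
# Work on a topic branch based on the stable branch, not master.
git checkout -q -b prelude-ussuri stable/ussuri
mkdir -p releasenotes/notes
# reno normally generates releasenotes/notes/<slug>-<random>.yaml for you.
cat > releasenotes/notes/ussuri-prelude-0000000000000000.yaml <<'EOF'
---
prelude: >
  Placeholder prelude text for the Ussuri release.
EOF
git add releasenotes/notes
git commit -q -m "Add Ussuri prelude release note"
# One commit on top of stable/ussuri, ready to push for review.
git log --oneline stable/ussuri..HEAD
```

Because the note only ever exists on the stable branch, reno will not pick it up when building the notes for the next (Victoria) series on master.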
Teams should be prioritizing fixing release-critical bugs before that deadline. Otherwise it's time to start planning the Victoria development cycle, including discussing PTG sessions content, in preparation for the virtual Victoria PTG event taking place June 1-5. Actions ------- Watch for any translation patches coming through on the stable/ussuri branch and merge them quickly. If you discover a release-critical issue, please make sure to fix it on the master branch first, then backport the bugfix to the stable/ussuri branch before triggering a new release. Please drop by #openstack-release with any questions or concerns about the upcoming release! Upcoming Deadlines & Dates -------------------------- Final RC deadline: May 7 (R-1 week) Final Ussuri release: May 13 Virtual Victoria PTG: June 1-5 From ruslanas at lpic.lt Fri May 1 10:33:26 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Fri, 1 May 2020 12:33:26 +0200 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: Hi all, I also noticed that /var/lib/tripleo-config/container-startup-config-step_6.json looks empty, even though /var/log/containers/stdouts/*.log contains "step": 6 messages. [root at remote-u stack]# ls -la /var/lib/tripleo-config/container-startup-config-step_* -rw-------. 1 root root 7774 May 1 11:56 /var/lib/tripleo-config/container-startup-config-step_1.json -rw-------. 1 root root 9793 May 1 11:56 /var/lib/tripleo-config/container-startup-config-step_2.json -rw-------. 1 root root 28945 May 1 11:56 /var/lib/tripleo-config/container-startup-config-step_3.json -rw-------. 1 root root 57302 May 1 11:56 /var/lib/tripleo-config/container-startup-config-step_4.json -rw-------. 1 root root 4050 May 1 11:56 /var/lib/tripleo-config/container-startup-config-step_5.json -rw-------.
1 root root 2 Apr 28 10:55 /var/lib/tripleo-config/container-startup-config-step_6.json On Fri, 1 May 2020 at 10:57, Ruslanas Gžibovskis wrote: > Hi, I just wanted to notice, that I have > > will it be sufficient to run: > ansibple-playbook -vvvv -i inventory.yaml > common_deploy_steps_tasks_step_1.yaml ??? > > cause testing takes looong time to run all openstack undercloud install > > Also trying with downgraded version of containernetworking-plugins.x86_64 > 0.8.1-2.el7.centos @extras > > My next thought is to run installation of packages to the same version as > I have on the older box, where everything is running. > > > On Thu, 30 Apr 2020 at 17:02, Ruslanas Gžibovskis > wrote: > >> By the way, >> >> I have shrinked a bit ansible.log, you can find it here: >> http://paste.debian.net/hidden/ffe95b54/ >> >> It looks like containers are going up, and I see them being up. >> >> but when I exec: >> [root at remote-u stack]# podman run -it d7b8f19cc5ed /bin/bash >> Error: error configuring network namespace for container >> 0f5d43e509f3b673180a677e192dd5f2498f6061680cb0db9ae15d739c2337e3: Missing >> CNI default network >> [root at remote-u stack]# >> >> I get this interesting line, should I get it? >> >> also, here is my undercloud.conf: >> http://paste.debian.net/hidden/09feaefa/ >> >> also, as I mentioned, previously, paunch.log is empty, I have added +w to >> it but it still empty. >> >> [root at remote-u stack]# cat /var/log/paunch.log >> [root at remote-u stack]# ls -la /var/log/paunch.log >> -rw-rw-rw-. 1 root root 0 Apr 28 10:59 /var/log/paunch.log >> [root at remote-u stack]# >> >> And I do not have proxy, luckily. but on another site, where we have >> proxy we face similar issues. and fails on same place. >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Fri May 1 12:37:25 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 1 May 2020 08:37:25 -0400 Subject: [cinder] critical bugs/patches for RC-2 In-Reply-To: References: Message-ID: <15374c8c-ba22-990b-58aa-ba04cdfe5f79@gmail.com> I realize that today and this weekend are holidays in various parts of the world, so I wish a happy holiday to everyone (even us poor suckers who are working today). On the off chance that you get bored and are looking for something to lively up yourself, members of cinder-stable-maint can make themselves extremely helpful by reviewing the "approved for backport" patches on the etherpad: https://etherpad.opendev.org/p/cinder-ussuri-rc-backport-potential We can cut RC-2 as soon as those land. cheers, brian On 4/27/20 11:39 AM, Brian Rosmaita wrote: > The proposed list is here: >   https://etherpad.opendev.org/p/cinder-ussuri-rc-backport-potential > > Please prioritize your reviewing accordingly. > > There are several proposed but not approved bugs on the list; I could > use some feedback on whether they are release-critical or not.  Please > reply on the etherpad. > > > thanks, > brian From jungleboyj at gmail.com Fri May 1 14:28:38 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 1 May 2020 09:28:38 -0500 Subject: [goals][Drop Python 2.7 Support] Status Report: COMPLETED \o/ In-Reply-To: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com> References: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com> Message-ID: <835d59cd-fc70-788e-5eaf-7a89e4a8bbf7@gmail.com> Gmann, Congratulations and thank you for all your work around this! Jay On 4/30/2020 9:53 PM, Ghanshyam Mann wrote: > Hello Everyone, > > Please find the final status for the py2.7 drop goal. Finally, it is COMPLETED!! > With this (after three continuous cycles), I am taking a break from community goals (at least for V cycle :))!!. 
> > Wiki page is updated: https://wiki.openstack.org/wiki/Python3 > I have added the 'Python3 Only' column here https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > Completion Report: > =============== > * Swift & Storlets are the two projects keeping py2.7 support. > > * All projects have completed the goal and now are python3-only. > > * It has been another roller coaster :) and I appreciate everyone helped in this work and > all the projects reviewing the patches on time with few exceptions :). > > * A Lot of gate issues occurred and fixed during this transition especially making branchless > tooling testing the mixed python version ( stable branch on py2 and other on py3). > > * I finished the audit for all the projects. requirement repo will be cleaning up the py2 caps in Victoria cycle[1]. > > > Some ongoing work on deployement projects repo or unmaintained repo: > ------------------------------------------------------------------------------------------- > These are dpendendant on other work for testing framework etc. This can be continued and need > not to be tracked under this goal. > * python-barbicanclient: https://review.opendev.org/#/c/699096/ > ** This repo seems unmaintained for last 6 months and the gate is already broken. > * Openstack Charms - Most of them merged, few are waiting for the team to migrate to Python3 Zaza functional test framework. > * Openstackansible - This will finish once centos jobs will be migrated to Centos8. > * Openstack-Helm - No conclusion yet from help team on what else to do for py2 drop. > > -gmann > From whayutin at redhat.com Fri May 1 15:00:14 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 1 May 2020 09:00:14 -0600 Subject: [tripleo] CI In-Reply-To: References: Message-ID: On Thu, Apr 30, 2020 at 6:37 PM Emilien Macchi wrote: > Thanks a lot Wes + team for the hard work, every day to keep CI stable. 
> > On Thu, Apr 30, 2020 at 7:02 PM Wesley Hayutin > wrote: > >> Greetings, >> >> Status update... >> Master: GREEN >> >> Stable Branches impacted by: >> https://bugs.launchpad.net/tripleo/+bug/1875890 fixed >> Now we are trying to promote each branch to level out pacemaker on the >> node and containers. Queens is promoting now. >> >> Train: GREEN >> >> Queens: RED ( current priority ) >> In addition to the pacemaker issue which has resolved in our periodic >> testing jobs, we're hitting issues w/ instances timing out in tempest >> https://bugs.launchpad.net/tripleo/+bug/1876087 >> >> Stein: RED >> Also seems to have the same issue as Queens >> https://bugs.launchpad.net/tripleo/+bug/1876087 ( under investigation >> for Stein ) >> >> Rocky: RED >> Also seems to have the same issue as Queens >> https://bugs.launchpad.net/tripleo/+bug/1876087 ( under investigation >> for Rocky ) >> I will be promoting Rocky to level out pacemaker next. >> >> In order to get the voting jobs in queens back to green, I'm disabling >> tempest on containers-multinode. https://review.opendev.org/#/c/724703/ >> >> Additional notes may be found: >> https://hackmd.io/1pY-KQB_QwOe-a-5oEXTRg >> >> > > -- > Emilien Macchi > Status Update: Master: Green, but seeing several jobs failing on random tempest or container start issues atm. No pattern yet Train: Green Stein: Green Rocky: Green Queens: Green, the coverage here on scenario jobs is terrible as they are all failing. There is some discrepancy between periodic and check as periodic only went red on April 26th [1] vs. check jobs started going red on March 24th [2] Improvements in progress: At the moment we test CentOS CR [3] in our RDO periodic pipelines which is NOT sufficient to protect upstream jobs. It would not catch a pacemaker mismatch between the nodepool node and containers. Containers are rebuilt for each test in RDO. 
To help catch issues w/ the latest CentOS packages in CR we are making the following changes to our upstream periodic jobs. Gabriele Cerami and I are working through the design now. Input is welcome. TLDR: Using the upstream zuul ensures that the containers and nodes have the potential to mismatch in versions, thus catching pacemaker issues in advance. https://review.opendev.org/#/c/724846/ https://review.opendev.org/#/c/724858/ Thanks all [1] https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-wednesday-weekend&job_name=periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset016-queens%09 [2] https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-scenario001-multinode-oooq-container&branch=stable%2Fqueens [3] https://wiki.centos.org/AdditionalResources/Repositories/CR Thanks Emilien! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate.johnston at redhat.com Fri May 1 15:28:20 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Fri, 1 May 2020 11:28:20 -0400 Subject: [goals][Drop Python 2.7 Support] Status Report: COMPLETED \o/ In-Reply-To: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com> References: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com> Message-ID: <20200501152820.ml67pjhx7lzj5hyy@firewall> This has been a tremendous coordination effort with a huge number of changes, some of them quite complex. Congratulations Ghanshyam and everyone involved with the drop-py27 effort, and thank you for all the time and effort you have put into this. Nate On Thu, Apr 30, 2020 at 09:53:59PM -0500, Ghanshyam Mann wrote: > Hello Everyone, > > Please find the final status for the py2.7 drop goal. Finally, it is COMPLETED!! > With this (after three continuous cycles), I am taking a break from community goals (at least for V cycle :))!!.
> > Wiki page is updated: https://wiki.openstack.org/wiki/Python3 > I have added the 'Python3 Only' column here https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > Completion Report: > =============== > * Swift & Storlets are the two projects keeping py2.7 support. > > * All projects have completed the goal and now are python3-only. > > * It has been another roller coaster :) and I appreciate everyone helped in this work and > all the projects reviewing the patches on time with few exceptions :). > > * A Lot of gate issues occurred and fixed during this transition especially making branchless > tooling testing the mixed python version ( stable branch on py2 and other on py3). > > * I finished the audit for all the projects. requirement repo will be cleaning up the py2 caps in Victoria cycle[1]. > > > Some ongoing work on deployement projects repo or unmaintained repo: > ------------------------------------------------------------------------------------------- > These are dpendendant on other work for testing framework etc. This can be continued and need > not to be tracked under this goal. > * python-barbicanclient: https://review.opendev.org/#/c/699096/ > ** This repo seems unmaintained for last 6 months and the gate is already broken. > * Openstack Charms - Most of them merged, few are waiting for the team to migrate to Python3 Zaza functional test framework. > * Openstackansible - This will finish once centos jobs will be migrated to Centos8. > * Openstack-Helm - No conclusion yet from help team on what else to do for py2 drop. 
> > -gmann > From skaplons at redhat.com Fri May 1 15:28:27 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 1 May 2020 17:28:27 +0200 Subject: [Reno][Neutron] Release notes for stable/ussuri In-Reply-To: References: <20200501094857.og7cceclwnj63qgy@skaplons-mac> Message-ID: <20200501152827.h6cvcfyrbuc2czqp@skaplons-mac> Hi, On Fri, May 01, 2020 at 05:16:01AM -0500, Sean McGinnis wrote: > On 5/1/20 4:48 AM, Slawek Kaplonski wrote: > > Hi, > > > > I wanted to add some prelude section to Neutron's release notes for > > stable/ussuri but I screw it up and forgot to do it before stable/ussuri branch > > was created. > > So now I have patch [1] but notes aren't going to stable/ussuri branch at all. > > I found the way how to exclude it from current (V) release notes but is there > > any way to somehow force to add it to stable/ussuri's release notes now? > > > > [1] https://review.opendev.org/#/c/724809 > > > In a case like this, I think the easiest approach is to just propose the > release note directly to the stable/ussuri branch. That way you don't > need to tell reno to ignore it on the master branch so it doesn't show > up in what will be the victoria release notes. > Thx Sean for reply. I did it that way. Patch is now proposed directly to stable/Ussuri branch [2] [2] https://review.opendev.org/724875 -- Slawek Kaplonski Senior software engineer Red Hat From fungi at yuggoth.org Fri May 1 15:47:43 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 1 May 2020 15:47:43 +0000 Subject: [all] IRC channel cleanup In-Reply-To: <20200312004325.p7zwr7pkc6aeuezc@yuggoth.org> References: <20200312004325.p7zwr7pkc6aeuezc@yuggoth.org> Message-ID: <20200501154743.mtjizlsapgx3kmld@yuggoth.org> Since my spot-check in March, discussion in #openstack-outreachy has picked back up slightly. The other 13 from that analysis, however, have continued to go unused. 
Since no objections were raised, I've proposed to stop logging the following channels and remove our meetbot, statusbot and gerritbot services from them: #congress #murano #openstack-browbeat #openstack-dragonflow #openstack-ec2api #openstack-forum #openstack-heat-translator #openstack-net-bgpvpn #openstack-performance #openstack-sprint #openstack-tricircle #openstack-women #scientific-wg These removals are being handled by changes https://review.opendev.org/724878 and https://review.opendev.org/724879 but can be partially reverted if one or more of the channels resumes its former uses. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Fri May 1 16:11:23 2020 From: amy at demarco.com (Amy Marrich) Date: Fri, 1 May 2020 11:11:23 -0500 Subject: [all] IRC channel cleanup In-Reply-To: <20200501154743.mtjizlsapgx3kmld@yuggoth.org> References: <20200312004325.p7zwr7pkc6aeuezc@yuggoth.org> <20200501154743.mtjizlsapgx3kmld@yuggoth.org> Message-ID: Jeremy, I thought I had gotten back to you on #openstack-women. As the group is now part of the D&I WG, and has been for some time, it can be archived. I'll also add potentially renaming #openstack-diversity to #osf-diversity, to reflect our broader base of projects, for Monday's meeting. Thanks, Amy (spotz) On Fri, May 1, 2020 at 10:50 AM Jeremy Stanley wrote: > Since my spot-check in March, discussion in #openstack-outreachy has > picked back up slightly. The other 13 from that analysis, however, > have continued to go unused.
Since no objections were raised, I've > proposed to stop logging the following channels and remove our > meetbot, statusbot and gerritbot services from them: > > #congress > #murano > #openstack-browbeat > #openstack-dragonflow > #openstack-ec2api > #openstack-forum > #openstack-heat-translator > #openstack-net-bgpvpn > #openstack-performance > #openstack-sprint > #openstack-tricircle > #openstack-women > #scientific-wg > > These removals are being handled by changes > https://review.opendev.org/724878 and > https://review.opendev.org/724879 but can be partially reverted if > one or more of the channels resumes its former uses. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri May 1 16:34:18 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 1 May 2020 18:34:18 +0200 Subject: [stein][neutron] gratuitous arp In-Reply-To: <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> Message-ID: Hello Sean, to be honest I did not understand the difference between the first and second patch, but that is due to my limited skill and English. In any case I would like to test it. I saw I can download the files workaround.py and neutron.py, and that there is a new option, force_legacy_port_binding. How can I test? Should I enable the new option under the workarounds section in nova.conf on the compute nodes, setting it to true? Should the downloaded files (from the first or the second patch?) be copied onto the compute nodes as /usr/lib/python2.7/site-packages/nova/conf/workaround.py and nova/network/neutron.py, and then the nova compute service restarted? Will it work only for new instances, or also for running instances? Sorry for disturbing.
Best Regards Ignazio On Wed 29 Apr 2020, 19:49 Sean Mooney wrote: > On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > > Many thanks. > > Please keep in touch. > here are the two patches. > the first https://review.opendev.org/#/c/724386/ is the actual change to > add the new config option > this needs a release note and some tests but it should be functional hence > the [WIP] > i have not enabled the workaround in any job in this patch so the ci run > will assert this does not break > anything in the default case > > the second patch is https://review.opendev.org/#/c/724387/ which enables > the workaround in the multi node ci jobs > and is testing that live migration actually works when the workaround is > enabled. > > this should work as it is what we expect to happen if you are using a > modern nova with an old neutron. > it is marked [DNM] as i don't intend that patch to merge but if the > workaround is useful we might consider enabling > it for one of the jobs to get ci coverage but not all of the jobs. > > i have not had time to deploy a 2 node env today but ill try and test this > locally tomorrow. > > > > > Ignazio > > > > On Wed, 29 Apr 2020 at 16:55, Sean Mooney > > > wrote: > > > > > so being pragmatic i think the simplest path forward given my other patches > > > have not landed > > > in almost 2 years is to quickly add a workaround config option to disable > > > multiple port binding > > > which we can backport and then we can try and work on the actual fix after. > > > according to https://bugs.launchpad.net/neutron/+bug/1815989 that should > > > serve as a workaround > > > for those that have this issue but it's a regression in functionality. > > > > > > i can create a patch that will do that in an hour or so and submit a > > > followup DNM patch to enable the > > > workaround in one of the gate jobs that tests live migration.
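For anyone wanting to try the workaround discussed in this thread, the option appears to live under the [workarounds] section of nova.conf on the compute nodes. A sketch only: the option name comes from the WIP review linked above, and its exact semantics and default should be verified against the patch itself before use:

```ini
# nova.conf on a compute node -- illustrative fragment based on the WIP
# patch in this thread; verify the option against the review before use.
[workarounds]
# Disable the multiple port binding behaviour during live migration,
# working around the gratuitous ARP issue described in this thread.
force_legacy_port_binding = True
```

Whether this takes effect for already-running instances, or only for new ones, is exactly the open question Ignazio asks above.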
> > > i have a meeting in 10 mins and need to finish the patch im currently
> > > updating, but ill submit a poc once that is done.
> > >
> > > im not sure if i will be able to spend time on the actual fix which i
> > > proposed last year, but ill see what i can do.
> > >
> > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote:
> > > > PS
> > > > I have testing environments on queens, rocky and stein and I can run
> > > > tests as you need.
> > > > Ignazio
> > > >
> > > > On Wed, Apr 29, 2020 at 16:19 Ignazio Cassano <
> > > > ignaziocassano at gmail.com> wrote:
> > > >
> > > > > Hello Sean,
> > > > > the following is the configuration on my compute nodes:
> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt
> > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64
> > > > > libvirt-libs-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64
> > > > > libvirt-client-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64
> > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64
> > > > > libvirt-python-4.5.0-1.el7.x86_64
> > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64
> > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64
> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu
> > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64
> > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64
> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64
> > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch
> > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch
> > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64
> > > > >
> > > > > As for the firewall driver,
> > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini has:
> > > > > firewall_driver = iptables_hybrid
> > > > >
> > > > > I have the same libvirt/qemu version on queens, on rocky and on the
> > > > > stein testing environment, and the same firewall driver.
> > > > > Live migration on provider networks on queens works fine.
> > > > > It does not work fine on rocky and stein (the vm loses connectivity
> > > > > after it is migrated and starts to respond only when the vm sends a
> > > > > network packet, for example when chrony polls the time server).
> > > > >
> > > > > Ignazio
> > > > >
> > > > > On Wed, Apr 29, 2020 at 14:36 Sean Mooney <
> > > > > smooney at redhat.com> wrote:
> > > > >
> > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote:
> > > > > > > Hello, some updates about this issue.
> > > > > > > I read that someone has the same issue as reported here:
> > > > > > >
> > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139
> > > > > > >
> > > > > > > If you read the discussion, someone says that the garp must be
> > > > > > > sent by qemu during live migration.
> > > > > > > If this is true, it means that on rocky/stein the qemu/libvirt
> > > > > > > are bugged.
> > > > > >
> > > > > > it is not correct.
> > > > > > qemu/libvirt has always used RARP, which predates GARP, to serve
> > > > > > as its mac learning frames instead:
> > > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol
> > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html
> > > > > > however it looks like this was broken in 2016 in qemu 2.6.0
> > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html
> > > > > > but was fixed by
> > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b
> > > > > > can you confirm you are not using the broken 2.6.0 release and are
> > > > > > using 2.7 or newer, or 2.4 and older.
> > > > > >
> > > > > > > So I tried to use stein and rocky with the same version of
> > > > > > > libvirt/qemu packages I installed on queens (I updated compute
> > > > > > > and controller nodes on queens to obtain the same libvirt/qemu
> > > > > > > version deployed on rocky and stein).
> > > > > > >
> > > > > > > On queens live migration on provider networks continues to work
> > > > > > > fine.
> > > > > > > On rocky and stein it does not, so I think the issue is related
> > > > > > > to openstack components.
> > > > > >
> > > > > > on queens we have only a single port binding and nova blindly
> > > > > > assumes that the port binding details wont change when it does a
> > > > > > live migration, so it does not update the xml for the network
> > > > > > interfaces.
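To make the RARP mechanism discussed above concrete, here is a short illustrative sketch (this is not qemu's actual code; qemu emits frames of this shape from its self-announce logic after the guest resumes on the destination host):

```python
import struct

def build_rarp_announce(vm_mac: bytes) -> bytes:
    """Build the kind of RARP frame qemu broadcasts after a live
    migration so that switches relearn which port the guest MAC is
    behind (an illustrative sketch, not qemu's implementation)."""
    assert len(vm_mac) == 6
    # Ethernet header: destination broadcast, source = guest MAC,
    # EtherType 0x8035 (RARP)
    eth_header = b"\xff" * 6 + vm_mac + struct.pack("!H", 0x8035)
    # RARP payload: htype=1 (Ethernet), ptype=0x0800 (IPv4),
    # hlen=6, plen=4, op=3 ("request reverse")
    rarp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)
    rarp += vm_mac + b"\x00" * 4  # sender hardware addr, sender IP unknown
    rarp += vm_mac + b"\x00" * 4  # target hardware addr, target IP unknown
    return eth_header + rarp

frame = build_rarp_announce(b"\x52\x54\x00\x12\x34\x56")
print(len(frame))  # 42
```

The point of such a frame is purely layer-2 MAC learning: unlike a gratuitous ARP, it carries no IP information, which is why it works even when qemu knows nothing about the guest's addresses.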
> > > > > > the port binding is updated after the migration is complete in
> > > > > > post_livemigration.
> > > > > > in rocky+ neutron optionally uses the multiple port bindings flow
> > > > > > to prebind the port to the destination so it can update the xml if
> > > > > > needed, and if post copy live migration is enabled it will
> > > > > > asynchronously activate the dest port binding before
> > > > > > post_livemigration, shortening the downtime.
> > > > > >
> > > > > > if you are using the iptables firewall, os-vif will have
> > > > > > precreated the ovs port and intermediate linux bridge before the
> > > > > > migration started, which allows neutron to wire it up (put it on
> > > > > > the correct vlan and install security groups) before the vm
> > > > > > completes the migration.
> > > > > >
> > > > > > if you are using the ovs firewall, os-vif still precreates the ovs
> > > > > > port, but libvirt deletes it and recreates it too. as a result
> > > > > > there is a race when using the openvswitch firewall that can
> > > > > > result in the RARP packets being lost.
> > > > > >
> > > > > > > Best Regards
> > > > > > > Ignazio Cassano
> > > > > > >
> > > > > > > On Mon, Apr 27, 2020 at 19:50 Sean Mooney <
> > > > > > > smooney at redhat.com> wrote:
> > > > > > >
> > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote:
> > > > > > > > > Hello, I have this problem with rocky or newer with the
> > > > > > > > > iptables_hybrid firewall.
> > > > > > > > > So, can I solve it using post copy live migration?
> > > > > > > > so this behavior has always been how nova worked, but in
> > > > > > > > rocky the
> > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html
> > > > > > > > spec introduced the ability to shorten the outage by
> > > > > > > > pre-binding the port and activating it when the vm is resumed
> > > > > > > > on the destination host, before we get to post live migrate.
> > > > > > > >
> > > > > > > > this reduces the outage time, although it cant be fully
> > > > > > > > eliminated as some level of packet loss is always expected
> > > > > > > > when you live migrate.
> > > > > > > >
> > > > > > > > so yes, enabling post copy live migration should help, but be
> > > > > > > > aware that if a network partition happens during a post copy
> > > > > > > > live migration the vm will crash and need to be restarted.
> > > > > > > > it is generally safe to use and will improve the migration
> > > > > > > > performance, but unlike pre copy migration, if the guest
> > > > > > > > resumes on the dest and a memory page has not been copied yet
> > > > > > > > then it must wait for it to be copied and retrieve it from the
> > > > > > > > source host. if the connection to the source host is
> > > > > > > > interrupted then the vm cant do that, and the migration will
> > > > > > > > fail and the instance will crash. if you are using precopy
> > > > > > > > migration and there is a network partition during the
> > > > > > > > migration, the migration will fail but the instance will
> > > > > > > > continue to run on the source host.
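For reference, post-copy must also be explicitly permitted on the nova side. A minimal nova.conf fragment would look like this (this is the standard nova libvirt option; note post-copy additionally requires userfaultfd support in the kernel on both hosts):

```ini
[libvirt]
live_migration_permit_post_copy = True
```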
> > > > > > > > so while i would still recommend using it, it is just good to
> > > > > > > > be aware of that behavior change.
> > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > > Ignazio
> > > > > > > > >
> > > > > > > > > On Mon, 27 Apr 2020, 17:57 Sean Mooney wrote:
> > > > > > > > >
> > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano wrote:
> > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm
> > > > > > > > > > > migrates from one node to another I cannot ping it for
> > > > > > > > > > > several minutes. If in the vm I put a script that pings
> > > > > > > > > > > the gateway continuously, the live migration works fine
> > > > > > > > > > > and I can ping it. Why does this happen? I read
> > > > > > > > > > > something about gratuitous arp.
> > > > > > > > > >
> > > > > > > > > > qemu does not use gratuitous arp but instead uses an older
> > > > > > > > > > protocol called RARP to do mac address learning.
> > > > > > > > > >
> > > > > > > > > > what release of openstack are you using, and are you using
> > > > > > > > > > the iptables firewall or the openvswitch firewall.
> > > > > > > > > >
> > > > > > > > > > if you are using openvswitch there is nothing we can do
> > > > > > > > > > until we finally delegate vif plugging to os-vif.
> > > > > > > > > > currently libvirt handles interface plugging for kernel
> > > > > > > > > > ovs when using the openvswitch firewall driver.
> > > > > > > > > > https://review.opendev.org/#/c/602432/ would address that,
> > > > > > > > > > but it and the neutron patch
> > > > > > > > > > https://review.opendev.org/#/c/640258 are rather outdated.
> > > > > > > > > > while libvirt is plugging the vif there will always be a
> > > > > > > > > > race condition where the RARP packets sent by qemu and the
> > > > > > > > > > mac learning packets will be lost.
> > > > > > > > > >
> > > > > > > > > > if you are using the iptables firewall and you have
> > > > > > > > > > openstack rocky or later, then enabling post copy live
> > > > > > > > > > migration should reduce the downtime. in this
> > > > > > > > > > configuration we do not have the race between neutron and
> > > > > > > > > > libvirt, so the rarp packets should not be lost.
> > > > > > > > > >
> > > > > > > > > > > Please, help me ?
> > > > > > > > > > > Any workaround, please ?
> > > > > > > > > > >
> > > > > > > > > > > Best Regards
> > > > > > > > > > > Ignazio
From fungi at yuggoth.org Fri May 1 16:36:38 2020
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 1 May 2020 16:36:38 +0000
Subject: [all] IRC channel cleanup
In-Reply-To: References: <20200312004325.p7zwr7pkc6aeuezc@yuggoth.org> <20200501154743.mtjizlsapgx3kmld@yuggoth.org>
Message-ID: <20200501163637.diihxamyg3ilbmpb@yuggoth.org>

On 2020-05-01 11:11:23 -0500 (-0500), Amy Marrich wrote:
> I thought I had gotten back to you on #openstack-women. As the
> group is now part of the D&I WG and has been for some time, it can
> be archived.

You did, but since there's no rush it was just easier to include it in
the bulk removals, as that involves fewer changes to review. As I
said, nobody objected, and that doesn't contradict the fact that you
and others also responded in the affirmative on a few of these.

> I'll also add potentially renaming #openstack-diversity to
> #osf-diversity to reflect our broader base of projects for
> Monday's meeting.
[...]

That's definitely doable, though it requires a few manual steps to
perform a thorough redirection (we have to set the old channel to
forward any subsequent joins to the new channel, and inform anyone
still lurking in the old one to move to the new one, or maybe
eventually just kick them out of the old channel so we can lock it
completely). Be aware though, we haven't used #osf- prefixed channels
before now and have a number of other OSF-specific channels which use
the #openstack- prefix, like #openstack-board, #openstack-foundation,
#openstack-summit and so on. If we're going to start having more
#osf- channels, we may want to look into reserving it as a namespace
prefix with Freenode staff to give us better control over future
channels using the same prefix.
--
Jeremy Stanley
From smooney at redhat.com Fri May 1 16:47:18 2020
From: smooney at redhat.com (Sean Mooney)
Date: Fri, 01 May 2020 17:47:18 +0100
Subject: [stein][neutron] gratuitous arp
In-Reply-To: References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com>
Message-ID: <7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com>

On Fri, 2020-05-01 at 18:34 +0200, Ignazio Cassano wrote:
> Hello Sean,
> to be honest I did not understand what is the difference between the first
> and second patch but it is due to my poor skill and my poor english.
no worries. the first patch is the actual change to add the new config
option. the second patch is just a change to force our ci jobs to enable
the config option. we probably dont want to do that permanently, which is
why i have marked it [DNM] or "do not merge", so it is just there to prove
the first patch is correct.
> Anycase I would like to test it. I saw I can download files :
> workarounds.py and neutron.py and there is a new option
> force_legacy_port_binding.
> How can I test? I must enable the new option under the workarounds
> section in the nova.conf on compute nodes, setting it to true?
yes, that is correct. if you apply the first patch you need to set the new
config option in the workarounds section in the nova.conf on the
controller. specifically, the conductor needs to have this set. i dont
think this is needed on the compute nodes; at least it should not need to
be set in the compute node nova.conf for the live migration issue.
> The files downloaded (from the first or second patch?) must be copied on
> compute nodes under /usr/lib/python2.7/site-packages as
> nova/conf/workarounds.py and nova/network/neutron.py, and then restart
> the nova compute service?
once we have merged this in master ill backport it to the different
openstack versions back to rocky. if you want to test it before then, the
simplest thing to do is just manually make the same change, unless you are
using devstack in which case you could cherry-pick the change to whatever
branch you are testing.
> It should work only for new instances or also for running instances?
it will apply to all instances. what the change is doing is disabling our
detection of neutron support for the multiple port binding workflow. we
still have compatibility code for supporting old versions of neutron. we
probably should remove that at some point, but when the config option is
set we will ignore whether you are using an old or new neutron and just
fall back to how we did things before rocky.
in principle that should make live migration have more packet loss, but
since people have reported it actually fixes the issue in this case, i
have written the patch so you can opt in to the old behaviour. if that
works for you in your testing, we can continue to keep the workaround and
old compatibility code until we resolve the issue when using the multiple
port binding flow.
> Sorry for disturbing.
dont be sorry, it is fine to ask questions. just be aware it is a long
weekend, so i will not be working monday but should be back on tuesday.
ill update the patch then with a release note and a unit test, and
hopefully i can get some cores to review it.
> Best Regards
> Ignazio
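The opt-in behaviour Sean describes above can be sketched as follows (function and variable names are illustrative, not nova's actual code; "binding-extended" is the neutron API extension alias associated with the multiple-port-bindings flow):

```python
# Sketch of the workaround's effect: when the flag is set, nova ignores
# whatever extensions neutron advertises and falls back to the
# pre-Rocky single-port-binding live-migration flow.
PORT_BINDING_EXTENDED = "binding-extended"  # neutron extension alias

def supports_multiple_port_bindings(neutron_extensions,
                                    force_legacy_port_binding=False):
    if force_legacy_port_binding:
        # opt-in workaround: pretend we are talking to an old neutron
        return False
    return PORT_BINDING_EXTENDED in neutron_extensions

# a modern neutron advertises the extension...
print(supports_multiple_port_bindings({"binding-extended"}))  # True
# ...but the workaround forces the legacy flow anyway
print(supports_multiple_port_bindings({"binding-extended"},
                                      force_legacy_port_binding=True))  # False
```

This is why the option only needs to be visible where the detection runs (the conductor/controller in the reply above), and why it affects all instances rather than only new ones.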
> > > > Best Regards
> > > > Ignazio

From pramchan at yahoo.com  Fri May  1 19:05:21 2020
From: pramchan at yahoo.com (prakash RAMCHANDRAN)
Date: Fri, 1 May 2020 19:05:21 +0000 (UTC)
Subject: [InteropWG] Draft Guidelines for 2020.06 Ussuri core and addons for orchestration and dns
References: <1892120700.539821.1588359921647.ref@mail.yahoo.com>
Message-ID: <1892120700.539821.1588359921647@mail.yahoo.com>

Hi all,

We have been trying to draft the Ussuri guidelines for the marketplace logo mark as part of the Interop WG. In case anyone is interested in reviewing the draft patch, please feel free to comment with +1 only. The +2's are reserved for Mark Voelker and Egle Sigler, the current and past Vice-Chairs of the Interop WG.

Call for Review: https://review.opendev.org/722137

We are investigating add-ons for OpenStack Edge, Containers and Swift backends. If you have any suggestions, please pitch in. Please add your ideas to the PTG etherpad linked below (June 1st, first 2-hour slot):
https://etherpad.opendev.org/p/interop

Thanks
Prakash

https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com  Fri May  1 19:21:56 2020
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Fri, 1 May 2020 21:21:56 +0200
Subject: [stein][neutron] gratuitous arp
In-Reply-To: <7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com>
References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com>
	<7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com>
	<7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com>
Message-ID: 

Hello Sean,

I hope you'll read this email next week (this is only a report on my testing; I know you are not working now, but I am writing to keep track of the testing).

I tested the patch on stein. I downloaded the new workarounds.py file and copied it onto the controllers under the /usr/lib/python2.7/site-packages/nova/conf directory. I downloaded neutron.py and copied it onto the controllers under the /usr/lib/python2.7/site-packages/nova/network directory.

On each controller I added the following to the [workarounds] section of nova.conf:

force_legacy_port_binding = True

I restarted all nova services.
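[For reference, the configuration part of the steps above amounts to this fragment; a reply later in the thread notes it is the conductor's nova.conf that actually needs it:]

```ini
# nova.conf on the controllers (the conductor is the process that reads it)
[workarounds]
force_legacy_port_binding = True
```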
But when I try a live migration I got the following error in /var/log/nova/nova-api.log:

35f85547c4add392a221af1aab - default default] 10.102.184.193 "GET /v2.1/os-services?binary=nova-compute HTTP/1.1" status: 200 len: 968 time: 0.0685341
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi [req-1a449846-51bf-46e3-afdb-43469a0226ea 0c7a2d6006614fe2b3e81e47377dd2a9 c26f8d35f85547c4add392a221af1aab - default default] Unexpected exception in API method: NoSuchOptError: no such option enable_consoleauth in group [workarounds]
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi Traceback (most recent call last):
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 671, in wrapped
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     return f(*args, **kwargs)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/migrate_server.py", line 141, in _migrate_live
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     async_)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 207, in inner
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     return function(self, context, instance, *args, **kwargs)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 215, in _wrapped
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     return fn(self, context, instance, *args, **kwargs)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 155, in inner
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     return f(self, context, instance, *args, **kw)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 4756, in live_migrate
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     if CONF.cells.enable or CONF.workarounds.enable_consoleauth
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3124, in __getattr__
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     return self._conf._get(name, self._group)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2621, in _get
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     value, loc = self._do_get(name, group, namespace)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2639, in _do_get
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     info = self._get_opt_info(name, group)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2839, in _get_opt_info
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi     raise NoSuchOptError(opt_name, group)
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi NoSuchOptError: no such option enable_consoleauth in group [workarounds]
2020-05-01 19:39:08.140 3225168 ERROR nova.api.openstack.wsgi
2020-05-01 19:39:08.151 3225168 INFO nova.api.openstack.wsgi [req-1a449846-51bf-46e3-afdb-43469a0226ea 0c7a2d6006614fe2b3e81e47377dd2a9 c26f8d35f85547c4add392a221af1aab - default default] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

I checked nova.conf and did not find enable_consoleauth under the workarounds section.

I'll wait for your checks next week. Have fun on the long weekend!

Ignazio

Il giorno ven 1 mag 2020 alle ore 18:47 Sean Mooney ha scritto:

> On Fri, 2020-05-01 at 18:34 +0200, Ignazio Cassano wrote:
> > Hello Sean,
> > to be honest I did not understand what is the difference between the
> > first and second patch, but it is due to my poor skill and my poor
> > english.
>
> no worries. the first patch is the actual change to add the new config
> option. the second patch is just a change to force our ci jobs to
> enable the config option. we probably don't want to do that
> permanently, which is why i have marked it [DNM] or "do not merge", so
> it is just there to prove the first patch is correct.
>
> > Anycase I would like to test it. I saw I can download the files
> > workarounds.py and neutron.py, and there is a new option
> > force_legacy_port_binding.
> > How can I test?
> > Must I enable the new option under the workarounds section in
> > nova.conf on the compute nodes, setting it to true?
> yes, that is correct. if you apply the first patch you need to set the
> new config option in the workarounds section of nova.conf on the
> controller. specifically the conductor needs to have this set. i don't
> think this is needed on the compute nodes; at least it should not need
> to be set in the compute node nova.conf for the live migration issue.
>
> > The files downloaded (from the first or second patch?) must be copied
> > onto the compute nodes under /usr/lib/python2.7/site-packages as
> > nova/conf/workarounds.py and nova/network/neutron.py, and then the
> > nova compute service restarted?
>
> once we have merged this in master i'll backport it to the different
> openstack versions back to rocky. if you want to test it before then,
> the simplest thing to do is just manually make the same change, unless
> you are using devstack, in which case you could cherry-pick the change
> to whatever branch you are testing.
>
> > Should it work only for new instances or also for running instances?
>
> it will apply to all instances. what the change is doing is disabling
> our detection of neutron support for the multiple port binding
> workflow. we still have compatibility code for supporting old versions
> of neutron. we probably should remove that at some point, but when the
> config option is set we will ignore whether you are using old or new
> neutron and just fall back to how we did things before rocky.
>
> in principle that should make live migration have more packet loss,
> but since people have reported it actually fixes the issue in this
> case, i have written the patch so you can opt in to the old behaviour.
>
> if that works for you in your testing, we can continue to keep the
> workaround and old compatibility code until we resolve the issue when
> using the multiple port binding flow.
>
> > Sorry for disturbing.
> don't be sorry, it's fine to ask questions. just be aware it's a long
> weekend, so i will not be working monday, but i should be back on
> tuesday. i'll update the patch then with a release note and a unit
> test, and hopefully i can get some cores to review it.
>
> > Best Regards
> > Ignazio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kecarter at redhat.com  Fri May  1 20:18:13 2020
From: kecarter at redhat.com (Kevin Carter)
Date: Fri, 1 May 2020 15:18:13 -0500
Subject: [tripleo] Container image tooling roadmap
Message-ID: 

Hello Stackers,

As you may have seen, the TripleO project has been testing the idea of building container images using a simplified toolchain [0]. The idea is to build smaller, more easily maintained images to simplify the lives of TripleO consumers. Since TripleO's move to containers, the project has been leveraging Kolla to provide Dockerfiles, and while this has worked, TripleO has created considerable tooling to bend Kolla images to its needs. Sadly this has resulted in an image size explosion and the proliferation of difficult-to-maintain tools, with low bus factors, which are essential to the success of the project. To address the risk centered on the TripleO containerization story, we've drafted a spec [0], which we believe outlines a more sustainable future. In this specification, we've designed a new, much more straightforward approach to building container images for the TripleO project. The "simple container generation" specification does not intend to be a general-purpose tool used to create images for the greater OpenStack community; both Loci and Kolla do that already. This effort is to build containers only for TripleO, using distro-provided repositories with distro-maintained tooling. By focusing only on what we need, we're able to remove all general-purpose assumptions and create a vertically integrated stack resulting in a much smaller surface area.
To highlight how all this works, we've put together several POC changes:

* Role and playbook to implement the Containerfile specification [1].
* Tripleoclient review to interface with the new role and playbook [2].
* Directory structure for variable file layout [3].
* To see how this works using the POC code, building images we've tested in real deployments, please watch the ASCII-cast [4].
* Example configuration files are here [5][6][7].

A few examples of size comparisons between our proposed tooling and the current Kolla-based images [8]:

- base:
  + Kolla: 588 MB
  - new: 211 MB  # based on ubi8, smaller than centos8
- nova-base:
  + Kolla: 1.09 GB
  - new: 720 MB
- nova-libvirt:
  + Kolla: 2.14 GB
  - new: 1.9 GB
- keystone:
  + Kolla: 973 MB
  - new: 532 MB
- memcached:
  + Kolla: 633 MB
  - new: 379 MB

While the links shown are many, the actual volume of the proposed change is small, although the impact is massive:

* With more straightforward-to-understand tools, we'll be able to get broader participation from the TripleO community to maintain our containerization efforts and extend our service capabilities.
* With smaller images, the TripleO community will serve OpenStack deployers and operators better: less bandwidth consumption and faster install times.

We're looking to chart a more reliable path forward, making the TripleO user experience a great one. While the POC changes are feature-complete and functional, more work is needed to create the required variable files; however, should the proposed specification be ratified, we expect to make quick work of what's left. As such, if you would like to be involved or have any feedback on anything presented here, please don't hesitate to reach out.

We aim to provide regular updates regarding our progress on the "Simple container generation" initiative, so stay tuned.
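[Taking the figures above at face value, the raw per-image totals can be summed with a quick sketch (sizes in MB, rounded from the GB values quoted; note that a reply later in this thread argues the comparison should be done on layer deltas rather than raw sizes):]

```shell
# Raw totals for the five images listed above, in MB
# (1.09 GB ~= 1090 MB, 2.14 GB ~= 2140 MB, 1.9 GB ~= 1900 MB).
kolla_total=$((588 + 1090 + 2140 + 973 + 633))
new_total=$((211 + 720 + 1900 + 532 + 379))
echo "kolla: ${kolla_total} MB, new: ${new_total} MB, difference: $((kolla_total - new_total)) MB"
# -> kolla: 5424 MB, new: 3742 MB, difference: 1682 MB
```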
Thanks,

Kevin and Emilien

[0] https://review.opendev.org/#/c/723665/
[1] https://review.opendev.org/#/c/722557/
[2] https://review.opendev.org/#/c/724147/
[3] https://review.opendev.org/#/c/722486/
[4] https://files.macchi.pro:8443/demo-container-images/
[5] http://paste.openstack.org/show/792995/
[6] http://paste.openstack.org/show/792994/
[7] http://paste.openstack.org/show/792993/
[8] https://4ce3fa2efa42bb6b3a69-771991cd07d409aaec3e4ca5eafdd7e0.ssl.cf2.rackcdn.com/724436/2/check/tripleo-build-containers-centos-8/78eefc3/logs/containers-successfully-built.log

Kevin Carter
IRC: kecarter
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ruslanas at lpic.lt  Fri May  1 20:26:34 2020
From: ruslanas at lpic.lt (Ruslanas Gžibovskis)
Date: Fri, 1 May 2020 22:26:34 +0200
Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1
In-Reply-To: 
References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local>
Message-ID: 

Hi all,

In the keepalived container config generation output I find:

        "+ rc=2",
        "+ '[' False = false ']'",
        "+ set -e",
        "+ '[' 2 -ne 2 -a 2 -ne 0 ']'",
        "+ verbosity=",
        "+ verbosity=-v",
        "+ '[' -z '' ']'",
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com  Fri May  1 20:46:34 2020
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 1 May 2020 16:46:34 -0400
Subject: [goals][Drop Python 2.7 Support] Status Report: COMPLETED \o/
In-Reply-To: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com>
References: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com>
Message-ID: <9897445E-9075-4BB6-8FB7-D8EDD1D806ED@doughellmann.com>

> On Apr 30, 2020, at 10:53 PM, Ghanshyam Mann wrote:
>
> Hello Everyone,
>
> Please find the final status for the py2.7 drop goal. Finally, it is COMPLETED!!
> With this (after three continuous cycles), I am taking a break from community goals (at least for the V cycle :))!!
>
> The wiki page is updated: https://wiki.openstack.org/wiki/Python3
> I have added the 'Python3 Only' column here: https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
>
> Completion Report:
> ===============
> * Swift & Storlets are the two projects keeping py2.7 support.
>
> * All other projects have completed the goal and are now python3-only.
>
> * It has been another roller coaster :) and I appreciate everyone who helped in this work, and
> all the projects reviewing the patches on time, with few exceptions :).
>
> * A lot of gate issues occurred and were fixed during this transition, especially making branchless
> tooling test the mixed python versions (stable branch on py2, others on py3).
>
> * I finished the audit for all the projects. The requirements repo will be cleaning up the py2 caps in the Victoria cycle[1].
>
>
> Some ongoing work on deployment project repos or unmaintained repos:
> -------------------------------------------------------------------------------------------
> These are dependent on other work for testing frameworks etc. This can be continued and need
> not be tracked under this goal.
> * python-barbicanclient: https://review.opendev.org/#/c/699096/
> ** This repo seems unmaintained for the last 6 months and the gate is already broken.
> * OpenStack Charms - Most of them merged; a few are waiting for the team to migrate to the Python3 Zaza functional test framework.
> * OpenStack-Ansible - This will finish once the CentOS jobs are migrated to CentOS 8.
> * OpenStack-Helm - No conclusion yet from the Helm team on what else to do for the py2 drop.
>
> -gmann
>

Congratulations!

Moving to python 3 has been a multi-year process, but it shows that we can make significant maintenance-oriented changes like this when the community works together.

Thank you for seeing this last phase of the work through to completion, Ghanshyam.
Doug

From ruslanas at lpic.lt  Fri May  1 21:55:18 2020
From: ruslanas at lpic.lt (Ruslanas Gžibovskis)
Date: Fri, 1 May 2020 23:55:18 +0200
Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1
In-Reply-To: 
References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local>
Message-ID: 

Hi all,

It took me several days to understand what you were saying, Alex and Heng. It looks like in this file, ./common_deploy_steps_tasks.yaml, paunch is not executed, or is executed without logging:

paunch --verbose apply --file /var/lib/tripleo-config/container-startup-config-step_1.json --config-id tripleo_step1 --default-runtime podman --container-log-path /var/log/paunch.log

When I executed it manually, I got some errors: https://pastebin.com/hnmW0rGX

According to some error messages, it might be that it is trying to launch containers with this value for the log file:

... '--log-driver', 'k8s-file', '--log-opt', 'path=/var/log/paunch.log/rabbitmq_init_logs.log' ...
2020-05-01 23:14:48.328 28129 ERROR paunch [ ] stderr: [conmon:e]: Failed to open log file Not a directory
... '--log-driver', 'k8s-file', '--log-opt', 'path=/var/log/paunch.log/mysql_init_logs.log' ...
2020-05-01 23:14:48.812 28129 ERROR paunch [ ] stderr: [conmon:e]: Failed to open log file Not a directory

Also, I see it failed to enable tripleo_memcached in systemctl ("Failed to start memcached container.") and the same for haproxy and keepalived...

Any ideas where to go further?
-------------- next part --------------
An HTML attachment was scrubbed...
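[The `path=/var/log/paunch.log/rabbitmq_init_logs.log` values in the errors above show that a per-container filename is being appended to the --container-log-path value, so that value has to be a directory, while /var/log/paunch.log looks like a regular file. A minimal reproduction of that failure mode (paths are illustrative; a later message in this thread confirms that pointing the log path at a directory made the manual paunch run work):]

```shell
tmp=$(mktemp -d)
touch "$tmp/paunch.log"      # a regular file, like /var/log/paunch.log

# appending a filename to a path that names a file fails with ENOTDIR,
# which is the "Not a directory" failure conmon reports above:
if ! true 2>/dev/null > "$tmp/paunch.log/rabbitmq_init_logs.log"; then
    echo "Not a directory"
fi

# pointing the log path at a directory instead works:
mkdir -p "$tmp/containers"
true > "$tmp/containers/rabbitmq_init_logs.log" && echo "ok"
```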
URL: From smooney at redhat.com Fri May 1 22:06:27 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 01 May 2020 23:06:27 +0100 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: Message-ID: On Fri, 2020-05-01 at 15:18 -0500, Kevin Carter wrote: > Hello Stackers, > > As you may have seen, the TripleO project has been testing the idea of > building container images using a simplified toolchain [0]. The idea is to > build smaller, more easily maintained images to simplify the lives of > TripleO consumers. Since TripleO's move to containers, the project has been > leveraging Kolla to provide Dockerfiles, and while this has worked, TripleO > has created considerable tooling to bend Kolla images to its needs. Sadly > this has resulted in an image size explosion and the proliferation of > difficult to maintain tools, with low bus factors, which are essential to > the success of the project. To address the risk centered around the TripleO > containerization story, we've drafted a spec [0], which we believe outlines > a more sustainable future. In this specification, we've designed a new, > much more straightforward, approach to building container images for the > TripleO project. The "simple container generation," specification does not > intend to be a general-purpose tool used to create images for the greater > OpenStack community, both Loci and Kolla do that already. This effort is to > build containers only for TripleO using distro provided repositories, with > distro maintained tooling. By focusing only on what we need, we're able to > remove all general-purpose assumptions and create a vertically integrated > stack resulting in a much smaller surface area. > > To highlight how all this works, we've put together several POC changes: > * Role and playbook to implement the Containerfile specification [1]. > * Tripleoclient review to interface with the new role and playbook [2]. > * Directory structure for variable file layout [3]. 
> * To see how this works using the POC code, building images we've tested
> in real deployments, please watch the ASCII-cast [4].
> * Example configuration files are here [5][6][7].
>
> A few examples of size comparisons between our proposed tooling versus
> current Kolla based images [8]:
> - base:
>   + Kolla: 588 MB
>   - new: 211 MB # based on ubi8, smaller than centos8

kolla could also use the ubi8 image as a base: you could select centos as
the base image type and then pass the url to the ubi image, and that
should work.

> - nova-base:
>   + Kolla: 1.09 GB
>   - new: 720 MB

unless you are using layers for the new image, keep in mind you have to
subtract the size of the base image from the nova-base image to calculate
how big it actually is, so it's actually only using about 500MB.

if you are using layers for the ubi nova-base, then these images are
actually the same size; the difference is entirely coming from the
reduction in the base image.

> - nova-libvirt:
>   + Kolla: 2.14 GB
>   - new: 1.9 GB

again here you have to do the same arithmetic: 2.14 - 1.09, so this image
is adding 1.05 GB of layers in the kolla case, and the ubi version is
adding 1.2GB, so the ubi image is actually using more space, assuming it's
using layers. if it's not, then it's 1.05GB vs 1.9GB and the kolla image
still comes out better by an even larger margin.

> - keystone:
>   + Kolla: 973 MB
>   - new: 532 MB

again here this is all in the delta of the base image.

> - memcached:
>   + Kolla: 633 MB
>   - new: 379 MB

as is this.

so overall i think the ubi based images are using more or the same space
than the kolla ones; they just have a smaller base image. so rather than
doing this i think it makes more sense to just use the ubi image with the
kolla build system, unless you expect to be able to significantly reduce
the size of the images more.
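[The layer arithmetic above can be made concrete with a quick sketch (sizes in MB, taken from the figures quoted in this thread; under layering, the per-image cost is the delta from the image it builds on, not the raw size):]

```shell
# Layer deltas: subtract the parent image size from each image size.
# base -> nova-base delta:
echo "nova-base delta:    kolla $((1090 - 588)) MB vs new $((720 - 211)) MB"
# nova-base -> nova-libvirt delta:
echo "nova-libvirt delta: kolla $((2140 - 1090)) MB vs new $((1900 - 720)) MB"
# -> nova-base delta:    kolla 502 MB vs new 509 MB
# -> nova-libvirt delta: kolla 1050 MB vs new 1180 MB
```

[Which matches the point being made: once layer sharing is accounted for, the nova-base deltas are roughly equal, and the new nova-libvirt layer is larger.]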
based on size alone i don't see any real benefit so far.

> While the links shown are many, the actual volume of the proposed change
> is small, although the impact is massive:
> * With more straightforward to understand tools, we'll be able to get
> broader participation from the TripleO community to maintain our
> containerization efforts and extend our service capabilities.

given customers have likely built up their own template override files,
unless you also have a way to support that, this is a breaking change to
their workflow.

> * With smaller images, the TripleO community will serve OpenStack
> deployers and operators better; less bandwidth consumption and faster
> install times.

again, when you take into account that kolla uses layers, the images
don't appear to be smaller, and the nova-libvirt image is actually
bigger. kolla gets large space savings by using the copy-on-write layers
that make up docker images to share common code, so you can't just look
at the size of individual images; you have to look at the size of the
total set and compare those.

> We're looking to chart a more reliable path forward, and making the
> TripleO user experience a great one. While the POC changes are
> feature-complete and functional, more work is needed to create the
> required variable files; however, should the proposed specification be
> ratified, we expect to make quick work of what's left. As such, if you
> would like to be involved or have any feedback on anything presented
> here, please don't hesitate to reach out.
>
> We aim to provide regular updates regarding our progress on the "Simple
> container generation" initiative, so stay tuned.

honestly i'm not sure this is a benefit or something that should be done,
but it seems like ye have already decided on a path. note the kolla
images can also be made smaller than they currently are, using
multi-stage builds to remove deps installed as build requirements, or by
reducing the optional dependencies.
many of the deps currently installed are to support the different vendor
backends. if we improve the bindep files in each project to group those
dependencies by the optional backend, kolla could easily be updated to
use bindep for package installation, and the images would get even
smaller.

> Thanks,
>
> Kevin and Emilien
>
> [0] https://review.opendev.org/#/c/723665/
> [1] https://review.opendev.org/#/c/722557/
> [2] https://review.opendev.org/#/c/724147/
> [3] https://review.opendev.org/#/c/722486/
> [4] https://files.macchi.pro:8443/demo-container-images/
> [5] http://paste.openstack.org/show/792995/
> [6] http://paste.openstack.org/show/792994/
> [7] http://paste.openstack.org/show/792993/
> [8] https://4ce3fa2efa42bb6b3a69-771991cd07d409aaec3e4ca5eafdd7e0.ssl.cf2.rackcdn.com/724436/2/check/tripleo-build-containers-centos-8/78eefc3/logs/containers-successfully-built.log
>
> Kevin Carter
> IRC: kecarter

From ruslanas at lpic.lt  Fri May  1 22:32:26 2020
From: ruslanas at lpic.lt (Ruslanas Gžibovskis)
Date: Sat, 2 May 2020 00:32:26 +0200
Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1
In-Reply-To: 
References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local>
Message-ID: 

Hi all,

It was my error that paunch did not launch containers when I executed paunch manually. I have fixed the log path to the containers directory, and it worked. I re-executed the undercloud deploy with --force-stack-update and it failed again at the same step, and still did not exec paunch.
-------------- next part --------------
URL: From gmann at ghanshyammann.com Fri May 1 22:36:12 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 May 2020 17:36:12 -0500 Subject: [goals][Drop Python 2.7 Support] Status Report: COMPLETED \o/ In-Reply-To: <9897445E-9075-4BB6-8FB7-D8EDD1D806ED@doughellmann.com> References: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com> <9897445E-9075-4BB6-8FB7-D8EDD1D806ED@doughellmann.com> Message-ID: <171d26209e1.1191e985a90093.7358777020928423199@ghanshyammann.com> ---- On Fri, 01 May 2020 15:46:34 -0500 Doug Hellmann wrote ---- > > > > On Apr 30, 2020, at 10:53 PM, Ghanshyam Mann wrote: > > > > Hello Everyone, > > > > Please find the final status for the py2.7 drop goal. Finally, it is COMPLETED!! > > With this (after three continuous cycles), I am taking a break from community goals (at least for V cycle :))!!. > > > > Wiki page is updated: https://wiki.openstack.org/wiki/Python3 > > I have added the 'Python3 Only' column here https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > > > Completion Report: > > =============== > > * Swift & Storlets are the two projects keeping py2.7 support. > > > > * All projects have completed the goal and now are python3-only. > > > > * It has been another roller coaster :) and I appreciate everyone helped in this work and > > all the projects reviewing the patches on time with few exceptions :). > > > > * A Lot of gate issues occurred and fixed during this transition especially making branchless > > tooling testing the mixed python version ( stable branch on py2 and other on py3). > > > > * I finished the audit for all the projects. requirement repo will be cleaning up the py2 caps in Victoria cycle[1]. > > > > > > Some ongoing work on deployement projects repo or unmaintained repo: > > ------------------------------------------------------------------------------------------- > > These are dpendendant on other work for testing framework etc. 
This can be continued and need > > not to be tracked under this goal. > > * python-barbicanclient: https://review.opendev.org/#/c/699096/ > > ** This repo seems unmaintained for last 6 months and the gate is already broken. > > * Openstack Charms - Most of them merged, few are waiting for the team to migrate to Python3 Zaza functional test framework. > > * Openstackansible - This will finish once centos jobs will be migrated to Centos8. > > * Openstack-Helm - No conclusion yet from help team on what else to do for py2 drop. > > > > -gmann > > > > Congratulations! > > Moving to python 3 has been a multi-year process, but it shows that we can make significant maintenance-oriented changes like this when the community works together. Indeed, it was a lot of hard work for many years. While updating the wiki page, I read a few old comments, blogs and realized it was not an easy job at all :). > > Thank you for seeing this last phase of the work through to completion, Ghanshyam. > > Doug > > From aschultz at redhat.com Fri May 1 22:56:21 2020 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 1 May 2020 16:56:21 -0600 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: Message-ID: On Fri, May 1, 2020 at 4:12 PM Sean Mooney wrote: > > On Fri, 2020-05-01 at 15:18 -0500, Kevin Carter wrote: > > Hello Stackers, > > > > As you may have seen, the TripleO project has been testing the idea of > > building container images using a simplified toolchain [0]. The idea is to > > build smaller, more easily maintained images to simplify the lives of > > TripleO consumers. Since TripleO's move to containers, the project has been > > leveraging Kolla to provide Dockerfiles, and while this has worked, TripleO > > has created considerable tooling to bend Kolla images to its needs. 
Sadly > > this has resulted in an image size explosion and the proliferation of > > difficult to maintain tools, with low bus factors, which are essential to > > the success of the project. To address the risk centered around the TripleO > > containerization story, we've drafted a spec [0], which we believe outlines > > a more sustainable future. In this specification, we've designed a new, > > much more straightforward, approach to building container images for the > > TripleO project. The "simple container generation," specification does not > > intend to be a general-purpose tool used to create images for the greater > > OpenStack community, both Loci and Kolla do that already. This effort is to > > build containers only for TripleO using distro provided repositories, with > > distro maintained tooling. By focusing only on what we need, we're able to > > remove all general-purpose assumptions and create a vertically integrated > > stack resulting in a much smaller surface area. > > > > To highlight how all this works, we've put together several POC changes: > > * Role and playbook to implement the Containerfile specification [1]. > > * Tripleoclient review to interface with the new role and playbook [2]. > > * Directory structure for variable file layout [3]. > > * To see how this works using the POC code, building images we've tested in > > real deployments, please watch the ASCII-cast [4]. > > * Example configuration file examples are here [5][6][7]. > > > > A few examples of size comparisons between our proposed tooling versus > > current Kolla based images [8]: > > - base: > > + Kolla: 588 MB > > - new: 211 MB # based on ubi8, smaller than centos8 > kolla could also use the ubi8 image as a base you could select centos as the base image type and then pass the url to > the ubi image and that should would work. ubi8 is smaller, but it doesn't account for it all. 
We're likely not importing some other deps that make it into the normal base that perhaps we don't need. I'd still like to see rpm diffs to better understand if this savings is real. > > - nova-base: > > + Kolla: 1.09 GB > > - new: 720 MB > unless you are using layers for the new image, keep in mind you have to subtract the size of the base image from the nova > base image to calculate how big it actually is, so it's actually only using about 500MB > > if you are using layers for the ubi nova-base then these images are actually the same size; the difference is entirely coming > from the reduction in the base image > > - nova-libvirt: > > + Kolla: 2.14 GB > > - new: 1.9 GB > again here you have to do the same arithmetic, so 2.14-1.09: this image is adding 1.05 GB of layers in the kolla case > and the ubi version is adding 1.2GB, so the ubi image is actually using more space assuming it's using layers; if it's not, then > it's 1.05GB vs 1.9GB and the kolla image still comes out better by an even larger margin > > - keystone: > > + Kolla: 973 MB > > - new: 532 MB > > again here this is all in the delta of the base image > > - memcached: > > + Kolla: 633 MB > > - new: 379 MB > > as is this > > so over all i think the ubi-based images are using more or the same space as the kolla ones. > they just have a smaller base image. so rather than doing this i think it makes more sense to just use the ubi image > with the kolla build system unless you expect to be able to significantly reduce the size of the images more. > > based on size alone i don't see any real benefit so far > > > > While the links shown are many, the actual volume of the proposed change is > > small, although the impact is massive: > > > * With more straightforward to understand tools, we'll be able to get > > broader participation from the TripleO community to maintain our > > containerization efforts and extend our service capabilities.
> given customers likely have built up their own template override files; unless you also have a way to support that, this is > a breaking change to their workflow. > > * With smaller images, the TripleO community will serve OpenStack deployers > > and operators better; less bandwidth consumption and faster install times. > again when you take into account that kolla uses layers the images don't appear to be smaller and the nova-libvirt > image is actually bigger. kolla gets large space savings by using the copy-on-write layers that make up docker images > to share common code, so you can't just look at the size of individual images; you have to look at the size of the total set > and compare those. > > > > We're looking to chart a more reliable path forward, and making the TripleO > > user experience a great one. While the POC changes are feature-complete and > > functional, more work is needed to create the required variable files; > > however, should the proposed specification be ratified, we expect to make > > quick work of what's left. As such, if you would like to be involved or > > have any feedback on anything presented here, please don't hesitate to > > reach out. > > > > We aim to provide regular updates regarding our progress on the "Simple > > container generation" initiative, so stay tuned. > honestly I'm not sure this is a benefit or something that should be done, but it seems like ye have already decided on a > path. note the kolla images can also be made smaller than they currently are using multi-stage builds to remove any deps > installed for build requirements or reducing the optional dependencies. > > many of the deps currently installed are to support the different vendor backends. > if we improve the bindep files in each project to group those dependencies by the optional backend, kolla could easily be > updated to use bindep for package installation, and the images would get even smaller.
The issues that we (tripleo) have primarily run into are the expectations around versions (rabbitmq being the latest) and being able to deploy via source. Honestly, if tripleo was going to support installations running from source or alternative distros (neither of which we currently plan to do), it would likely make sense to continue down the Kolla path. However, we already end up doing so many overrides to the standard Kolla container configurations[0] that I'm not sure it really makes sense to continue. Additionally, since we no longer support building via docker, we're basically using kolla as a glorified template engine to give us Dockerfiles. The proposal is to stop using Kolla as a glorified templating engine and actually just manage one that fits our needs. We're using a very specific path through the Kolla code and I'm uncertain if it's beneficial for either of us anymore. This also frees up kolla + kolla-ansible to improve their integration and likely make some of the tougher choices that they've brought up in the other email about the future of kolla. Personally I see this move as freeing up Kolla to be able to innovate and TripleO being able to simplify. As one of the few people who know how the container sausage is made in the TripleO world, I think it's likely for the best.
[0] https://opendev.org/openstack/tripleo-common/src/branch/master/container-images/tripleo_kolla_template_overrides.j2 > > > > Thanks, > > > > Kevin and Emilien > > > > [0] https://review.opendev.org/#/c/723665/ > > [1] https://review.opendev.org/#/c/722557/ > > [2] https://review.opendev.org/#/c/724147/ > > [3] https://review.opendev.org/#/c/722486/ > > [4] https://files.macchi.pro:8443/demo-container-images/ > > [5] http://paste.openstack.org/show/792995/ > > [6] http://paste.openstack.org/show/792994/ > > [7] http://paste.openstack.org/show/792993/ > > [8] > > > https://4ce3fa2efa42bb6b3a69-771991cd07d409aaec3e4ca5eafdd7e0.ssl.cf2.rackcdn.com/724436/2/check/tripleo-build-containers-centos-8/78eefc3/logs/containers-successfully-built.log > > > > Kevin Carter > > IRC: kecarter > > From kecarter at redhat.com Fri May 1 23:18:05 2020 From: kecarter at redhat.com (Kevin Carter) Date: Fri, 1 May 2020 18:18:05 -0500 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: Message-ID: On Fri, May 1, 2020 at 5:06 PM Sean Mooney wrote: > On Fri, 2020-05-01 at 15:18 -0500, Kevin Carter wrote: > > Hello Stackers, > > > > As you may have seen, the TripleO project has been testing the idea of > > building container images using a simplified toolchain [0]. The idea is > to > > build smaller, more easily maintained images to simplify the lives of > > TripleO consumers. Since TripleO's move to containers, the project has > been > > leveraging Kolla to provide Dockerfiles, and while this has worked, > TripleO > > has created considerable tooling to bend Kolla images to its needs. Sadly > > this has resulted in an image size explosion and the proliferation of > > difficult to maintain tools, with low bus factors, which are essential to > > the success of the project. To address the risk centered around the > TripleO > > containerization story, we've drafted a spec [0], which we believe > outlines > > a more sustainable future. 
In this specification, we've designed a new, > > much more straightforward, approach to building container images for the > > TripleO project. The "simple container generation," specification does > not > > intend to be a general-purpose tool used to create images for the greater > > OpenStack community, both Loci and Kolla do that already. This effort is > to > > build containers only for TripleO using distro provided repositories, > with > > distro maintained tooling. By focusing only on what we need, we're able > to > > remove all general-purpose assumptions and create a vertically integrated > > stack resulting in a much smaller surface area. > > > > To highlight how all this works, we've put together several POC changes: > > * Role and playbook to implement the Containerfile specification [1]. > > * Tripleoclient review to interface with the new role and playbook [2]. > > * Directory structure for variable file layout [3]. > > * To see how this works using the POC code, building images we've tested > in > > real deployments, please watch the ASCII-cast [4]. > > * Example configuration file examples are here [5][6][7]. > > > > A few examples of size comparisons between our proposed tooling versus > > current Kolla based images [8]: > > - base: > > + Kolla: 588 MB > > - new: 211 MB # based on ubi8, smaller than centos8 > kolla could also use the ubi8 image as a base you could select centos as > the base image type and then pass the url to > the ubi image and that should would work. 
> > - nova-base: > > + Kolla: 1.09 GB > > - new: 720 MB > unless you are using layers for the new image, keep in mind you have to > subtract the size of the base image from the nova > base image to calculate how big it actually is, so it's actually only using about > 500MB > > if you are using layers for the ubi nova-base then these images are actually > the same size; the difference is entirely coming > from the reduction in the base image > > - nova-libvirt: > > + Kolla: 2.14 GB > > - new: 1.9 GB > again here you have to do the same arithmetic, so 2.14-1.09: this image > is adding 1.05 GB of layers in the kolla case > and the ubi version is adding 1.2GB, so the ubi image is actually using more > space assuming it's using layers; if it's not, then > it's 1.05GB vs 1.9GB and the kolla image still comes out better by an even > larger margin > > - keystone: > > + Kolla: 973 MB > > - new: 532 MB > > again here this is all in the delta of the base image > > - memcached: > > + Kolla: 633 MB > > - new: 379 MB > > as is this > You are correct that the size of each application is smaller due to the layers at play; this holds true for both Kolla and the new image build process we're using for comparison. The figures here are the total size as reported by something like a `(podman || docker) image list`. While the benefits of a COW'ing file system are not represented in these figures, rest assured we've made good use of layering techniques to ensure optimal image sizes. > so over all i think the ubi-based images are using more or the same space > as the kolla ones. > they just have a smaller base image. so rather than doing this i think it > makes more sense to just use the ubi image > with the kolla build system unless you expect to be able to significantly > reduce the size of the images more.
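The layer arithmetic being debated here can be sanity-checked directly from the totals quoted in this thread (a minimal sketch; the MB values below are the figures reported above, and per-image totals from `podman images` count shared layers in every image's size):

```shell
# Reported totals from the thread, in MB; a child image's own contribution
# is its reported size minus its base image's reported size.
base=588
nova_base=1090       # ~1.09 GB
nova_libvirt=2140    # ~2.14 GB

echo "nova-base adds $((nova_base - base)) MB on top of base"            # 502
echo "nova-libvirt adds $((nova_libvirt - nova_base)) MB on top of it"   # 1050
```

For actual deduplicated on-disk usage, which is what the copy-on-write argument is really about, `podman system df` (or `docker system df`) counts shared layers once.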
> based on size alone i don't see any real benefit so far > > > > While the links shown are many, the actual volume of the proposed change > is > > small, although the impact is massive: > > > * With more straightforward to understand tools, we'll be able to get > > broader participation from the TripleO community to maintain our > > containerization efforts and extend our service capabilities. > given customers likely have built up their own template override files > unless you also have a way to support that, this is > a breaking change to their workflow. > > * With smaller images, the TripleO community will serve OpenStack > deployers > > and operators better; less bandwidth consumption and faster install > times. > again when you take into account that kolla uses layers the images don't > appear to be smaller and the nova-libvirt > image is actually bigger. kolla gets large space savings by using the copy-on-write > layers that make up docker images > to share common code, so you can't just look at the size of individual > images; you have to look at the size of the total set > and compare those. > So far we've not encountered a single image using Kolla that is smaller, and we've tested both CentOS8 and UBI8 as a starting point. In all cases we've been able to produce smaller containers to the tune of hundreds of megabytes saved per application without feature gaps (as it pertains to TripleO); granted, the testing has not been exhaustive. The noted UBI8 starting point was chosen because it produced the greatest savings.
As such, if you would like to be involved or > > have any feedback on anything presented here, please don't hesitate to > > reach out. > > > > We aim to provide regular updates regarding our progress on the "Simple > > container generation" initiative, so stay tuned. > honestly I'm not sure this is a benefit or something that should be done, > but it seems like ye have already decided on a > path. note the kolla images can also be made smaller than they currently are > using multi-stage builds to remove any deps > installed for build requirements or reducing the optional dependencies. > > many of the deps currently installed are to support the different vendor > backends. > if we improve the bindep files in each project to group those dependencies > by the optional backend, kolla could easily be > updated to use bindep for package installation, and the images would get > even smaller.
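For reference, the bindep grouping suggested above might look roughly like this in a project's bindep.txt; the profile names here are hypothetical illustrations, not profiles that exist today:

```
# bindep.txt sketch: put vendor/backend packages behind optional profiles
# so an image build can pull in only what its backend actually needs.
libvirt-daemon-kvm [platform:rpm compute]
ceph-common        [platform:rpm ceph]
openvswitch        [platform:rpm ovs]
```

An image build could then run something like `bindep -b compute` to list only the packages that profile requires.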
URL: From kecarter at redhat.com Fri May 1 23:20:54 2020 From: kecarter at redhat.com (Kevin Carter) Date: Fri, 1 May 2020 18:20:54 -0500 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: Message-ID: Kevin Carter IRC: kecarter On Fri, May 1, 2020 at 5:57 PM Alex Schultz wrote: > On Fri, May 1, 2020 at 4:12 PM Sean Mooney wrote: > > > > On Fri, 2020-05-01 at 15:18 -0500, Kevin Carter wrote: > > > Hello Stackers, > > > > > > As you may have seen, the TripleO project has been testing the idea of > > > building container images using a simplified toolchain [0]. The idea > is to > > > build smaller, more easily maintained images to simplify the lives of > > > TripleO consumers. Since TripleO's move to containers, the project has > been > > > leveraging Kolla to provide Dockerfiles, and while this has worked, > TripleO > > > has created considerable tooling to bend Kolla images to its needs. > Sadly > > > this has resulted in an image size explosion and the proliferation of > > > difficult to maintain tools, with low bus factors, which are essential > to > > > the success of the project. To address the risk centered around the > TripleO > > > containerization story, we've drafted a spec [0], which we believe > outlines > > > a more sustainable future. In this specification, we've designed a new, > > > much more straightforward, approach to building container images for > the > > > TripleO project. The "simple container generation," specification does > not > > > intend to be a general-purpose tool used to create images for the > greater > > > OpenStack community, both Loci and Kolla do that already. This effort > is to > > > build containers only for TripleO using distro provided repositories, > with > > > distro maintained tooling. By focusing only on what we need, we're > able to > > > remove all general-purpose assumptions and create a vertically > integrated > > > stack resulting in a much smaller surface area. 
> > > > > > To highlight how all this works, we've put together several POC > changes: > > > * Role and playbook to implement the Containerfile specification [1]. > > > * Tripleoclient review to interface with the new role and playbook [2]. > > > * Directory structure for variable file layout [3]. > > > * To see how this works using the POC code, building images we've > tested in > > > real deployments, please watch the ASCII-cast [4]. > > > * Example configuration files are here [5][6][7]. > > > > > > A few examples of size comparisons between our proposed tooling versus > > > current Kolla based images [8]: > > > - base: > > > + Kolla: 588 MB > > > - new: 211 MB # based on ubi8, smaller than centos8 > > kolla could also use the ubi8 image as a base you could select centos as > the base image type and then pass the url to > > the ubi image and that should work. > > > ubi8 is smaller, but it doesn't account for it all. We're likely not > importing some other deps that make it into the normal base that > perhaps we don't need. I'd still like to see rpm diffs to better > understand if this savings is real. > +1 I think it would be a good exercise to produce an RPM diff for a set of > images. Maybe just the ones we've already ported?
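A minimal sketch of such an RPM diff (the image tags below are placeholders for illustration, not the real tags; substitute whatever was actually built):

```shell
# Dump a sorted package-name list from each image (tags are hypothetical).
podman run --rm example/kolla-keystone:current rpm -qa --qf '%{NAME}\n' |
  sort > kolla-keystone.txt
podman run --rm example/new-keystone:current rpm -qa --qf '%{NAME}\n' |
  sort > new-keystone.txt

# comm -23 prints lines unique to the first file: packages present only in
# the Kolla image, i.e. the candidates explaining the size delta.
comm -23 kolla-keystone.txt new-keystone.txt
```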
> > > > - nova-base: > > > + Kolla: 1.09 GB > > > - new: 720 MB > > unless you are using layers fo rthe new iamge keep in mind you have to > subtract the size of the base imag form the nova > > base image to caluate how big it actully is so its acully only usein > about 500MB > > > > if you are using layser for the ubi nova-base then these iamge are > actully the same size the diffenrec is entirly coming > > for the reduction in the base image > > > - nova-libvirt: > > > + Kolla: 2.14 GB > > > - new: 1.9 GB > > again here you have to do the same arithmatich so 2.14-1.09 so this > image is adding 1.05 GB of layers in the kolla case > > and the ubi version is adding 1.2GB so the ubi image is acully using > more space assumign tis using layer if tis not then > > its 1.05GB vs 1.9GB and the kolla image still comes out better by an > even larger margin > > > - keystone: > > > + Kolla: 973 MB > > > - new: 532 MB > > > > again here this si all in the deleta of the base image > > > - memcached: > > > + Kolla: 633 MB > > > - new: 379 MB > > > > as is this > > > > so over all i think the ubi based iamges are using more or the same > space then the kolla ones. > > they just have a smaller base image. so rather then doing this i think > it makes more sense to just use the ubi iamge > > with the kolla build system unless you expect be be able to signicicalty > reduce the size of the images more. > > > > based on size alone i tone se anny really benifit so far > > > > > > While the links shown are many, the actual volume of the proposed > change is > > > small, although the impact is massive: > > > > > * With more straightforward to understand tools, we'll be able to get > > > broader participation from the TripleO community to maintain our > > > containerization efforts and extend our service capabilities. > > given customer likely have built up ther own template override files > unless you also have a way to support that this is > > a breaking change to there workflow. 
> > > * With smaller images, the TripleO community will serve OpenStack > deployers > > > and operators better; less bandwidth consumption and faster install > times. > > again when you take into the account that kolla uses layers the image > dont appear to be smaller and the nova-libvert > > image is actully bigger kolla gets large space saving buy using the copy > on write layers that make up docker images > > to share common code so you can just look a the size fo indivigual > iamages you have look at the size of the total set > > and compare those. > > > > > > > > We're looking to chart a more reliable path forward, and making the > TripleO > > > user experience a great one. While the POC changes are > feature-complete and > > > functional, more work is needed to create the required variable files; > > > however, should the proposed specification be ratified, we expect to > make > > > quick work of what's left. As such, if you would like to be involved or > > > have any feedback on anything presented here, please don't hesitate to > > > reach out. > > > > > > We aim to provide regular updates regarding our progress on the "Simple > > > container generation" initiative, so stay tuned. > > honestly im not sure this is a benifty or something that should be done > but it seam like ye have already decieded on a > > path. not the kolla image can also be made smaller then they currently > are using multi state builds to remove and deps > > installed for buidl requirement or reducing the optional depencies. > > > > many for the deps currently installed are to supprot the different > vender backedns. > > if we improve the bindep files in each project too group those depencies > by the optional backedn kolla could easily be > > updated to use bindep for package installation. and the image would get > even smaller. 
> > The issues that we (tripleo) have primarily run into are the > expectations around versions (rabbitmq being the latest) and being > able to deploy via source. Honestly if tripleo was going to support > installations running from source or alternative distros (neither of which we > currently plan to do), it would likely make sense to continue down the > Kolla path. However we already end up doing so many overrides to the > standard Kolla container configurations[0] that I'm not sure it really > makes sense to continue. Additionally since we no longer support > building via docker, we're basically using kolla as a glorified > template engine to give us Dockerfiles. The proposal is to stop using > Kolla as a glorified templating engine and actually just manage one > that fits our needs. We're using a very specific path through the > Kolla code and I'm uncertain if it's beneficial for either of us > anymore. This also frees up kolla + kolla-ansible to improve their > integration and likely make some of the tougher choices > that they've brought up in the other email about the future of kolla. > > Personally I see this move as freeing up Kolla to be able to innovate > and TripleO being able to simplify. As one of the few people who know > how the container sausage is made in the TripleO world, I think it's > likely for the best.
> > [0] > https://opendev.org/openstack/tripleo-common/src/branch/master/container-images/tripleo_kolla_template_overrides.j2 > > > > > > Thanks, > > > Kevin and Emilien > > > > > > [0] https://review.opendev.org/#/c/723665/ > > > [1] https://review.opendev.org/#/c/722557/ > > > [2] https://review.opendev.org/#/c/724147/ > > > [3] https://review.opendev.org/#/c/722486/ > > > [4] https://files.macchi.pro:8443/demo-container-images/ > > > [5] http://paste.openstack.org/show/792995/ > > > [6] http://paste.openstack.org/show/792994/ > > > [7] http://paste.openstack.org/show/792993/ > > > [8] > > > > > > https://4ce3fa2efa42bb6b3a69-771991cd07d409aaec3e4ca5eafdd7e0.ssl.cf2.rackcdn.com/724436/2/check/tripleo-build-containers-centos-8/78eefc3/logs/containers-successfully-built.log > > > > > > Kevin Carter > > > IRC: kecarter > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri May 1 23:43:35 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 May 2020 18:43:35 -0500 Subject: [all][qa] Gate Status: week 20-18 Message-ID: <171d29fbacc.1167add3f90534.8359030808030287527@ghanshyammann.com> Hello Everyone, I am highlighting a few of the gate issues we fixed or have in progress recently. 1. Ubuntu xenial node jobs on the stein-onwards stable gates or 3rd party CI. Tempest 23.0.0 is the last version supporting py2.7 and py3.5, and we use that version to test the stable/rocky gate (a py3.5 env) and any py2.7 job on the stein or train gate. It was fixed on many projects when the gate started failing with master Tempest. Last week, when the new Tempest version 24.0.0 was released and updated in the master upper constraints, we found xenial node jobs still running on the stein gate (for Octavia and Ironic at least). Fixing those is not easy; it needs overriding the constraint for that particular job. Octavia and Ironic removed the xenial jobs, which is the right step.
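For a third-party CI that cannot move off Xenial yet, one possible stopgap (untested, and hypothetical in its details) is to pin the constraints used when installing Tempest to a branch that still supports that environment:

```shell
# Point the constraints at stable/rocky so pip resolves Tempest and its
# dependencies against versions that still support the older environment.
export UPPER_CONSTRAINTS_FILE=https://releases.openstack.org/constraints/upper/rocky
echo "installing with constraints: $UPPER_CONSTRAINTS_FILE"
# then, e.g.: pip install -c "$UPPER_CONSTRAINTS_FILE" tempest
```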
But any 3rd party CIs not migrated to Bionic on stein onwards will stop working now. One workaround may be to override the constraint to stable/rocky via the env var UPPER_CONSTRAINTS_FILE, but I have not verified whether that works. If it does not, moving your CI to Bionic is the way forward. I am trying a new approach: cap the constraint in tox.ini itself whenever a new Tempest version is released, which avoids hacking around it via UPPER_CONSTRAINTS_FILE. For example, the Tempest 24.0.0 tag's tox.ini uses the stable/ussuri constraints. 2. Bug#1876330: devstack-plugin-ceph-tempest-py3 failing for the rescue BFV instance test. New tests added for Nova's rescue BFV instance feature failed on the ceph job. lyarwood is working on the bug, but for now we have taken the step of skipping the tests for ceph. Any new tests which do not work on ceph need to be added to the ceph blacklist file until the bug is fixed. The job is green now; if it was failing on your gate where it runs as voting, please recheck. - https://review.opendev.org/#/c/724866/ I have also proposed making the ceph job voting on the Tempest side (it was non-voting until now) so that we can take these steps well before merging new tests on the Tempest gate. - https://review.opendev.org/#/c/724948/1 -gmann From gmann at ghanshyammann.com Sat May 2 00:39:21 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 May 2020 19:39:21 -0500 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan Message-ID: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> Hello Everyone, Finally, after so many cycles, the grenade base job has been migrated to zuulv3 native [1]. This is merged on the Ussuri branch. Thanks to tosky for keeping at this and finishing it!! The new jobs 'grenade', 'grenade-multinode', 'grenade-postgresql' and 'grenade-forward' are zuulv3 native now, and we kept the old job name 'grenade-py3' as an alias to 'grenade' so that it would not break gates using 'grenade-py3'.
* Start using this new base job for your project-specific grenade jobs. * Migration to the new job name: the integrated templates and projects using the old job name 'grenade-py3' need to move to the new job name. The job is defined in the integrated template and also in the project's zuul.yaml for irrelevant-files, so we need to switch both places at the same time; otherwise you will see two jobs running on your gate (on master as well as on Ussuri). The integrated service-specific templates have switched to the new job [2], which means you might see the two jobs 'grenade' & 'grenade-py3' running on your gate until you change .zuul.yaml. Example: Nova did it for the master as well as the Ussuri gate - https://review.opendev.org/#/q/I212692905a1d645cd911c2a161c13c794c0e0f4d It needs to be done on Ussuri also, as Tempest (where the integrated templates are present) is branchless and applies the change to the Ussuri job as well. [1] https://review.opendev.org/#/c/548936/ [2] https://review.opendev.org/#/c/722551/ -gmann From emilien at redhat.com Sat May 2 05:44:55 2020 From: emilien at redhat.com (Emilien Macchi) Date: Sat, 2 May 2020 01:44:55 -0400 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: Message-ID: Sean, The size isn't the primary concern here, just a "small" bonus eventually. Kolla doesn't support ubi8; yes, we could have done it, but again this is going to be a bunch of work (like it was for RHEL8/centos8) that I'm not sure is worth it if the only consumers are TripleO folks at this point. What Alex described is our major motivation to go down that path. To be fully transparent with people outside of Red Hat: there are currently 3 extra layers on top of vanilla kolla images so we can use them downstream. I'm one of the folks who actually maintain them upstream and downstream, and I'm tired of solving the problem multiple times at different levels. Our proposal is going to make it extremely simple: one YAML file per image with no downstream version of it.
No extra overrides; no complications. One file, consumed upstream and downstream, everywhere. As for customers/partners/third parties, it'll be very easy for them to create their own images. The new interface is basically the Dockerfile one, and we'll make sure this is well documented with proper examples (e.g. neutron drivers, etc.). We have a strong desire to collaborate with the community; for example, there is potential work to do on the container image gating with Zuul and such. However, based on our usage of Kolla, I believe that it is time to go. On Fri, May 1, 2020 at 7:27 PM Kevin Carter wrote: > > > > Kevin Carter > IRC: kecarter > > > On Fri, May 1, 2020 at 5:57 PM Alex Schultz wrote: > >> On Fri, May 1, 2020 at 4:12 PM Sean Mooney wrote: >> > >> > On Fri, 2020-05-01 at 15:18 -0500, Kevin Carter wrote: >> > > Hello Stackers, >> > > >> > > As you may have seen, the TripleO project has been testing the idea of >> > > building container images using a simplified toolchain [0]. The idea >> is to >> > > build smaller, more easily maintained images to simplify the lives of >> > > TripleO consumers. Since TripleO's move to containers, the project >> has been >> > > leveraging Kolla to provide Dockerfiles, and while this has worked, >> TripleO >> > > has created considerable tooling to bend Kolla images to its needs. >> Sadly >> > > this has resulted in an image size explosion and the proliferation of >> > > difficult to maintain tools, with low bus factors, which are >> essential to >> > > the success of the project. To address the risk centered around the >> TripleO >> > > containerization story, we've drafted a spec [0], which we believe >> outlines >> > > a more sustainable future. In this specification, we've designed a >> new, >> > > much more straightforward, approach to building container images for >> the >> > > TripleO project.
The "simple container generation," specification >> does not >> > > intend to be a general-purpose tool used to create images for the >> greater >> > > OpenStack community, both Loci and Kolla do that already. This effort >> is to >> > > build containers only for TripleO using distro provided repositories, >> with >> > > distro maintained tooling. By focusing only on what we need, we're >> able to >> > > remove all general-purpose assumptions and create a vertically >> integrated >> > > stack resulting in a much smaller surface area. >> > > >> > > To highlight how all this works, we've put together several POC >> changes: >> > > * Role and playbook to implement the Containerfile specification [1]. >> > > * Tripleoclient review to interface with the new role and playbook >> [2]. >> > > * Directory structure for variable file layout [3]. >> > > * To see how this works using the POC code, building images we've >> tested in >> > > real deployments, please watch the ASCII-cast [4]. >> > > * Example configuration file examples are here [5][6][7]. >> > > >> > > A few examples of size comparisons between our proposed tooling versus >> > > current Kolla based images [8]: >> > > - base: >> > > + Kolla: 588 MB >> > > - new: 211 MB # based on ubi8, smaller than centos8 >> > kolla could also use the ubi8 image as a base you could select centos >> as the base image type and then pass the url to >> > the ubi image and that should would work. >> >> >> ubi8 is smaller, but it doesn't account for it all. We're likely not >> importing some other deps that make it into the normal base that >> perhaps we don't need. I'd still like to see rpm diffs to better >> understand if this savings is real. >> > > +1 I think it would be a good exercise to produce an RPM diff for a set of > images. Maybe just the ones we've already ported? 
> > >> >> > > - nova-base: >> > > + Kolla: 1.09 GB >> > > - new: 720 MB >> > unless you are using layers for the new image, keep in mind you have to >> subtract the size of the base image from the nova >> > base image to calculate how big it actually is, so it's actually only using >> about 500MB >> > >> > if you are using layers for the ubi nova-base then these images are >> actually the same size; the difference is entirely coming >> > from the reduction in the base image >> > > - nova-libvirt: >> > > + Kolla: 2.14 GB >> > > - new: 1.9 GB >> > again here you have to do the same arithmetic, so 2.14-1.09: this >> image is adding 1.05 GB of layers in the kolla case >> > and the ubi version is adding 1.2GB, so the ubi image is actually using >> more space assuming it's using layers; if it's not, then >> > it's 1.05GB vs 1.9GB and the kolla image still comes out better by an >> even larger margin >> > > - keystone: >> > > + Kolla: 973 MB >> > > - new: 532 MB >> > >> > again here this is all in the delta of the base image >> > > - memcached: >> > > + Kolla: 633 MB >> > > - new: 379 MB >> > >> > as is this >> > >> > so overall i think the ubi-based images are using more or the same >> space as the kolla ones. >> > they just have a smaller base image. so rather than doing this i think >> it makes more sense to just use the ubi image >> > with the kolla build system, unless you expect to be able to >> significantly reduce the size of the images more. >> > >> > based on size alone i don't see any real benefit so far >> > > >> > > While the links shown are many, the actual volume of the proposed >> change is >> > > small, although the impact is massive: >> > >> > > * With more straightforward-to-understand tools, we'll be able to get >> > > broader participation from the TripleO community to maintain our >> > > containerization efforts and extend our service capabilities.
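The layer arithmetic Sean applies above can be checked with a small standalone sketch. The figures are the ones quoted in the thread; the helper itself is purely illustrative and is not part of any proposed tooling:

```python
def effective_layer_size(image_gb, base_gb):
    """Size a child image actually adds on top of its shared base layer."""
    return round(image_gb - base_gb, 2)

# Figures quoted in the thread, in GB.
kolla = {"nova-base": 1.09, "nova-libvirt": 2.14}
new = {"nova-base": 0.72, "nova-libvirt": 1.9}

# nova-libvirt on top of nova-base: Kolla adds 1.05 GB of layers while the
# proposed image adds 1.18 GB -- the apparent saving is mostly the smaller
# base image, which is Sean's point.
print(effective_layer_size(kolla["nova-libvirt"], kolla["nova-base"]))  # 1.05
print(effective_layer_size(new["nova-libvirt"], new["nova-base"]))      # 1.18
```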
>> > given customers likely have built up their own template override files, >> unless you also have a way to support that, this is >> > a breaking change to their workflow. >> > > * With smaller images, the TripleO community will serve OpenStack >> deployers >> > > and operators better; less bandwidth consumption and faster install >> times. >> > again, when you take into account that kolla uses layers, the images >> don't appear to be smaller and the nova-libvirt >> > image is actually bigger. kolla gets large space savings by using the >> copy-on-write layers that make up docker images >> > to share common code, so you can't just look at the size of individual >> images; you have to look at the size of the total set >> > and compare those. >> > >> > > >> > > We're looking to chart a more reliable path forward, and make the >> TripleO >> > > user experience a great one. While the POC changes are >> feature-complete and >> > > functional, more work is needed to create the required variable files; >> > > however, should the proposed specification be ratified, we expect to >> make >> > > quick work of what's left. As such, if you would like to be involved >> or >> > > have any feedback on anything presented here, please don't hesitate to >> > > reach out. >> > > >> > > We aim to provide regular updates regarding our progress on the >> "Simple >> > > container generation" initiative, so stay tuned. >> > honestly i'm not sure this is a benefit or something that should be done, >> but it seems like ye have already decided on a >> > path. note the kolla images can also be made smaller than they currently >> are using multi-stage builds to remove any deps >> > installed for build requirements, or by reducing the optional dependencies. >> > >> > many of the deps currently installed are to support the different >> vendor backends.
>> > if we improve the bindep files in each project to group those >> dependencies by the optional backend, kolla could easily be >> > updated to use bindep for package installation, and the images would get >> even smaller. >> >> The issues that we (tripleo) have primarily run into are the >> expectations around versions (rabbitmq being the latest) and being >> able to deploy via source. Honestly if tripleo was going to support >> installations running from source or alternative distros (neither of which we >> currently plan to do), it would likely make sense to continue down the >> Kolla path. However we already end up doing so many overrides to the >> standard Kolla container configurations[0] that I'm not sure it really >> makes sense to continue. Additionally since we no longer support >> building via docker, we're basically using kolla as a glorified >> template engine to give us Dockerfiles. The proposal is to stop using >> Kolla as a glorified templating engine and actually just manage one >> that fits our needs. We're using a very specific path through the >> Kolla code and I'm uncertain if it's beneficial for either of us >> anymore. This also frees up kolla + kolla-ansible to improve their >> integration and likely be able to make some of the tougher choices >> that they've brought up in the other email about the future of kolla. >> >> Personally I see this move as freeing up Kolla to be able to innovate >> and TripleO being able to simplify. As one of the few people who know >> how the container sausage is made in the TripleO world, I think it's >> likely for the best.
>> >> [0] >> https://opendev.org/openstack/tripleo-common/src/branch/master/container-images/tripleo_kolla_template_overrides.j2 >> >> > > >> > > Thanks, >> > > >> > > Kevin and Emilien >> > > >> > > [0] https://review.opendev.org/#/c/723665/ >> > > [1] https://review.opendev.org/#/c/722557/ >> > > [2] https://review.opendev.org/#/c/724147/ >> > > [3] https://review.opendev.org/#/c/722486/ >> > > [4] https://files.macchi.pro:8443/demo-container-images/ >> > > [5] http://paste.openstack.org/show/792995/ >> > > [6] http://paste.openstack.org/show/792994/ >> > > [7] http://paste.openstack.org/show/792993/ >> > > [8] >> > > >> > >> https://4ce3fa2efa42bb6b3a69-771991cd07d409aaec3e4ca5eafdd7e0.ssl.cf2.rackcdn.com/724436/2/check/tripleo-build-containers-centos-8/78eefc3/logs/containers-successfully-built.log >> > > >> > > Kevin Carter >> > > IRC: kecarter >> > >> > >> >> -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Sat May 2 08:16:29 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Sat, 2 May 2020 10:16:29 +0200 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: Message-ID: <2bc49cc3-b020-5972-c728-a27e5319bfd8@linaro.org> W dniu 02.05.2020 o 01:18, Kevin Carter pisze: > So far we've not encountered a single image using Kolla that is smaller and > we've tested both CentOS8 and UBI8 as a starting point. In all cases we've > been able to produce smaller containers to the tune of hundreds of > megabytes saved per-application without feature gaps (as it pertains to > TripleO), granted, the testing has not been exhaustive. The noted UBI8 > starting point was chosen because it produced the greatest savings. Can you share what was tested? I wonder how many of those can be applied to Kolla to get some savings here and there. 
registry.access.redhat.com/ubi8/ubi latest 8e0b0194b7e1 9 days ago 204MB
centos 8 470671670cac 3 months ago 237MB

A 33 MB difference in base image size does not look like much. From amotoki at gmail.com Sat May 2 08:24:51 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Sat, 2 May 2020 17:24:51 +0900 Subject: [goals][Drop Python 2.7 Support] Status Report: COMPLETED \o/ In-Reply-To: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com> References: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com> Message-ID: This is a great achievement as a community. Thank you Ghanshyam for your great leadership, continuous effort and patience. Congrats everyone involved with this goal. -amotoki On Fri, May 1, 2020 at 11:57 AM Ghanshyam Mann wrote: > > Hello Everyone, > > Please find the final status for the py2.7 drop goal. Finally, it is COMPLETED!! > With this (after three continuous cycles), I am taking a break from community goals (at least for the V cycle :))!! > > Wiki page is updated: https://wiki.openstack.org/wiki/Python3 > I have added the 'Python3 Only' column here https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > Completion Report: > =============== > * Swift & Storlets are the two projects keeping py2.7 support. > > * All projects have completed the goal and now are python3-only. > > * It has been another roller coaster :) and I appreciate everyone who helped in this work and > all the projects reviewing the patches on time, with few exceptions :). > > * A lot of gate issues occurred and were fixed during this transition, especially making branchless > tooling test the mixed python versions (stable branch on py2 and others on py3). > > * I finished the audit for all the projects. The requirements repo will be cleaning up the py2 caps in the Victoria cycle[1].
> > > Some ongoing work on deployment projects repos or unmaintained repos: > ------------------------------------------------------------------------------------------- > These are dependent on other work (testing framework, etc.). This can be continued and need > not be tracked under this goal. > * python-barbicanclient: https://review.opendev.org/#/c/699096/ > ** This repo seems unmaintained for the last 6 months and the gate is already broken. > * Openstack Charms - Most of them merged, a few are waiting for the team to migrate to the Python3 Zaza functional test framework. > * Openstackansible - This will finish once the centos jobs are migrated to CentOS8. > * Openstack-Helm - No conclusion yet from the Helm team on what else to do for the py2 drop. > > -gmann > From ignaziocassano at gmail.com Sat May 2 08:43:26 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 2 May 2020 10:43:26 +0200 Subject: [stein][neutron] gratuitous arp In-Reply-To: <7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com> Message-ID: Hello Sean, I modified the workarounds.py file to add the consoleauth code, so now it does not return errors during the live migration phase, as I wrote in my last email. Keep in mind my stein is from an upgrade. Sorry if I am not sending all the email history here, but if the message body is too big the email needs moderator approval. In any case, I added the following code: cfg.BoolOpt( 'enable_consoleauth', default=False, deprecated_for_removal=True, deprecated_since="18.0.0", deprecated_reason=""" This option has been added as deprecated originally because it is used for avoiding a upgrade issue and it will not be used in the future. See the help text for more details. """, help=""" Enable the consoleauth service to avoid resetting unexpired consoles.
Console token authorizations have moved from the ``nova-consoleauth`` service to the database, so all new consoles will be supported by the database backend. With this, consoles that existed before database backend support will be reset. For most operators, this should be a minimal disruption as the default TTL of a console token is 10 minutes. Operators that have much longer token TTL configured or otherwise wish to avoid immediately resetting all existing consoles can enable this flag to continue using the ``nova-consoleauth`` service in addition to the database backend. Once all of the old ``nova-consoleauth`` supported console tokens have expired, this flag should be disabled. For example, if a deployment has configured a token TTL of one hour, the operator may disable the flag, one hour after deploying the new code during an upgrade. .. note:: Cells v1 was not converted to use the database backend for console token authorizations. Cells v1 console token authorizations will continue to be supported by the ``nova-consoleauth`` service and use of the ``[workarounds]/enable_consoleauth`` option does not apply to Cells v1 users. Related options: * ``[consoleauth]/token_ttl`` """), Now the live migration starts and the instance is moved, but it continues to be unreachable after live migration. It starts to respond only when it initiates a connection itself (for example, polling an NTP server). If I disable chrony in the instance, it stops responding forever. Best Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ruslanas at lpic.lt Sat May 2 11:23:41 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 2 May 2020 13:23:41 +0200 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: Hi all, After executing: [stack at remote-u undercloud-ansible-CxDDOO]$ sudo ansible-playbook-2 -vvvv -i inventory.yaml deploy_steps_playbook.yaml not sure, who has no attribute: AttributeError: 'module' object has no attribute 'load_config'\n" TASK [Start containers for step 1 using paunch] ***************************************************************************************************************************** task path: /home/stack/builddir/a/undercloud-ansible-CxDDOO/common_deploy_steps_tasks.yaml:257 Saturday 02 May 2020 13:11:01 +0200 (0:00:00.165) 0:04:57.422 ********** Using module file /usr/share/ansible/plugins/modules/paunch.py Pipelining is enabled. 
<10.120.129.222> ESTABLISH LOCAL CONNECTION FOR USER: root <10.120.129.222> EXEC /bin/sh -c 'TRIPLEO_MINOR_UPDATE=False /usr/bin/python2 && sleep 0' ok: [remote-u] => { "changed": false, "failed_when_result": false, "module_stderr": "Traceback (most recent call last):\n File \"\", line 114, in \n File \"\", line 106, in _ansiballz_main\n File \"\", line 49, in invoke_module\n File \"/tmp/ansible_paunch_payload_8IPdGs/__main__.py\", line 250, in \n File \"/tmp/ansible_paunch_payload_8IPdGs/__main__.py\", line 246, in main\n File \"/tmp/ansible_paunch_payload_8IPdGs/__main__.py\", line 172, in __init__\nAttributeError: 'module' object has no attribute 'load_config'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } TASK [Debug output for task: Start containers for step 1] ******************************************************************************************************************* task path: /home/stack/builddir/a/undercloud-ansible-CxDDOO/common_deploy_steps_tasks.yaml:275 Saturday 02 May 2020 13:11:02 +0200 (0:00:00.901) 0:04:58.323 ********** fatal: [remote-u]: FAILED! => { "failed_when_result": true, "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] } NO MORE HOSTS LEFT ********************************************************************************************************************************************************** PLAY RECAP ****************************************************************************************************************************************************************** remote-u : ok=242 changed=78 unreachable=0 failed=1 skipped=94 rescued=0 ignored=2 On Sat, 2 May 2020 at 00:32, Ruslanas Gžibovskis wrote: > Hi all, > > it was my error, that paunch did not launch containers when executed > paunch manually. 
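An `AttributeError: 'module' object has no attribute ...` raised from inside an Ansible module usually means the library it imports is installed at a version that does not expose the API the module expects. One quick, generic way to probe for that kind of mismatch is sketched below; the attribute names are deliberately illustrative (probing the stdlib `json` module), not paunch's real API surface:

```python
import importlib

def missing_attrs(module_name, expected):
    """Return the names in `expected` that the installed module lacks."""
    mod = importlib.import_module(module_name)
    return [name for name in expected if not hasattr(mod, name)]

# Illustration only: one real attribute and one made-up one;
# 'load_config' comes back as missing.
print(missing_attrs("json", ["loads", "load_config"]))  # ['load_config']
```

Running the same probe against the installed library on the undercloud would quickly confirm whether the failure is a version mismatch rather than a playbook bug.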
> I have fixed the log path to the containers dir, and it worked; re-executed > undercloud deploy again with --force-stack-update > > and it failed again, same step, and still did not exec paunch. > > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Sat May 2 12:12:27 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 2 May 2020 14:12:27 +0200 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: Hi all, Just for your info: the solution was: su -c "yum downgrade tripleo-ansible.noarch" yum list | grep -i tripleo-ansible tripleo-ansible.noarch 0.4.1-1.el7 @centos-openstack-train tripleo-ansible.noarch 0.5.0-1.el7 centos-openstack-train What is wrong with *tripleo-ansible.noarch 0.5.0-1.el7 centos-openstack-train* ??? -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat May 2 13:37:14 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 2 May 2020 13:37:14 +0000 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: Message-ID: <20200502133714.awqr4e35u5feotin@yuggoth.org> On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote: > As you may have seen, the TripleO project has been testing the > idea of building container images using a simplified toolchain [...] Is there an opportunity to collaborate around the proposed plan for publishing basic docker-image-based packages for OpenStack services? https://review.opendev.org/720107 Obviously you're aiming at solving this for a comprehensive deployment rather than at a packaging level, just wondering if there's a way to avoid having an explosion of different images for the same services if they could ultimately use the same building blocks.
(A cynical part of me worries that distro "party lines" will divide folks on what the source of underlying files going into container images should be, but I'm sure our community is better than that, after all we're all in this together.) Either way, if they can both make use of the same speculative container building workflow pioneered in Zuul/OpenDev, that seems like a huge win (and I gather the Kolla "krew" are considering redoing their CI jobs along those same lines as well). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Sat May 2 13:39:31 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 2 May 2020 13:39:31 +0000 Subject: [goals][Drop Python 2.7 Support] Status Report: COMPLETED \o/ In-Reply-To: <9897445E-9075-4BB6-8FB7-D8EDD1D806ED@doughellmann.com> References: <171ce27ae80.c0ca7c0366175.5456871336254934108@ghanshyammann.com> <9897445E-9075-4BB6-8FB7-D8EDD1D806ED@doughellmann.com> Message-ID: <20200502133931.xbwzjhwijzazw4va@yuggoth.org> On 2020-05-01 16:46:34 -0400 (-0400), Doug Hellmann wrote: [...] > Thank you for seeing this last phase of the work through to > completion, Ghanshyam. Absolutely, many thanks to Ghanshyam, but also to you Doug for getting this rolling to begin with, and to everyone else who pitched in along the way! It takes a village. ;) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ignaziocassano at gmail.com Sat May 2 15:40:08 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 2 May 2020 17:40:08 +0200 Subject: [stein][neutron] gratuitous arp In-Reply-To: References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com> Message-ID: Hello Sean, I am continuing my tests (so you'll have to read a lot :-) ). If I understood well, the file neutron.py contains a patch for /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py for reading the configuration option force_legacy_port_binding. If it is true, it returns false. I patched the api.py and, by inserting a LOG.info call, I saw that it reads the variable but it seems to do nothing, and the migrated instance stops responding. Best Regards Ignazio On Sat, 2 May 2020 at 10:43, Ignazio Cassano < ignaziocassano at gmail.com> wrote: > Hello Sean, > I modified the workarounds.py file to add the consoleauth code, so now it does > not return errors during the live migration phase, as I wrote in my last > email. > Keep in mind my stein is from an upgrade. > Sorry if I am not sending all the email history here, but if the message body is > too big the email needs moderator approval. > In any case, I added the following code: > > cfg.BoolOpt( > 'enable_consoleauth', > default=False, > deprecated_for_removal=True, > deprecated_since="18.0.0", > deprecated_reason=""" > This option has been added as deprecated originally because it is used > for avoiding a upgrade issue and it will not be used in the future. > See the help text for more details. > """, > help=""" > Enable the consoleauth service to avoid resetting unexpired consoles.
> > Console token authorizations have moved from the ``nova-consoleauth`` > service > to the database, so all new consoles will be supported by the database > backend. > With this, consoles that existed before database backend support will be > reset. > For most operators, this should be a minimal disruption as the default TTL > of a > console token is 10 minutes. > > Operators that have much longer token TTL configured or otherwise wish to > avoid > immediately resetting all existing consoles can enable this flag to > continue > using the ``nova-consoleauth`` service in addition to the database backend. > Once all of the old ``nova-consoleauth`` supported console tokens have > expired, > this flag should be disabled. For example, if a deployment has configured a > token TTL of one hour, the operator may disable the flag, one hour after > deploying the new code during an upgrade. > > .. note:: Cells v1 was not converted to use the database backend for > console token authorizations. Cells v1 console token authorizations will > continue to be supported by the ``nova-consoleauth`` service and use of > the ``[workarounds]/enable_consoleauth`` option does not apply to > Cells v1 users. > > Related options: > > * ``[consoleauth]/token_ttl`` > """), > > Now the live migration starts and the instance is moved, but it > continues to be unreachable after live migration. > It starts to respond only when it initiates a connection itself (for example, > polling an NTP server). > If I disable chrony in the instance, it stops responding forever. > Best Regards > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Arkady.Kanevsky at dell.com Sun May 3 00:58:30 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Sun, 3 May 2020 00:58:30 +0000 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> Message-ID: Great accomplishment. -----Original Message----- From: Ghanshyam Mann Sent: Friday, May 1, 2020 7:39 PM To: openstack-discuss Subject: [all][qa] grenade zuulv3 native job available now and its migration plan [EXTERNAL EMAIL] Hello Everyone, Finally, after so many cycles, grenade base job has been migrated to zuulv3 native[1]. This is merged in the Ussuri branch. Thanks to tosky for keep working on this and finish it!! New jobs 'grenade', 'grenade-multinode', 'grenade-postgresql' and 'grenade-forward' are zuulv3 native now and we kept old job name 'grenade-py3' as an alias to 'grenade' so that it would not break the gate using 'grenade-py3'. * Start using this new base job for your projects specific grenade jobs. * Migration to new job name: The integrated template and projects are using old job name 'grenade-py3' need to move to a new job name. Job is defined in the integrated template and also in the project's zuul.yaml for irrelevant-files. So we need to switch both places at the same time otherwise you will be seeing the two jobs running on your gate (master as well as on Ussuri). Integrated service specific template has switched to new job[2] which means you might see two jobs running 'grenade' & 'grenade-py3' running on your gate until you change .zuul.yaml. Example: Nova did for master as well as Ussuri gate - https://review.opendev.org/#/q/I212692905a1d645cd911c2a161c13c794c0e0f4d It needs to be done in Ussuri also as Tempest where integrated templates are present is branchless and apply the change for Ussuri job also. 
[1] https://review.opendev.org/#/c/548936/ [2] https://review.opendev.org/#/c/722551/ -gmann From Arkady.Kanevsky at dell.com Sun May 3 00:59:42 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Sun, 3 May 2020 00:59:42 +0000 Subject: [InteropWG] Draft Guidelines for 2020.06 Ussuri core and addons for orchestration and dns In-Reply-To: <1892120700.539821.1588359921647@mail.yahoo.com> References: <1892120700.539821.1588359921647.ref@mail.yahoo.com> <1892120700.539821.1588359921647@mail.yahoo.com> Message-ID: <9deb084f3bfe4cf09372f367de19502e@AUSX13MPS308.AMER.DELL.COM> Nice progress Prakash From: prakash RAMCHANDRAN Sent: Friday, May 1, 2020 2:05 PM To: openstack-discuss at lists.openstack.org Subject: [InteropWG] Draft Guidelines for 2020.06 Ussuri core and addons for orchestration and dns [EXTERNAL EMAIL] Hi all, We have been trying to draft the Ussuri guidelines API for the market logo mark as part of the Interop WG. In case anyone is interested in reviewing the draft patch, please feel free to comment with +1 only. The +2's are reserved for Mark Voelker & Egle Sigler, current Vice Chair and past Vice-Chairs of the Interop WG. Call for Review: https://review.opendev.org/722137 We are investigating add-ons for OpenStack Edge, Containers and Swift backends. If you have any suggestions please pitch in. Please add your ideas to the PTG etherpad linked in the following (June 1st - first 2-hour slot): https://etherpad.opendev.org/p/interop Thanks Prakash https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From smooney at redhat.com Sun May 3 01:31:41 2020 From: smooney at redhat.com (Sean Mooney) Date: Sun, 03 May 2020 02:31:41 +0100 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <20200502133714.awqr4e35u5feotin@yuggoth.org> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> Message-ID: <0c13833ade6abda7838ea3aeb9a5c15faac94c81.camel@redhat.com> On Sat, 2020-05-02 at 13:37 +0000, Jeremy Stanley wrote: > On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote: > > As you may have seen, the TripleO project has been testing the > > idea of building container images using a simplified toolchain > [...] > > Is there an opportunity to collaborate around the proposed plan for > publishing basic docker-image-based packages for OpenStack services? > > https://review.opendev.org/720107 assuming that goes ahead, then i guess that would be one path forward to avoid yet another set of openstack images, as i share your concern, although i don't think the technical case has been made that ooo should replace the kolla images with a new set or that the proposed cross-project goal should be accepted. > > Obviously you're aiming at solving this for a comprehensive > deployment rather than at a packaging level, just wondering if > there's a way to avoid having an explosion of different images for > the same services if they could ultimately use the same building > blocks. (A cynical part of me worries that distro "party lines" will > divide folks on what the source of underlying files going into > container images should be, but I'm sure our community is better > than that, after all we're all in this together.) > > Either way, if they can both make use of the same speculative by the way, what do you define as speculative image building? if it's just building container images for git repos prepared by zuul, then kolla source images could trivially support that.
kolla supports building images for git repos, so you can just override the source location for each image to point to the zuul-cloned git repos. i have not really followed why https://review.opendev.org/720107 is somehow unique in being able to support that? obviously, without a way to rebuild distro packages, we can't build kolla binary images speculatively, but kolla source images, which just pip install the different services into a virtual env, could totally support depends-on and other features we get for free in the devstack jobs by virtue of using the source repos prepared by zuul.
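For instance, kolla-build's per-image source overrides take a `type`/`location`/`reference` triple in its config file, so a job could point every image at the zuul-prepared checkouts. A sketch of rendering such a fragment — the repo paths here are assumptions for illustration, not something any current job does:

```python
import configparser
import io

def zuul_source_overrides(images, src_root="/home/zuul/src/opendev.org/openstack"):
    """Render a kolla-build.conf fragment pointing each image's source
    at a locally cloned git repo instead of a released tarball."""
    cfg = configparser.ConfigParser()
    for image, repo in images.items():
        cfg[image] = {
            "type": "git",
            "location": "%s/%s" % (src_root, repo),
            "reference": "HEAD",
        }
    buf = io.StringIO()
    cfg.write(buf)  # serialize to ini text
    return buf.getvalue()

print(zuul_source_overrides({"nova-base": "nova", "neutron-base": "neutron"}))
```

The resulting text could then be passed to `kolla-build --config-file` so the source images are built from the exact commits under test.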
> > * Migration to new job name: > The integrated template and projects are using old job name 'grenade-py3' need to move to > a new job name. Job is defined in the integrated template and also in the project's zuul.yaml for irrelevant-files. > So we need to switch both places at the same time otherwise you will be seeing the two jobs running on your gate > (master as well as on Ussuri). > > Integrated service specific template has switched to new job[2] which means you might see two jobs running > 'grenade' & 'grenade-py3' running on your gate until you change .zuul.yaml. Example: Nova did for master as well as Ussuri gate - https://review.opendev.org/#/q/I212692905a1d645cd911c2a161c13c794c0e0f4d > > It needs to be done in Ussuri also as Tempest where integrated templates are present is branchless and apply the change for Ussuri job also. > > [1] https://review.opendev.org/#/c/548936/ > [2] https://review.opendev.org/#/c/722551/ > > -gmann > -- Slawek Kaplonski Senior software engineer Red Hat From fungi at yuggoth.org Sun May 3 13:07:30 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 3 May 2020 13:07:30 +0000 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <0c13833ade6abda7838ea3aeb9a5c15faac94c81.camel@redhat.com> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <0c13833ade6abda7838ea3aeb9a5c15faac94c81.camel@redhat.com> Message-ID: <20200503130729.tiuchhtwmgyfuqxz@yuggoth.org> On 2020-05-03 02:31:41 +0100 (+0100), Sean Mooney wrote: [...] > i have not really followed why https://review.opendev.org/720107 > is somehow unique in being able to support that? obviously without > a way to rebuild distroy packages we build kolla binary images > speculativly. but kolla source images which just pip install the > different service into a virtual env could totally support > depens-on and other featuer we get for free in the devstack jobs > by virtue of using the soruce repos prepared by zuul. [...] 
"Build speculatively" in this case meaning the jobs are designed to be able to use depends-on to test with layers built from changes in other projects which also haven't merged yet, incorporating changes to images which are queued ahead of them in the gate pipeline, and so on. That is, jobs can make use of OpenDev's buildset and intermediate registry proxies to pull images from other jobs in the same buildset or in (implicit or explicit) dependencies rather than only using images built in the job itself or temporarily uploading them somewhere public like tarballs.o.o, dockerhub, et cetera. https://docs.opendev.org/opendev/base-jobs/latest/docker-image.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zigo at debian.org Sun May 3 19:16:37 2020 From: zigo at debian.org (Thomas Goirand) Date: Sun, 3 May 2020 21:16:37 +0200 Subject: [puppet] Configuring Debian's uwsgi for each service Message-ID: <9f524f08-9f03-f670-b5bc-47806b0a7f60@debian.org> Hi, In Debian all service APIs are by default using uwsgi. This means that there's no reason to setup Apache for the API, as uwsgi comes by default. However, there are a number of things that operators may want to configure, like for example, the number of workers. This is why I started this PR: https://review.opendev.org/725065 Since Tobias asked for more details about what I was doing, I thought it was a good idea to start a thread in the list, rather than just discussing in the patch review. So, what I did for Neutron is only the beginning. If it works as I expect, I intend to generalize this on all OpenStack services supported by Debian. This means at least: - nova - keystone - glance - cinder - barbican - neutron - aodh - panko - gnocchi - etc. 
The first step is having this neutron_uwsgi_config provider up and running, then I'll see how to integrate with the rest of puppet-openstack and have things like number of workers integrated with ::api stuff... Your thoughts? Cheers, Thomas Goirand (zigo) From aschultz at redhat.com Sun May 3 19:26:49 2020 From: aschultz at redhat.com (Alex Schultz) Date: Sun, 3 May 2020 13:26:49 -0600 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <20200502133714.awqr4e35u5feotin@yuggoth.org> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> Message-ID: On Sat, May 2, 2020 at 7:45 AM Jeremy Stanley wrote: > > On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote: > > As you may have seen, the TripleO project has been testing the > > idea of building container images using a simplified toolchain > [...] > > Is there an opportunity to collaborate around the proposed plan for > publishing basic docker-image-based packages for OpenStack services? > > https://review.opendev.org/720107 > > Obviously you're aiming at solving this for a comprehensive > deployment rather than at a packaging level, just wondering if > there's a way to avoid having an explosion of different images for > the same services if they could ultimately use the same building > blocks. (A cynical part of me worries that distro "party lines" will > divide folks on what the source of underlying files going into > container images should be, but I'm sure our community is better > than that, after all we're all in this together.) > I think this assumes we want an all-in-one system to provide containers. And we don't. That I think is the missing piece that folks don't understand about containers and what we actually need. 
I believe the issue is that the overall process to go from zero to an application in the container is something like the following:

1) input image (centos/ubi8/ubuntu/clear/whatever)
2) Packaging method for the application (source/rpm/dpkg/magic)
3) dependencies provided depending on items #1 & #2 (venv/rpm/dpkg/RDO/ubuntu-cloud/custom)
4) layer dependency declaration (base -> nova-base -> nova-api, nova-compute, etc)
5) How configurations are provided to the application (at run time or at build time)
6) How the application is invoked when the container is ultimately launched (via docker/podman/k8s/etc)
7) Container build method (docker/buildah/other)

The answer to each one of these is dependent on the expectations of the user or application consuming these containers. Additionally this has to be declared for each dependent application as well (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost because it needs to support any number of combinations for each of these. Today TripleO doesn't use the build method provided by Kolla anymore because we no longer support docker. This means we only use Kolla to generate Dockerfiles as inputs to other processes. It should be noted that we also only want Dockerfiles for the downstream, because they get rebuilt with yet another different process. So for us, we don't want the container; we want a method for generating the contents of the container. IMHO containers are just glorified packaging (yet again, and one that lacks ways of expressing dependencies, which is really not beneficial for OpenStack). I do not believe you can or should try to unify the entire container declaration and building into a single application. You could rally around a few different sets of tooling that could provide you the pieces for consumption, e.g. a container file templating engine, a building engine, and a way of expressing/consuming configuration+execution information.
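To illustrate the "container file templating engine" piece as a separable building block, here is a minimal stdlib-Python sketch; the template text, function names, and package lists are invented for the example, and this is not TripleO's or Kolla's actual tooling:

```python
from string import Template

# Illustrative only: a tiny "container file templating engine" in the
# sense described above; it knows nothing about building or distribution.
DOCKERFILE_TEMPLATE = Template("""\
FROM $base_image
RUN $install_cmd $packages
""")

def render_containerfile(base_image, install_cmd, packages):
    """Render a container file from the per-distro inputs: base image,
    packaging method, and dependency list."""
    return DOCKERFILE_TEMPLATE.substitute(
        base_image=base_image,
        install_cmd=install_cmd,
        packages=" ".join(packages),
    )

# The same template serves an RPM-based and a pip-based variant:
rpm_variant = render_containerfile("centos:8", "dnf install -y",
                                   ["openstack-nova-api"])
pip_variant = render_containerfile("python:3.8-slim", "pip install",
                                   ["nova"])
```

Kolla's Dockerfile generation plays this role today with Jinja2 at much larger scale; the point of the sketch is only that generation can stand apart from building and distribution.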
I applaud the desire to try and unify all the things, but as we've seen time and time again when it comes to deployment, configuration and use cases, trying to solve for all the things ends up having a negative effect on the UX because of the complexity required to handle all the cases (look at tripleo for crying out loud). I believe it's time to stop trying to solve all the things with a giant hammer and work on a bunch of smaller nails and let folks construct their own hammer. Thanks, -Alex > Either way, if they can both make use of the same speculative > container building workflow pioneered in Zuul/OpenDev, that seems > like a huge win (and I gather the Kolla "krew" are considering > redoing their CI jobs along those same lines as well). > -- > Jeremy Stanley From tobias.urdin at binero.com Sun May 3 20:17:56 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Sun, 3 May 2020 20:17:56 +0000 Subject: [puppet] Configuring Debian's uwsgi for each service In-Reply-To: <9f524f08-9f03-f670-b5bc-47806b0a7f60@debian.org> References: <9f524f08-9f03-f670-b5bc-47806b0a7f60@debian.org> Message-ID: <1588537076621.82076@binero.com> Hello Thomas, Thanks for starting this thread! I've been thinking about this as well, though for another reason: how we can continue to abstract the service layer used in the Puppet modules. The most deployed way right now is the ::wsgi::apache one (using puppetlabs-apache). Do you think there is a way we could move uWSGI into its own ::wsgi::uwsgi layer and then use an upstream uWSGI module to do the actual configuration? uWSGI isn't in the OpenStack scope, the same as the Apache configuration, hence the usage of puppetlabs-apache. I've been thinking about a similar concept with other service layers, like Docker with Kolla containers etc, even though it's kind of out of scope for the Puppet OpenStack project. Just thoughts.
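As a rough sketch, such a ::wsgi::uwsgi layer could take a shape like the following; the class and parameter names are invented for illustration, with $::os_workers being the fact the existing modules already use for worker defaults:

```puppet
# Hypothetical ::wsgi::uwsgi abstraction, mirroring the existing
# ::wsgi::apache layer; none of these names are an existing interface.
class neutron::wsgi::uwsgi (
  $processes = $::os_workers,
  $threads   = 1,
) {
  neutron_uwsgi_config {
    'uwsgi/processes': value => $processes;
    'uwsgi/threads':   value => $threads;
  }
}
```

The wrapper class would keep the service modules agnostic about the WSGI runner, delegating actual uWSGI configuration to whatever provider or upstream module sits underneath.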
Best regards Tobias ________________________________________ From: Thomas Goirand Sent: Sunday, May 3, 2020 9:16 PM To: openstack-discuss Subject: [puppet] Configuring Debian's uwsgi for each service Hi, In Debian all service APIs are by default using uwsgi. This means that there's no reason to setup Apache for the API, as uwsgi comes by default. However, there are a number of things that operators may want to configure, like for example, the number of workers. This is why I started this PR: https://review.opendev.org/725065 Since Tobias asked for more details about what I was doing, I thought it was a good idea to start a thread in the list, rather than just discussing in the patch review. So, what I did for Neutron is only the beginning. If it works as I expect, I intend to generalize this on all OpenStack services supported by Debian. This means at least: - nova - keystone - glance - cinder - barbican - neutron - aodh - panko - gnocchi - etc. The first step is having this neutron_uwsgi_config provider up and running, then I'll see how to integrate with the rest of puppet-openstack and have things like number of workers integrated with ::api stuff... Your thoughts? Cheers, Thomas Goirand (zigo) From tobias.urdin at binero.com Sun May 3 20:30:21 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Sun, 3 May 2020 20:30:21 +0000 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: <20200502133714.awqr4e35u5feotin@yuggoth.org>, Message-ID: <1588537822054.1034@binero.com> Interesting discussion going on in this thread! From a mostly operator viewpoint (and a CentOS-only shop) we've closely been monitoring which way Red Hat's enterprise-based products have been going, and have been planning for a while to explore the Kolla-based containers route due to the backing of Red Hat's usage.
Now based on TripleO's use case it's understandable to make their tooling easier, but I think this also starts rooting into the OpenStack official containers and "make OpenStack more application like" ideas that Mohammed Naser was digging into with his TC application. Operating an OpenStack deployment today is hard, but much better and more enterprise-ready than before; upgrading OpenStack now is a dream (jumped Rocky -> Train just some weeks ago) compared to before, but my opinion is that we are missing a lot of view on OpenStack as an application today. It would be sad to see Red Hat's involvement in Kolla scale down. Just my 2c, which probably was mostly off-topic for TripleO, my apologies for that. Best regards Tobias ________________________________________ From: Alex Schultz Sent: Sunday, May 3, 2020 9:26 PM To: Jeremy Stanley Cc: OpenStack Discuss Subject: Re: [tripleo] Container image tooling roadmap On Sat, May 2, 2020 at 7:45 AM Jeremy Stanley wrote: > > On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote: > > As you may have seen, the TripleO project has been testing the > > idea of building container images using a simplified toolchain > [...] > > Is there an opportunity to collaborate around the proposed plan for > publishing basic docker-image-based packages for OpenStack services? > > https://review.opendev.org/720107 > > Obviously you're aiming at solving this for a comprehensive > deployment rather than at a packaging level, just wondering if > there's a way to avoid having an explosion of different images for > the same services if they could ultimately use the same building > blocks. (A cynical part of me worries that distro "party lines" will > divide folks on what the source of underlying files going into > container images should be, but I'm sure our community is better > than that, after all we're all in this together.) > I think this assumes we want an all-in-one system to provide containers. And we don't.
That I think is the missing piece that folks don't understand about containers and what we actually need. I believe the issue is that the overall process to go from zero to an application in the container is something like the following: 1) input image (centos/ubi0/ubuntu/clear/whatever) 2) Packaging method for the application (source/rpm/dpkg/magic) 3) dependencies provided depending on item #1 & 2 (venv/rpm/dpkg/RDO/ubuntu-cloud/custom) 4) layer dependency declaration (base -> nova-base -> nova-api, nova-compute, etc) 5) How configurations are provided to the application (at run time or at build) 6) How application is invoked when container is ultimately launched (via docker/podman/k8s/etc) 7) Container build method (docker/buildah/other) The answer to each one of these is dependent on the expectations of the user or application consuming these containers. Additionally this has to be declared for each dependent application as well (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost because it needs to support any number of combinations for each of these. Today TripleO doesn't use the build method provided by Kolla anymore because we no longer support docker. This means we only use Kolla to generate Dockerfiles as inputs to other processes. It should be noted that we also only want Dockerfiles for the downstream because they get rebuilt with yet another different process. So for us, we don't want the container and we want a method for generating the contents of the container. IMHO containers are just glorified packaging (yet again and one that lacks ways of expressing dependencies which is really not beneficial for OpenStack). I do not believe you can or should try to unify the entire container declaration and building into a single application. You could rally around a few different sets of tooling that could provide you the pieces for consumption. e.g. 
A container file templating engine, a building engine, and a way of expressing/consuming configuration+execution information. I applaud the desire to try and unify all the things, but as we've seen time and time again when it comes to deployment, configuration and use cases. Trying to solve for all the things ends up having a negative effect on the UX because of the complexity required to handle all the cases (look at tripleo for crying out loud). I believe it's time to stop trying to solve all the things with a giant hammer and work on a bunch of smaller nails and let folks construct their own hammer. Thanks, -Alex > Either way, if they can both make use of the same speculative > container building workflow pioneered in Zuul/OpenDev, that seems > like a huge win (and I gather the Kolla "krew" are considering > redoing their CI jobs along those same lines as well). > -- > Jeremy Stanley From pierre at stackhpc.com Sun May 3 21:12:34 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Sun, 3 May 2020 23:12:34 +0200 Subject: [blazar] IRC meeting cancelled on May 5 Message-ID: Hello, Due to a scheduling conflict, I will not be able to chair the Blazar IRC meeting on Tuesday May 5. I propose to cancel and meet as usual on May 12. Best wishes, Pierre Riteau (priteau) From aschultz at redhat.com Sun May 3 21:32:50 2020 From: aschultz at redhat.com (Alex Schultz) Date: Sun, 3 May 2020 15:32:50 -0600 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <1588537822054.1034@binero.com> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <1588537822054.1034@binero.com> Message-ID: On Sun, May 3, 2020 at 2:35 PM Tobias Urdin wrote: > > Interesting discussion going on in this thread! > > From a mostly operator viewpoint (and CentOS-only shop) we've closely been monitoring what way > Red Hat's enterprise based products has been taking and been planing for a while to explore the > Kolla-based containers route due to the backing of Red Hat's usage. 
> > Now based on TripleO's usecase it's understandable to make their tooling easier but I think this > also starts rooting into the OpenStack official containers and "make OpenStack more application like" > that Mohammed Naser was digging into with his TC application. > The crux of all of this is what is the official base and software packaging/distribution method. "Official Images" are nice but likely can't be consumed by a number of folks due to the previous two items. Whenever folks are talking containers they should s/container/package/ and ask the same questions. Providing "Official Images" seems to go against the original intent of letting folks build what they need as they need it. Having lived in the deployment of openstack for 5 years now, I can tell you just having a thing doesn't solve anything when it comes to OpenStack. Not really certain what the intent of this effort is. I guess I should go read up on it. > Operating an OpenStack deployment today is hard, but much better and enterprise than before, > upgrading OpenStack now is a dream (jumped Rocky -> Train just some weeks ago) compared to > before but my opinion is that we are missing a lot of view on OpenStack as an application today. > > It would be sad to see Red Hat's involvement in Kolla scale down. > Just my 2c which probably was mostly offtopic from TripleO, my apologies for that. > There hasn't really been much involvement in Kolla in some time. It was mostly static unless we found a bug or need to add some new packages/containers. Dependencies get pulled in mostly by way of the rpms that are installed so with the exception of building, there isn't much to do anymore. We did provide some python3 conversion bits as we hit them but I felt like it was more of just having to find all the places where the package names needed to be changed and adding logic. 
There isn't anything wrong with the Kolla images themselves; it's more that rebuilding containers, and how they are consumed, isn't exactly solved by any of the items being discussed. I've always been a proponent of building blocks and letting folks put them together to make something that fits their needs. The current discussions around containers don't seem to be aligned with this. We're currently investigating how we can create building blocks that can be consumed to result in containers: 1) container file generation, 2) building, 3) distribution. The first item is a global problem and is really the main thing that people will continue to struggle with, as it depends on what you're packaging together. Be it UBI8+RPMs, Ubuntu+debian packaging, Ubuntu+cloud dpkgs, Clear Linux+source, etc., that all gets defined in the container file. The rest is building from that and distributing the output. Kolla today does all three things and allows for any of the base container + packaging methods. Since we (tripleo) need these 3 items to remain distinct blocks for various reasons, we would like to see them remain independent, but that seems to go against what anyone else wants. > Best regards > Tobias > ________________________________________ > From: Alex Schultz > Sent: Sunday, May 3, 2020 9:26 PM > To: Jeremy Stanley > Cc: OpenStack Discuss > Subject: Re: [tripleo] Container image tooling roadmap > > On Sat, May 2, 2020 at 7:45 AM Jeremy Stanley wrote: > > > > On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote: > > > As you may have seen, the TripleO project has been testing the > > > idea of building container images using a simplified toolchain > > [...] > > > > Is there an opportunity to collaborate around the proposed plan for > > publishing basic docker-image-based packages for OpenStack services?
> > > > https://review.opendev.org/720107 > > > > Obviously you're aiming at solving this for a comprehensive > > deployment rather than at a packaging level, just wondering if > > there's a way to avoid having an explosion of different images for > > the same services if they could ultimately use the same building > > blocks. (A cynical part of me worries that distro "party lines" will > > divide folks on what the source of underlying files going into > > container images should be, but I'm sure our community is better > > than that, after all we're all in this together.) > > > > I think this assumes we want an all-in-one system to provide > containers. And we don't. That I think is the missing piece that > folks don't understand about containers and what we actually need. > > I believe the issue is that the overall process to go from zero to an > application in the container is something like the following: > > 1) input image (centos/ubi0/ubuntu/clear/whatever) > 2) Packaging method for the application (source/rpm/dpkg/magic) > 3) dependencies provided depending on item #1 & 2 > (venv/rpm/dpkg/RDO/ubuntu-cloud/custom) > 4) layer dependency declaration (base -> nova-base -> nova-api, > nova-compute, etc) > 5) How configurations are provided to the application (at run time or at build) > 6) How application is invoked when container is ultimately launched > (via docker/podman/k8s/etc) > 7) Container build method (docker/buildah/other) > > The answer to each one of these is dependent on the expectations of > the user or application consuming these containers. Additionally this > has to be declared for each dependent application as well > (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost > because it needs to support any number of combinations for each of > these. Today TripleO doesn't use the build method provided by Kolla > anymore because we no longer support docker. 
This means we only use > Kolla to generate Dockerfiles as inputs to other processes. It should > be noted that we also only want Dockerfiles for the downstream because > they get rebuilt with yet another different process. So for us, we > don't want the container and we want a method for generating the > contents of the container. > > IMHO containers are just glorified packaging (yet again and one that > lacks ways of expressing dependencies which is really not beneficial > for OpenStack). I do not believe you can or should try to unify the > entire container declaration and building into a single application. > You could rally around a few different sets of tooling that could > provide you the pieces for consumption. e.g. A container file > templating engine, a building engine, and a way of > expressing/consuming configuration+execution information. > > I applaud the desire to try and unify all the things, but as we've > seen time and time again when it comes to deployment, configuration > and use cases. Trying to solve for all the things ends up having a > negative effect on the UX because of the complexity required to handle > all the cases (look at tripleo for crying out loud). I believe it's > time to stop trying to solve all the things with a giant hammer and > work on a bunch of smaller nails and let folks construct their own > hammer. > > Thanks, > -Alex > > > > > > Either way, if they can both make use of the same speculative > > container building workflow pioneered in Zuul/OpenDev, that seems > > like a huge win (and I gather the Kolla "krew" are considering > > redoing their CI jobs along those same lines as well). 
> > -- > > Jeremy Stanley > > > > From rosmaita.fossdev at gmail.com Sun May 3 21:39:29 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Sun, 3 May 2020 17:39:29 -0400 Subject: [cinder] critical bugs/patches for RC-2 In-Reply-To: <15374c8c-ba22-990b-58aa-ba04cdfe5f79@gmail.com> References: <15374c8c-ba22-990b-58aa-ba04cdfe5f79@gmail.com> Message-ID: Thanks to everyone who reviewed release-critical patches. Everything has merged, and I've proposed the release of cinder RC-2: https://review.opendev.org/#/c/725143/ This will be the official Ussuri release of cinder unless someone finds a critical bug in the next few days. (The final release candidate must be cut on May 7.) So if you do find something critical, please make some noise in #openstack-cinder or here on the ML or on the ussuri release etherpad: https://etherpad.opendev.org/p/cinder-ussuri-rc-backport-potential cheers, brian On 5/1/20 8:37 AM, Brian Rosmaita wrote: > I realize that today and this weekend are holidays in various parts of > the world, so I wish a happy holiday to everyone (even us poor suckers > who are working today). > > On the off chance that you get bored and are looking for something to > lively up yourself, members of cinder-stable-maint can make themselves > extremely helpful by reviewing the "approved for backport" patches on > the etherpad: >   https://etherpad.opendev.org/p/cinder-ussuri-rc-backport-potential > > We can cut RC-2 as soon as those land. > > cheers, > brian > > > On 4/27/20 11:39 AM, Brian Rosmaita wrote: >> The proposed list is here: >>    https://etherpad.opendev.org/p/cinder-ussuri-rc-backport-potential >> >> Please prioritize your reviewing accordingly. >> >> There are several proposed but not approved bugs on the list; I could >> use some feedback on whether they are release-critical or not.  Please >> reply on the etherpad. 
>> >> >> thanks, >> brian > From fungi at yuggoth.org Sun May 3 21:42:40 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 3 May 2020 21:42:40 +0000 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <1588537822054.1034@binero.com> Message-ID: <20200503214240.iwjfp3bhygf7pbzc@yuggoth.org> On 2020-05-03 15:32:50 -0600 (-0600), Alex Schultz wrote: [...] > I've always been a proponent of building blocks and letting folks put > them together to make something that fits their needs. The current > discussions around containers doesn't seem to be aligned with this. > We're currently investigating how we can create building blocks that > can be consumed to result in containers. 1) container file generation, > 2) building 3) distribution. The first item is a global problem and > is really the main thing that people will continue to struggle with as > it depends on what you're packaging together. Be it UBI8+RPMS, > Ubuntu+debian packaging, Ubuntu+cloud dpkgs, Clear LInux+source, etc. > That all gets defined in the container file. The rest is building > from that and distributing the output. Kolla today does all three > things and allows for any of the base container + packaging methods. > Since we (tripleo) need these 3 items to remain distinct blocks for > various reasons, we would like to seem them remain independent but > that seems to go against what anyone else wants. [...] My understanding of Mohammed's proposal is that he wants to create basic building blocks out of Docker container images which can be reused in a variety of contexts. Opinionated on the underlying operating system and installation method, it seems like the suggestion is to have something akin to Python sdist or wheel packages, but consumable from DockerHub using container-oriented tooling instead of from PyPI using pip. 
The images would in theory be built in a templated and consistent fashion taking cues from the Python packaging metadata, similar to Monty's earlier PBRX experiment maybe. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ruslanas at lpic.lt Sun May 3 22:09:51 2020 From: ruslanas at lpic.lt (Ruslanas Gžibovskis) Date: Mon, 4 May 2020 00:09:51 +0200 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <20200503214240.iwjfp3bhygf7pbzc@yuggoth.org> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <1588537822054.1034@binero.com> <20200503214240.iwjfp3bhygf7pbzc@yuggoth.org> Message-ID: Hi all, Sorry to jump in, but I have chosen to use TripleO with CentOS (around 10 clouds with 16 racks), and several production clouds as RHOSP (3). Not sure how using ubi8 impacts CentOS 8 and future releases... Also, I loved the concept that Emilien and Alex mention. Quote by Emilien: "Our proposal is going to make it extremely simple: one YAML file per image with no downstream version of it. No extra overrides; no complications. One file, consumed upstream, downstream everywhere. As for customers/partners/third party; it'll be very easy to create their own images. The new interface is basically the Dockerfile one; and we'll make sure this is well documented with proper examples (e.g. neutron drivers, etc)." Also, Alex's idea, as I understood it: "The user selects whichever image base they want (OpenBSD + pip install of OSP components, or maybe even an exact build and/or tag from github, or even a local copy?!) in one simple build file, and generates those images in their own environment, and places them in the "undercloud" or a local docker/podman repo."
That would be perfect for me: as my second step, once I have all setup deviations in place, I might like/need to add some additional tools into the containers, and also some light infra modifications like logging... Maybe TripleO already has this covered; I will need to dig into it. Sorry for making you read all of this (if someone did), as I could only help by doing deployments and running some tests (but running tests is the OSP part, not TripleO) with high throughput (around 17 Mpps [million packets per second] with small packets) and running instances with 2000 IP addresses on a single port, as the app is too old to rewrite, but it is still in use and will be :) and, I believe, even some of you use it indirectly even now :) Thank you Have a nice daytime and keep a smile on your face! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Sun May 3 22:20:03 2020 From: ruslanas at lpic.lt (Ruslanas Gžibovskis) Date: Mon, 4 May 2020 00:20:03 +0200 Subject: [TripleO][Rdo][Train] Deploy OpenStack using only provisioning network Message-ID: Hi all, I am doing some testing and will do some deployment on some remote hosts. Remote hosts will use a provider network only, specific to each compute. I was thinking: do I really need all the External, InternalAPI, Storage, StorageManagement, Tenant networks provided to all of the nodes? Maybe I could use the Provision network for all of that, and make a swift/glance copy on all computes to provide local images. I understand that if I do not have a tenant network, VMs in the same project but in different sites will not see each other, but that is OK at the moment. Thank you for your help -- Ruslanas Gžibovskis +370 6030 7030
URL: From smooney at redhat.com Sun May 3 22:47:00 2020 From: smooney at redhat.com (Sean Mooney) Date: Sun, 03 May 2020 23:47:00 +0100 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <20200503214240.iwjfp3bhygf7pbzc@yuggoth.org> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <1588537822054.1034@binero.com> <20200503214240.iwjfp3bhygf7pbzc@yuggoth.org> Message-ID: On Sun, 2020-05-03 at 21:42 +0000, Jeremy Stanley wrote: > On 2020-05-03 15:32:50 -0600 (-0600), Alex Schultz wrote: > [...] > > I've always been a proponent of building blocks and letting folks put > > them together to make something that fits their needs. The current > > discussions around containers doesn't seem to be aligned with this. > > We're currently investigating how we can create building blocks that > > can be consumed to result in containers. 1) container file generation, > > 2) building 3) distribution. The first item is a global problem and > > is really the main thing that people will continue to struggle with as > > it depends on what you're packaging together. Be it UBI8+RPMS, > > Ubuntu+debian packaging, Ubuntu+cloud dpkgs, Clear LInux+source, etc. > > That all gets defined in the container file. The rest is building > > from that and distributing the output. Kolla today does all three > > things and allows for any of the base container + packaging methods. > > Since we (tripleo) need these 3 items to remain distinct blocks for > > various reasons, we would like to seem them remain independent but > > that seems to go against what anyone else wants. > > [...] > > My understanding of Mohammed's proposal is that he wants to create > basic building blocks out of Docker container images which can be > reused in a variety of contexts those images will not be usable by a downstream product due to the enforced choice of distro and install method, and the fact that they include packages that are not supported or exclude packages that are.
so they won't be usable as a building block for ooo (since the proposed images are deb-based) or RHOSP (since they are not built with the Red Hat rpms on RHEL with our downstream backports). the docker files also won't be reusable unless they are configurable, at which point you are back to kolla again, or each vendor will need to use their own solution. as a side note i am not sold on Mohammed's proposal being the right path forward vs just creating another project for that goal. for example, if the intent of using the python slim image as a base is to have a small base image that facilitates deps being installed via pip, i would suggest that we should be looking at python:alpine instead, not the debian buster slim image. > . Opinionated on the underlying > operating system and installation method, it seems like the > suggestion is to have something akin to Python sdist or wheel > packages, but consumable from DockerHub using container-oriented > tooling instead of from PyPI using pip. > The images would in theory > be built in a templated and consistent fashion taking cues from the > Python packaging metadata, similar to Monty's earlier PBRX > experiment maybe. well that is kind of what kolla was trying to do in a way: provide a set of templates to build images in a consistent fashion. i think we could get even more uniformity in the kolla images by relying more on bindep, having a set of bindep labels per image to control what gets installed. From tdecacqu at redhat.com Sun May 3 23:07:56 2020 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sun, 03 May 2020 23:07:56 +0000 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <1588537822054.1034@binero.com> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <1588537822054.1034@binero.com> Message-ID: <87v9lck46r.tristanC@fedora> On Sun, May 03, 2020 at 20:30 Tobias Urdin wrote: > Interesting discussion going on in this thread!
> > From a mostly operator viewpoint (and CentOS-only shop) we've closely been monitoring what way > Red Hat's enterprise based products has been taking and been planing for a while to explore the > Kolla-based containers route due to the backing of Red Hat's usage. > > Now based on TripleO's usecase it's understandable to make their tooling easier but I think this > also starts rooting into the OpenStack official containers and "make OpenStack more application like" > that Mohammed Naser was digging into with his TC application. > > Operating an OpenStack deployment today is hard, but much better and enterprise than before, > upgrading OpenStack now is a dream (jumped Rocky -> Train just some weeks ago) compared to > before but my opinion is that we are missing a lot of view on OpenStack as an application today. > > It would be sad to see Red Hat's involvement in Kolla scale down. > Just my 2c which probably was mostly offtopic from TripleO, my apologies for that. > > Best regards > Tobias > ________________________________________ > From: Alex Schultz > Sent: Sunday, May 3, 2020 9:26 PM > To: Jeremy Stanley > Cc: OpenStack Discuss > Subject: Re: [tripleo] Container image tooling roadmap > > On Sat, May 2, 2020 at 7:45 AM Jeremy Stanley wrote: >> >> On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote: >> > As you may have seen, the TripleO project has been testing the >> > idea of building container images using a simplified toolchain >> [...] >> >> Is there an opportunity to collaborate around the proposed plan for >> publishing basic docker-image-based packages for OpenStack services? >> >> https://review.opendev.org/720107 >> >> Obviously you're aiming at solving this for a comprehensive >> deployment rather than at a packaging level, just wondering if >> there's a way to avoid having an explosion of different images for >> the same services if they could ultimately use the same building >> blocks. 
(A cynical part of me worries that distro "party lines" will >> divide folks on what the source of underlying files going into >> container images should be, but I'm sure our community is better >> than that, after all we're all in this together.) >> > > I think this assumes we want an all-in-one system to provide > containers. And we don't. That I think is the missing piece that > folks don't understand about containers and what we actually need. > > I believe the issue is that the overall process to go from zero to an > application in the container is something like the following: > > 1) input image (centos/ubi8/ubuntu/clear/whatever) > 2) Packaging method for the application (source/rpm/dpkg/magic) > 3) dependencies provided depending on item #1 & 2 > (venv/rpm/dpkg/RDO/ubuntu-cloud/custom) > 4) layer dependency declaration (base -> nova-base -> nova-api, > nova-compute, etc) > 5) How configurations are provided to the application (at run time or at build) > 6) How application is invoked when container is ultimately launched > (via docker/podman/k8s/etc) > 7) Container build method (docker/buildah/other) > > The answer to each one of these is dependent on the expectations of > the user or application consuming these containers. Additionally this > has to be declared for each dependent application as well > (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost > because it needs to support any number of combinations for each of > these. Today TripleO doesn't use the build method provided by Kolla > anymore because we no longer support docker. This means we only use > Kolla to generate Dockerfiles as inputs to other processes. It should > be noted that we also only want Dockerfiles for the downstream because > they get rebuilt with yet another different process. So for us, we > don't want the container and we want a method for generating the > contents of the container.
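Alex's seven build decisions can be illustrated with a small sketch. This is illustrative code only, not TripleO's or Kolla's actual tooling — the template, service, and package names are all assumptions — but it shows the idea of a "container file templating engine" where each decision is an explicit input rather than something a single monolithic tool owns:

```python
# Sketch of a minimal container-file templating engine: each build decision
# (base image, packaging method, configuration, entrypoint) is an explicit
# input. All names here are illustrative, not any real project's tooling.
from string import Template

CONTAINERFILE = Template("""\
FROM ${base_image}
RUN ${install_cmd} ${packages}
COPY ${config_dir} /etc/${service}/
ENTRYPOINT ["${entrypoint}"]
""")

def render_containerfile(service, base_image, install_cmd, packages,
                         config_dir, entrypoint):
    """Render one Containerfile from the per-service decisions."""
    return CONTAINERFILE.substitute(
        service=service,
        base_image=base_image,      # decision 1: input image
        install_cmd=install_cmd,    # decisions 2/3: packaging method + deps
        packages=" ".join(packages),
        config_dir=config_dir,      # decision 5: config provided at build
        entrypoint=entrypoint,      # decision 6: how the app is invoked
    )

print(render_containerfile(
    "keystone", "centos:8", "dnf install -y",
    ["openstack-keystone", "httpd"], "configs/keystone", "/usr/sbin/httpd"))
```

Decisions 4 and 7 (layer ordering and which tool builds the rendered file) would sit outside such a renderer, which is exactly where the complexity Alex describes starts to accumulate.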
> > IMHO containers are just glorified packaging (yet again, and one that > lacks ways of expressing dependencies, which is really not beneficial > for OpenStack). I do not believe you can or should try to unify the > entire container declaration and building into a single application. > You could rally around a few different sets of tooling that could > provide you the pieces for consumption. e.g. A container file > templating engine, a building engine, and a way of > expressing/consuming configuration+execution information. > > I applaud the desire to try and unify all the things, but as we've > seen time and time again when it comes to deployment, configuration, > and use cases, trying to solve for all the things ends up having a > negative effect on the UX because of the complexity required to handle > all the cases (look at tripleo for crying out loud). I believe it's > time to stop trying to solve all the things with a giant hammer and > work on a bunch of smaller nails and let folks construct their own > hammer. > > Thanks, > -Alex > > > > >> Either way, if they can both make use of the same speculative >> container building workflow pioneered in Zuul/OpenDev, that seems >> like a huge win (and I gather the Kolla "krew" are considering >> redoing their CI jobs along those same lines as well). >> -- >> Jeremy Stanley Is it defined somewhere how to update the images? In particular, is the intermediary layer update taken into account, or is it expected that everything is rebuilt from scratch to get any updates? I can't find much information about that in this proposal or in the tc container-images goal, and that seems like a non-trivial process. Could this be a good opportunity for collaboration? Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From fungi at yuggoth.org Sun May 3 23:26:38 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 3 May 2020 23:26:38 +0000 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <87v9lck46r.tristanC@fedora> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <1588537822054.1034@binero.com> <87v9lck46r.tristanC@fedora> Message-ID: <20200503232638.i3k7f7luvvfplekp@yuggoth.org> On 2020-05-03 23:07:56 +0000 (+0000), Tristan Cacqueray wrote: [...] > Is it defined somewhere how to udpate the images? Images are built when their respective source repositories change. I gather there is work in progress in Zuul to be able to trigger new builds of images when related repositories change as well, though I don't recall the exact details. > In particular, is the intermediary layer update taken into > account, or is it expected that everything is rebuilt from scratch > to get any updates? I won't claim to be an expert in these matters, but my understanding is that Zuul's dependency mechanisms guarantee that an image won't be promoted unless the images it depends on are also promoted. > I can't find much information about that in this proposal or in > the tc container-images goal, and that seems like a non trivial > process. Could this be a good oportunity for collaboration? Absolutely! This is why we're trying to get more projects to try to make use of the workflow, so we have opportunities to improve on it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From marcin.juszkiewicz at linaro.org Mon May 4 06:12:01 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Mon, 4 May 2020 08:12:01 +0200 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <1588537822054.1034@binero.com> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <1588537822054.1034@binero.com> Message-ID: On 03.05.2020 at 22:30, Tobias Urdin wrote: > It would be sad to see Red Hat's involvement in Kolla scale down. > Just my 2c, which probably was mostly offtopic from TripleO, my > apologies for that. I am one of the Kolla core devs. Under my Linaro hat there is also a Red Hat one ;D In Kolla I do mostly AArch64 stuff, keep Debian support alive and fight CI fires. That's because I am a member of the Red Hat ARM team assigned as an engineer to the Linaro project. From cgoncalves at redhat.com Mon May 4 07:26:12 2020 From: cgoncalves at redhat.com (Carlos Goncalves) Date: Mon, 4 May 2020 09:26:12 +0200 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> Message-ID: Hey Ghanshyam, This is pretty good news! Congratulations to the whole team! I posted a patch to migrate the Octavia grenade job to native Zuul v3 [1]. It was really easy and simple to do, so thanks again. One question. The job runs some Tempest tests (Nova, Neutron, ...) right after Grenade. From an Octavia perspective, it is not something we really care about; we would rather run (a subset of) Octavia tests, or none. Is this possible? In patch set 6 [2], I tried to disable Tempest completely. I did so by disabling "tempest" in devstack_services and setting ENABLE_TEMPEST=False in the Grenade plugin settings file. Yet, Tempest tests still ran.
Looking at the grenade job definition, it includes the tempest role -- is this correct? I would expect Tempest to run only if ENABLE_TEMPEST=True. Thanks, Carlos [1] https://review.opendev.org/#/c/725098/ [2] https://review.opendev.org/#/c/725098/6/ On Sat, May 2, 2020 at 2:40 AM Ghanshyam Mann wrote: > Hello Everyone, > > Finally, after so many cycles, the grenade base job has been migrated to > zuulv3 native[1]. This is > merged in the Ussuri branch. > > Thanks to tosky for keeping at this and finishing it!! > > The new jobs 'grenade', 'grenade-multinode', 'grenade-postgresql' and > 'grenade-forward' are > zuulv3 native now, and we kept the old job name 'grenade-py3' as an alias to > 'grenade' so that it would > not break the gate using 'grenade-py3'. > > * Start using this new base job for your project-specific grenade jobs. > > * Migration to the new job name: > The integrated template and projects that are using the old job name 'grenade-py3' > need to move to > the new job name. The job is defined in the integrated template and also in the > project's zuul.yaml for irrelevant-files. > So we need to switch both places at the same time, otherwise you will be > seeing the two jobs running on your gate > (master as well as on Ussuri). > > The integrated service-specific template has switched to the new job[2], which > means you might see two jobs, > 'grenade' & 'grenade-py3', running on your gate until you change > .zuul.yaml. Example: Nova did this for the master as well as the Ussuri gate - > https://review.opendev.org/#/q/I212692905a1d645cd911c2a161c13c794c0e0f4d > > This needs to be done in Ussuri also, as Tempest, where the integrated templates > are present, is branchless, so apply the change for the Ussuri job too. > > [1] https://review.opendev.org/#/c/548936/ > [2] https://review.opendev.org/#/c/722551/ > > -gmann > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ltoscano at redhat.com Mon May 4 08:03:22 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Mon, 04 May 2020 10:03:22 +0200 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> Message-ID: <7812278.T7Z3S40VBb@whitebase.usersys.redhat.com> On Monday, 4 May 2020 09:26:12 CEST Carlos Goncalves wrote: > Hey Ghanshyam, > > This is pretty good news! Congratulations to the whole team! > > I posted a patch to migrate the Octavia grenade job to native Zuul v3 [1]. > It was really easy and simple to do, so thanks again. > > One question. The job runs some Tempest tests (Nova, Neutron, ...) right > after Grenade. From an Octavia perspective, it is not something we really > care about; we would rather run (a subset of) Octavia > tests, or none. Is this possible? Yes, you can definitely choose the subset of tests. That behavior is driven by the same variable as in devstack-tempest jobs, which is tempest_test_regex. You can check an example here: https://review.opendev.org/#/c/638390/41/.zuul.yaml > > In patch set 6 [2], I tried to disable Tempest completely. I did so by > disabling "tempest" in devstack_services and setting ENABLE_TEMPEST=False in > the Grenade plugin settings file. Yet, Tempest tests still ran. Looking at > the grenade job definition, it includes the tempest role -- is this > correct? I would expect Tempest to run only if ENABLE_TEMPEST=True. That variable only controls the smoke tests executed post-upgrade (currently disabled), not the job-specific tempest tests. I realized now I have a long-standing note to fix that, but it's not critical anyway in order to implement your requirement. Please remember to use the "native-zuulv3-migration" topic for those kinds of changes (they are part of the more general "removal of legacy jobs" goal for Victoria).
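Sketched as a .zuul.yaml fragment, the mechanism Luigi describes might look roughly like this (the job name and regex value are illustrative assumptions, not Carlos's actual patch; tempest_test_regex is the variable named above, and the linked review shows a real example):

```yaml
- job:
    name: octavia-grenade
    parent: grenade
    vars:
      # Run only the project's own tests after the upgrade;
      # the regex value here is illustrative.
      tempest_test_regex: octavia_tempest_plugin
```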
Ciao -- Luigi From thierry at openstack.org Mon May 4 08:47:00 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 4 May 2020 10:47:00 +0200 Subject: [largescale-sig] Next meeting: May 6, 8utc Message-ID: <8eda3e2d-42ac-553f-6ad0-207bb0a2d7f3@openstack.org> Hi everyone, The Large Scale SIG will have a meeting this week on Wednesday, May 6 at 8 UTC[1] in the #openstack-meeting-3 channel on IRC: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200506T08 Feel free to add topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting A reminder of the TODOs we had from last meeting, in case you have time to make progress on them: - amorin to propose patch against Nova doc - masahito to prepare oslo.metric POC code release - belmoreira to continue working on scaling story on https://etherpad.openstack.org/p/scaling-stories - ttx to talk to oneswig about his bare metal cluster scaling story work item Talk to you all on Wednesday, -- Thierry Carrez From mark at stackhpc.com Mon May 4 08:53:58 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 4 May 2020 09:53:58 +0100 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: Message-ID: On Fri, 1 May 2020 at 21:19, Kevin Carter wrote: > > Hello Stackers, > > As you may have seen, the TripleO project has been testing the idea of building container images using a simplified toolchain [0]. The idea is to build smaller, more easily maintained images to simplify the lives of TripleO consumers. Since TripleO's move to containers, the project has been leveraging Kolla to provide Dockerfiles, and while this has worked, TripleO has created considerable tooling to bend Kolla images to its needs. Sadly this has resulted in an image size explosion and the proliferation of difficult to maintain tools, with low bus factors, which are essential to the success of the project. 
To address the risk centered around the TripleO containerization story, we've drafted a spec [0], which we believe outlines a more sustainable future. In this specification, we've designed a new, much more straightforward approach to building container images for the TripleO project. The "simple container generation" specification does not intend to be a general-purpose tool used to create images for the greater OpenStack community; both Loci and Kolla do that already. This effort is to build containers only for TripleO, using distro-provided repositories with distro-maintained tooling. By focusing only on what we need, we're able to remove all general-purpose assumptions and create a vertically integrated stack resulting in a much smaller surface area. > > To highlight how all this works, we've put together several POC changes: > * Role and playbook to implement the Containerfile specification [1]. > * Tripleoclient review to interface with the new role and playbook [2]. > * Directory structure for variable file layout [3]. > * To see how this works using the POC code, building images we've tested in real deployments, please watch the ASCII-cast [4]. > * Example configuration files are here [5][6][7]. > > A few examples of size comparisons between our proposed tooling versus current Kolla-based images [8]: > - base: > + Kolla: 588 MB > - new: 211 MB # based on ubi8, smaller than centos8 > - nova-base: > + Kolla: 1.09 GB > - new: 720 MB > - nova-libvirt: > + Kolla: 2.14 GB > - new: 1.9 GB > - keystone: > + Kolla: 973 MB > - new: 532 MB > - memcached: > + Kolla: 633 MB > - new: 379 MB > > While the links shown are many, the actual volume of the proposed change is small, although the impact is massive: > * With tools that are more straightforward to understand, we'll be able to get broader participation from the TripleO community to maintain our containerization efforts and extend our service capabilities.
> * With smaller images, the TripleO community will serve OpenStack deployers and operators better; less bandwidth consumption and faster install times. > > We're looking to chart a more reliable path forward, and making the TripleO user experience a great one. While the POC changes are feature-complete and functional, more work is needed to create the required variable files; however, should the proposed specification be ratified, we expect to make quick work of what's left. As such, if you would like to be involved or have any feedback on anything presented here, please don't hesitate to reach out. > > We aim to provide regular updates regarding our progress on the "Simple container generation" initiative, so stay tuned. Thanks for sharing this Kevin & Emilien. I have mixed feelings about it. Of course it is sad to see a large consumer of the Kolla images move on and build something new, rather than improving common tooling. OTOH, I can see the reasons for doing it, and it has been clear since Martin Andre left the core team that there was not too much interest from Red Hat in pushing the Kolla project forward. I will resist the urge to bikeshed on image size :) I will make an alternative proposal, although I expect it has at least one fatal flaw. Kolla is composed mainly of two parts - a kolla-build Python tool (the aforementioned *glorious* templating engine) and a set of Dockerfile templates. It's quite possible to host your own collection of Dockerfile templates, and pass these to kolla-build via --docker-dir. Tripleo could maintain its own set, and cut out the uninteresting parts over time. The possibly fatal flaw? It wouldn't support the buildah shell format. It is just another template format though, perhaps it could be made to work with minimal changes. The benefit would be that you get to keep the template override format that is (presumably) exposed to users. 
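As an aside, the Kolla-versus-new size figures Kevin quoted above work out to the following relative reductions (a rough sketch; 1 GB is counted as 1000 MB here, and the sizes are taken verbatim from the thread):

```python
# Percent reductions implied by the image sizes quoted in Kevin's mail.
SIZES_MB = {
    "base": (588, 211),
    "nova-base": (1090, 720),
    "nova-libvirt": (2140, 1900),
    "keystone": (973, 532),
    "memcached": (633, 379),
}

def reduction_pct(kolla_mb, new_mb):
    """Percentage saved relative to the Kolla image size."""
    return round(100.0 * (kolla_mb - new_mb) / kolla_mb, 1)

for name, (kolla_mb, new_mb) in SIZES_MB.items():
    print(f"{name}: {reduction_pct(kolla_mb, new_mb)}% smaller")
```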
If you do decide to go (I expect that decision has already been made), please keep us in the loop. We will of course continue to support TripleO for as long as necessary, but if you could provide us with a timeframe it will help us to plan life after TripleO. We're also looking for ways to streamline and simplify, and as Alex mentioned, this opens up some new possibilities. At the very least we can drop the tripleoclient image :) Finally, if you could share any analysis that ends up being done on the images, and outcomes from it (e.g. drop these packages), that would be a nice parting gift. > > Thanks, > > Kevin and Emilien > > [0] https://review.opendev.org/#/c/723665/ > [1] https://review.opendev.org/#/c/722557/ > [2] https://review.opendev.org/#/c/724147/ > [3] https://review.opendev.org/#/c/722486/ > [4] https://files.macchi.pro:8443/demo-container-images/ > [5] http://paste.openstack.org/show/792995/ > [6] http://paste.openstack.org/show/792994/ > [7] http://paste.openstack.org/show/792993/ > [8] https://4ce3fa2efa42bb6b3a69-771991cd07d409aaec3e4ca5eafdd7e0.ssl.cf2.rackcdn.com/724436/2/check/tripleo-build-containers-centos-8/78eefc3/logs/containers-successfully-built.log > > Kevin Carter > IRC: kecarter From root.mch at gmail.com Mon May 4 09:30:00 2020 From: root.mch at gmail.com (=?UTF-8?Q?=C4=B0zzettin_Erdem?=) Date: Mon, 4 May 2020 12:30:00 +0300 Subject: Fwd: [MURANO] DevStack Murano "yaql_function" Error In-Reply-To: References: Message-ID: Thanks Andy. Regards. Forwarded Conversation Subject: [MURANO] DevStack Murano "yaql_function" Error ------------------------ From: Andy Botting Date: Thu, 30 Apr 2020, 14:13 To: İzzettin Erdem Cc: Rong Zhu , < openstack-discuss at lists.openstack.org> Hi İzzettin, Thanks for your interest. The complete log link is below. It is happening > when I try to deploy or update an environment.
> > http://paste.openstack.org/show/792444/ > I've encountered the error AttributeError: 'method' object has no attribute '__yaql_function__' in our environment recently. I believe it started after we upgraded to the Stein release. I haven't tracked it down yet, but calls to the YAQL function std:Project.getEnvironmentOwner() in a Murano app seem to generate it. It's good to know that at least someone else has reproduced this. cheers, Andy ---------- From: İzzettin Erdem Date: Sat, 2 May 2020, 16:08 To: Andy Botting Hi Andy, I solved this problem: it happens when you try to create or update an environment on an existing network. If you change the network with the "create new" parameter, it gets stuck in the "creating -or updating- environment" state and after that throws a connection error. So I searched Murano network errors and I found this: https://docs.oracle.com/cd/E73172_01/E73173/html/issue-21976631.html. I applied these steps and successfully created an environment, but only with the "create new" network parameter. After all this, I realized my devstack environment has no internet connection, and because of this the murano-agent could not be installed. So I tried to install Murano on my OSA stable/train test environment and got as far as the last step. The murano-agent installed, but this time the agent could not connect to the rabbitmq instance. I discussed this in the OSA IRC channel and I think I will have to struggle with this for a long time. I apologize for this long mail, but I wanted to inform you. Thanks. Regards. ---------- From: Andy Botting Date: Sun, 3 May 2020, 14:24 To: İzzettin Erdem Hi İzzettin, Glad you got it sorted. I think you've probably just worked around the issue. Using your new method, you're probably just avoiding the std:Project.getEnvironmentOwner YAQL call now. I've also worked around it in our environment, but I'd like to solve it properly. When I get some time to look into it properly, I'll let you know.
cheers, Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon May 4 10:33:14 2020 From: zigo at debian.org (Thomas Goirand) Date: Mon, 4 May 2020 12:33:14 +0200 Subject: [puppet] Configuring Debian's uwsgi for each service In-Reply-To: <1588537076621.82076@binero.com> References: <9f524f08-9f03-f670-b5bc-47806b0a7f60@debian.org> <1588537076621.82076@binero.com> Message-ID: <31d6ecf7-be47-6425-b6dc-75184bc29c42@debian.org> On 5/3/20 10:17 PM, Tobias Urdin wrote: > Hello Thomas, > Thanks for starting this thread! > > I've been thinking about this as well, but for another reason: how we can continue to abstract the service layer > used in the Puppet modules. > > The most deployed way right now is the ::wsgi::apache (using puppetlabs-apache). > > Do you think there is a way we could move uWSGI into its own ::wsgi::uwsgi layer and then use an > upstream uWSGI module to do the actual configuration? Since uWSGI isn't in the OpenStack scope, same as > Apache configuration, hence the usage of puppetlabs-apache. > > I've been thinking about a similar concept with other service layers, like Docker with Kolla containers etc., even > though it's kind of out of scope for the Puppet OpenStack project, just thoughts. > > Best regards > Tobias Hi Tobias, What you must know is that uwsgi is not in main in Ubuntu (it is in "universe", meaning it's just imported from Debian Sid when they release the LTS). This means that Canonical doesn't provide support for it at all. If one day there's a security issue in it, Canonical will claim it is "community maintained" and therefore will put no effort into upgrading it. The consequence is that it's not reasonable to use uwsgi in Ubuntu, unless the situation with this package changes. We could still support it, but I wouldn't recommend it for security and maintainability reasons. In Debian, the situation is very different.
All OpenStack services are using uwsgi directly, and don't need to be configured in any way to use uwsgi. You just install -api and it runs over UWSGI. So, in terms of puppet, for Debian, the only thing we need to do is to use something like this: service{ 'foo-api':} which looks like what we used to do with Eventlet. Now, I don't know what the situation is on CentOS/Red Hat. If you think it's valuable to support uwsgi there, then why not. But I'm not interested in contributing it. What I care about right now is being able to configure the uwsgi thing for the API. Namely, being able to configure the "processes" directive is the most important bit. Just having the puppet provider could be enough, though integrating it with the API's ::api "workers" parameter would be nice. Your thoughts? Cheers, Thomas Goirand (zigo) From amoralej at redhat.com Mon May 4 10:49:33 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Mon, 4 May 2020 12:49:33 +0200 Subject: cross distro Virtual PTG planning In-Reply-To: <20200426172845.wy6evtwrbslyeuct@mthode.org> References: <833d3d49-ac6b-dff4-0a21-bef2701115c7@debian.org> <20200426172845.wy6evtwrbslyeuct@mthode.org> Message-ID: I'm glad to participate from the RDO side (maybe others may join too), although I'm not sure about the agenda. We can start discussing topics in the etherpad. Regards, Alfredo On Sun, Apr 26, 2020 at 7:32 PM Matthew Thode wrote: > On 20-04-26 18:54:24, Thomas Goirand wrote: > > Dear package maintainers, > > > > We used to have a cross-distro discussion during each PTG (well, > > summits, since I didn't attend any PTG since the split). > > > > I was wondering if people from other distros would be interested in > > having such a session again. Please tell us if you are, as a follow-up > > here, then maybe we can add ourselves in the ethercalc. > > > > I used to know everyone, like Haikel, people from SuSE, etc. Nowadays, I > > only know people from Canonical. I'd love to meet the new guys around.
> > > > I'd be up for something (gentoo packager). Though I'm not too sure what > we'd go over. I target upper-constraints every six months (speaking of, > have to start soon), so not much to do about it. > > -- > Matthew Thode > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon May 4 13:32:24 2020 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 4 May 2020 09:32:24 -0400 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: Message-ID: Hi Mark, On Mon, May 4, 2020 at 5:02 AM Mark Goddard wrote: > Thanks for sharing this Kevin & Emilien. I have mixed feelings about > it. Of course it is sad to see a large consumer of the Kolla images > move on and build something new, rather than improving common tooling. > Please keep in mind that we're not doing this work because we're seeking work (we're already pretty busy with some other topics); the container images discussion has been on the table for a very long time, and I think it's the right time to finally take action. If we decide to "leave Kolla", you can count on us if help is needed for anything. As for "build something new rather than improving common tooling", Alex answered that much better than I can: "(...) Trying to solve for all the things ends up having a negative effect on the UX because of the complexity required to handle all the cases. (...)" The way Kevin and I worked is simple: we met with the folks who actually built the containers for TripleO and acknowledged the gap between the upstream tooling and how it's being consumed downstream. There is a high amount of complexity in the middle that we aim to solve in our proposed solution. For example, one condition of satisfaction for the new tooling is that it has to be directly consumable without having to override anything, with the code/configs 100% upstream.
With the Kolla tooling, we are far from it: - TripleO overrides in tripleo-common upstream - overrides in tripleo-common downstream - multiple hacks in tooling to build containers for OSP Not great uh? Yes, someone could blame Red Hat for not contributing all the changes back to Kolla but this isn't that easy if you have been involved in these efforts yourself. For once, we have a strong desire to make it 100% public with no hacks, and it should never affect our TripleO consumers (upstream or downstream). In fact, it'll enable more integration to patch containers from within our Deployment Framework and solve other problems not covered in that thread (hotfix, etc). OTOH, I can see the reasons for doing it, and it has been clear since > Martin Andre left the core team that there was not too much interest > from Red Hat in pushing the Kolla project forward. > To be fair, no. The review velocity in Kolla is not bad at all and you folks are always nice to work with. Really. > I will resist the urge to bikeshed on image size :) > So talking about image size in the thread was my idea and I shouldn't have done that. I'll say it again here: the image size wasn't our main driver for that change. It has always been a bonus only. I rebuilt images with our new tooling last night; based on centos8 and I got the same sizes as the Kolla images. So unless we go with ubi8 (which I know could have been done with Kolla as well I'm sure); the image size won't be any smaller. I will make an alternative proposal, although I expect it has at least > one fatal flaw. Kolla is composed mainly of two parts - a kolla-build > Python tool (the aforementioned *glorious* templating engine) and a > set of Dockerfile templates. It's quite possible to host your own > collection of Dockerfile templates, and pass these to kolla-build via > --docker-dir. Tripleo could maintain its own set, and cut out the > uninteresting parts over time. The possibly fatal flaw? 
It wouldn't > support the buildah shell format. It is just another template format > though, perhaps it could be made to work with minimal changes. The > benefit would be that you get to keep the template override format > that is (presumably) exposed to users. > This should _at the very least_ be a documented alternative in the spec proposed by Kevin. I agree we should take a look; I just discussed it with Kevin and we'll do it this week and report back. If you do decide to go (I expect that decision has already been made), > please keep us in the loop. No decision has been made. To be fully transparent, Kevin and I worked on a prototype for 6 days, proposed a spec on the 7th day, and here we are. We'll do this in the open and again in full transparency; we acknowledge that we have put ourselves in that position, but we aim to fix it. > We will of course continue to support TripleO for as long as necessary, > but if you could provide us with a > timeframe it will help us to plan life after TripleO. We're also > looking for ways to streamline and simplify, and as Alex mentioned, > this opens up some new possibilities. At the very least we can drop > the tripleoclient image :) > re: tripleoclient image: let's remove it now. It was never useful. I'll propose a patch this week. Yes, we will keep you in the loop, and thanks for your willingness to maintain Kolla during our transition. Note that this is a two-sided thing. We are also happy to keep working with you, maybe on some other aspects (maintaining centos8, CI, etc). Finally, if you could share any analysis that ends up being done on > the images, and outcomes from it (e.g. drop these packages), that > would be a nice parting gift. > Yes, I have a bunch of things that I'm not sure are useful anymore. I'll make sure we document them in an etherpad and share them with you asap. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ekultails at gmail.com Mon May 4 13:37:35 2020 From: ekultails at gmail.com (Luke Short) Date: Mon, 4 May 2020 09:37:35 -0400 Subject: [puppet] Configuring Debian's uwsgi for each service In-Reply-To: <31d6ecf7-be47-6425-b6dc-75184bc29c42@debian.org> References: <9f524f08-9f03-f670-b5bc-47806b0a7f60@debian.org> <1588537076621.82076@binero.com> <31d6ecf7-be47-6425-b6dc-75184bc29c42@debian.org> Message-ID: Hey folks, From a RHEL/CentOS perspective, uWSGI is also not packaged in the main repositories, meaning it is effectively unsupported as well. It is in the Extra Packages for Enterprise Linux (EPEL) repository, but we do not support mixing our RDO (OpenStack) packages with EPEL as it will almost always guarantee issues with dependencies. In the TripleO community, because of this, we have had to stick with using Apache's WSGI module for serving the OpenStack API endpoints. I do think the proposal sounds great, and if anyone wants to work on adding the Puppet bits to support uWSGI then I do not believe anyone will stop you. The real-world testing and usage of it by other distributions may be minimal due to these packaging and support constraints. Alternatively, you may also want to look into OpenStack-Ansible as they heavily use uWSGI with Ubuntu-based containers managed by Ansible. Sincerely, Luke Short On Mon, May 4, 2020 at 6:35 AM Thomas Goirand wrote: > On 5/3/20 10:17 PM, Tobias Urdin wrote: > > Hello Thomas, > > Thanks for starting this thread! > > > > I've been thinking about this as well, but for another reason: how we > can continue to abstract the service layer > > used in the Puppet modules. > > > > The most deployed way right now is the ::wsgi::apache (using > puppetlabs-apache). > > > > Do you think there is a way we could move uWSGI into its own > ::wsgi::uwsgi layer and then use an > > upstream uWSGI module to do the actual configuration? Since uWSGI isn't
Since uWSGI isn't > in the OpenStack scope, same as > > Apache configuration hence the usage of puppetlabs-apache. > > > > I've been thinking about a similar concept with other service layers, > like Docker with Kolla containers etc even > > though it's kind of out of scope for the Puppet OpenStack project, just > thoughts. > > > > Best regards > > Tobias > > Hi Tobias, > > What you must know is that uwsgi is not in main in Ubuntu (it is in > "universe", meaning it's just imported from Debian Sid when they release > the LTS). This means that Canonical doesn't provide support for it at all. > If one day there's a security issue in it, Canonical will claim it is > "community maintained" and therefore they will put no effort into > upgrading it. > > The consequence is that it's not reasonable to use uwsgi in Ubuntu, > unless the situation with this package changes. We could still support > it, but I wouldn't recommend it for security and maintainability reasons. > > In Debian, the situation is very different. All OpenStack services are > using uwsgi directly, and don't need to be configured in any way to use > uwsgi. You just install -api and it runs over uWSGI. So, in terms > of puppet, for Debian, the only thing we need to do is to use something > like this: > > service{ 'foo-api':} > > which looks like what we used to do with Eventlet. > > Now, I don't know what the situation is on CentOS/Red Hat. If you think > it's valuable to support uwsgi there, then why not. But I'm not > interested in contributing it. > > What I care about right now is being able to configure uwsgi for > the API. Namely, being able to configure the "processes" directive is > the most important bit. Just having the puppet provider could be enough, > though integrating it with the API's ::api "workers" parameter > would be nice. > > Your thoughts? > > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... 
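For readers unfamiliar with the directive zigo mentions: in uWSGI's ini format, `processes` sets the number of worker processes. A per-service config might look like the sketch below; the file path and service name are purely illustrative, not taken from the actual Debian packaging.

```ini
; illustrative sketch only -- path and service name are hypothetical
; /etc/nova/nova-api-uwsgi.ini
[uwsgi]
wsgi-file = /usr/bin/nova-api-wsgi
processes = 8        ; the directive a puppet provider would need to manage
threads = 1
master = true
chmod-socket = 666
```

A puppet provider for this would essentially just need to manage the `processes` value, ideally fed from the module's existing "workers" parameter.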
URL: From dtantsur at redhat.com Mon May 4 14:36:47 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 4 May 2020 16:36:47 +0200 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> Message-ID: Hi, Somewhat related: do the new jobs finally allow choosing which tests are run, rather than blindly running all available smoke tests? It has been a huge problem for ironic since the introduction of upgrade testing, and we're discussing migrating away from grenade because of that. Dmitry On Sat, May 2, 2020 at 2:41 AM Ghanshyam Mann wrote: > Hello Everyone, > > Finally, after so many cycles, the grenade base job has been migrated to > zuulv3 native[1]. This is > merged in the Ussuri branch. > > Thanks to tosky for keeping at this and finishing it!! > > The new jobs 'grenade', 'grenade-multinode', 'grenade-postgresql' and > 'grenade-forward' are > zuulv3 native now, and we kept the old job name 'grenade-py3' as an alias to > 'grenade' so that it would > not break gates using 'grenade-py3'. > > * Start using this new base job for your project-specific grenade jobs. > > * Migration to the new job name: > The integrated template and the projects using the old job name 'grenade-py3' > need to move to > the new job name. The job is defined in the integrated template and also in the > project's zuul.yaml for irrelevant-files, > so we need to switch both places at the same time, otherwise you will see > the two jobs running on your gate > (master as well as on Ussuri). > > The integrated service-specific template has switched to the new job[2], which > means you might see two jobs, > 'grenade' & 'grenade-py3', running on your gate until you change > .zuul.yaml. 
Example: Nova did it for the master as well as the Ussuri gate - > https://review.opendev.org/#/q/I212692905a1d645cd911c2a161c13c794c0e0f4d > > It needs to be done in Ussuri also: Tempest, where the integrated templates > are present, is branchless, so apply the change for the Ussuri job as well. > > [1] https://review.opendev.org/#/c/548936/ > [2] https://review.opendev.org/#/c/722551/ > > -gmann > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 4 15:24:38 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 May 2020 10:24:38 -0500 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: <7812278.T7Z3S40VBb@whitebase.usersys.redhat.com> References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> <7812278.T7Z3S40VBb@whitebase.usersys.redhat.com> Message-ID: <171e04a01f3.e3797a8051350.2255238652151694672@ghanshyammann.com> ---- On Mon, 04 May 2020 03:03:22 -0500 Luigi Toscano wrote ---- > On Monday, 4 May 2020 09:26:12 CEST Carlos Goncalves wrote: > > Hey Ghanshyam, > > > > This is pretty good news! Congratulations to the whole team! > > > > I posted a patch to migrate the Octavia grenade job to native Zuul v3 [1]. > > It was really easy and simple to do, so thanks again. > > > > One question. The job runs some Tempest tests (Nova, Neutron, ...) right > > after Grenade. From an Octavia perspective, it is not something we really > > care about; we would rather run (a subset of) Octavia > > tests or none. Is this possible? > > Yes, you can definitely choose the subset of tests. > That behavior is driven by the same variable as in devstack-tempest jobs, > which is tempest_test_regex. > You can check an example here: > https://review.opendev.org/#/c/638390/41/.zuul.yaml There are two runs of Tempest in the grenade job: 1. A smoke test run on the old node before the upgrade. These just verify the installation and are optional to run. 
You can disable those with BASE_RUN_SMOKE=False in your job. I will say, if that run takes time, then disable it, as we discussed for the Ironic case. 2. Upgrade tests. These tests run after the upgrade. The default set of tests run is the smoke tests (tempest + installed plugins), and it can be overridden by tox_envlist and tempest_test_regex (example shown by tosky) the same way a devstack job does. -gmann > > > > > > In patch set 6 [2], I tried to disable Tempest completely. I did so by > > disabling "tempest" in devstack_services and setting ENABLE_TEMPEST=False in > > the Grenade plugin settings file. Yet, Tempest tests still ran. Looking at > > the grenade job definition, it includes the tempest role -- is this > > correct? I would expect Tempest to run only if ENABLE_TEMPEST=True. > > That variable only controls the smoke tests executed post-upgrade (currently > disabled), not the job-specific tempest tests. I realized now I have a long-standing note to fix that, but it's not critical anyway in order to > implement > your requirement. > > Please remember to use the "native-zuulv3-migration" topic for those kinds of > changes (they are part of the more general "removal of legacy jobs" goal for > Victoria). 
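As a concrete sketch of what Luigi and gmann describe above: a project can inherit from the new grenade base job and narrow the post-upgrade Tempest run using the same variables a devstack-tempest job takes. The job name and regex below are illustrative only; tox_envlist and tempest_test_regex are the variable names given in this thread.

```yaml
# .zuul.yaml -- hypothetical project-specific grenade job
- job:
    name: octavia-grenade        # illustrative job name
    parent: grenade
    vars:
      tox_envlist: all
      # run only this plugin's tests after the upgrade, instead of smoke
      tempest_test_regex: ^octavia_tempest_plugin\.
```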
> > Ciao > -- > Luigi > > > > From cgoncalves at redhat.com Mon May 4 15:31:16 2020 From: cgoncalves at redhat.com (Carlos Goncalves) Date: Mon, 4 May 2020 17:31:16 +0200 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: <171e04a01f3.e3797a8051350.2255238652151694672@ghanshyammann.com> References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> <7812278.T7Z3S40VBb@whitebase.usersys.redhat.com> <171e04a01f3.e3797a8051350.2255238652151694672@ghanshyammann.com> Message-ID: On Mon, May 4, 2020 at 5:24 PM Ghanshyam Mann wrote: > ---- On Mon, 04 May 2020 03:03:22 -0500 Luigi Toscano < > ltoscano at redhat.com> wrote ---- > > On Monday, 4 May 2020 09:26:12 CEST Carlos Goncalves wrote: > > > Hey Ghanshyam, > > > > > > This is pretty good news! Congratulations to the whole team! > > > > > > I posted a patch to migrate the Octavia grenade job to native Zuul v3 > [1]. > > > It was really easy and simple to do it, so thanks again. > > > > > > One question. The job runs Tempest some tests (Nova, Neutron, ...) > right > > > after Grenade. From an Octavia perspective, it is not something we > really > > > don't care about but rather would prefer running (a subset of) Octavia > > > tests or none. Is this possible? > > > > Yes, you can definitely check the subset of tests. > > That behavior is driven by the same variable as in devstack-tempest > jobs, > > which is tempest_test_regex. > > You can check an example here: > > https://review.opendev.org/#/c/638390/41/.zuul.yaml > > > There are two run of Tempest in grenade job: > 1. smoke test run on old node before upgrade. Those are just to verify the > installation and optional to run. > You can disable those by BASE_RUN_SMOKE=False in your job. I will say if > that takes time then disable it like > we discussed for Ironic case. > > 2. upgrade tests. These tests run after the upgrade. 
The default set of > test run are smoke tests (tempest + installed plugins) and > can be overridden by tox_envlist and tempest_test_regex (example shown by > tosky ) the same way devstack job does. > Luigi helped me out on #openstack-qa earlier today and reviewed https://review.opendev.org/#/c/725098/ -- thanks! > -gmann > > > > > > > > > > > In patch set 6 [2], I tried to disable Tempest completely. I did so by > > > disabling "tempest" in devstack_services and set ENABLE_TEMPEST=False > in > > > the Grenade plugin settings file. Yet, Tempest tests still ran. > Looking at > > > the grenade job definition, it includes the tempest role -- is this > > > correct? I would expect Tempest to run only if ENABLE_TEMPEST=True. > > > > That variable only contronls the smoke tests executed post-upgrade > (currently > > disabled), not the job-specific tempest tests. I realized now I have a > long > > standing note to fix that, but it's not critical anyway in order to > implement > > your requirement. > > > > Please remember to use the "native-zuulv3-migration" topic for those > kind of > > changes (they are part of the more general "removal of legacy jobs" > goal for > > Victoria). > > > > Ciao > > -- > > Luigi > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon May 4 16:06:23 2020 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 4 May 2020 18:06:23 +0200 Subject: [ironic][ptg] Virtual PTG - Victoria In-Reply-To: References: Message-ID: Hello Ironicers! We will be using the following slots for our discussions: from Jun 02 till Jun 05, starting at 14 UTC till 16 UTC (Room 17 - Queens). You can check the slots in [1]. Thank you! [1] https://ethercalc.openstack.org/126u8ek25noy On Fri, Apr 24, 2020 at 11:25, Iury Gregory wrote: > Hello Ironicers! > > We need to choose the slots for the Virtual PTG[1] on the official > schedule [2]. 
> So, if you are interested in participating you need to do two things: > 1- Add your name and topics to the etherpad [3]. > 2- Fill out the doodle in [4]. > > Thank you! > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014126.html > [2] https://ethercalc.openstack.org/126u8ek25noy > [3] https://etherpad.opendev.org/p/Ironic-VictoriaPTG-Planning > [4] https://doodle.com/poll/mbpr2x7z3t5hqec6 > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 4 17:44:22 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 May 2020 12:44:22 -0500 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> Message-ID: <171e0c9edd3.c6a6849157319.1080633731079190778@ghanshyammann.com> ---- On Mon, 04 May 2020 09:36:47 -0500 Dmitry Tantsur wrote ---- > Hi, > Somewhat related: do the new jobs finally allow choosing which tests are run rather than blindly run all smoke tests available? It has been a huge problem for ironic since the introduction of upgrade testing, and we're discussing migrating away from grenade because of that. It does for the upgrade tests, in the same way you do for a devstack job: you can use tox_envlist and tempest_test_regex. The pre-upgrade tests, however, are not configurable. 
We discussed those but left them as they are, because they just verify the installation and do not really need to be run by project-specific jobs. I think we can disable the pre-upgrade tests by default and let projects enable them on a need basis via BASE_RUN_SMOKE. Will that help? -gmann > Dmitry > > On Sat, May 2, 2020 at 2:41 AM Ghanshyam Mann wrote: > Hello Everyone, > > Finally, after so many cycles, grenade base job has been migrated to zuulv3 native[1]. This is > merged in the Ussuri branch. > > Thanks to tosky for keep working on this and finish it!! > > New jobs 'grenade', 'grenade-multinode', 'grenade-postgresql' and 'grenade-forward' are > zuulv3 native now and we kept old job name 'grenade-py3' as an alias to 'grenade' so that it would > not break the gate using 'grenade-py3'. > > * Start using this new base job for your projects specific grenade jobs. > > * Migration to new job name: > The integrated template and projects are using old job name 'grenade-py3' need to move to > a new job name. Job is defined in the integrated template and also in the project's zuul.yaml for irrelevant-files. > So we need to switch both places at the same time otherwise you will be seeing the two jobs running on your gate > (master as well as on Ussuri). > > Integrated service specific template has switched to new job[2] which means you might see two jobs running > 'grenade' & 'grenade-py3' running on your gate until you change .zuul.yaml. Example: Nova did for master as well as Ussuri gate - https://review.opendev.org/#/q/I212692905a1d645cd911c2a161c13c794c0e0f4d > > It needs to be done in Ussuri also as Tempest where integrated templates are present is branchless and apply the change for Ussuri job also. 
> > [1] https://review.opendev.org/#/c/548936/ > [2] https://review.opendev.org/#/c/722551/ > > -gmann > > From ltoscano at redhat.com Mon May 4 17:46:49 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Mon, 04 May 2020 19:46:49 +0200 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> Message-ID: <1810761.CQOukoFCf9@whitebase.usersys.redhat.com> On Monday, 4 May 2020 16:36:47 CEST Dmitry Tantsur wrote: > Hi, > > Somewhat related: do the new jobs finally allow choosing which tests are > run rather than blindly run all smoke tests available? It has been a huge > problem for ironic since the introduction of upgrade testing, and we're > discussing migrating away from grenade because of that. > The set of smoke tests is unchanged, mirroring the old behavior (tox -esmoke). It was not part of the migration, which, apart from using a more Zuul-like workflow, tried not to change anything else. You can change the set of tests executed after the end of grenade, the non-smoke ones. If tuning the set of smoke tests would be useful, it can be discussed and planned for later. Ciao -- Luigi From allison at openstack.org Mon May 4 18:07:24 2020 From: allison at openstack.org (Allison Price) Date: Mon, 4 May 2020 13:07:24 -0500 Subject: OpenStack Ussuri Community Meetings Message-ID: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org> Hi everyone, The OpenStack Ussuri release is only a week and a half away! Members of the TC and project teams are holding two community meetings: one on Wednesday, May 13 and one on Thursday, May 14. Here, they will share some of the release highlights and project features. 
Join us:

Wednesday, May 13 at 1400 UTC
Moderator: Mohammed Naser, TC
Presenters / Open for Questions:
Slawek Kaplonski, Neutron
Michael Johnson, Octavia
Goutham Pacha Ravi, Manila
Mark Goddard, Kolla
Balazs Gibizer, Nova
Brian Rosmaita, Cinder

Thursday, May 14 at 0200 UTC
Moderator: Rico Lin, TC
Presenters / Open for Questions:
Michael Johnson, Octavia
Goutham Pacha Ravi, Manila
Rico Lin, Heat
Feilong Wang, Magnum
Brian Rosmaita, Cinder

See you there! Allison

Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: iCal-20200501-161046.ics Type: text/calendar Size: 1984 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: iCal-20200501-161115.ics Type: text/calendar Size: 1985 bytes Desc: not available URL: From ignaziocassano at gmail.com Fri May 1 16:55:55 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 1 May 2020 18:55:55 +0200 Subject: [stein][neutron] gratuitous arp In-Reply-To: <7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com> Message-ID: Thanks, have a nice long weekend Ignazio On Fri, May 1, 2020 at 18:47, Sean Mooney wrote: > On Fri, 2020-05-01 at 18:34 +0200, Ignazio Cassano wrote: > > Hello Sean, > > to be honest I did not understand what is the difference between the first > > and second patch, but it is due to my poor skill and my poor English. > no worries. the first patch is the actual change to add the new config > option. 
> the second patch is just a change to force our CI jobs to enable the > config option. we probably don't want to do that permanently, which is why i have marked it > [DNM] or "do not merge": it's just there to prove the first patch is correct. > > Anycase I would like to test it. I saw I can download files : > > workaround.py and neutron.py and there is a new option > > force_legacy_port_binding. > > How can I test? > > I must enable the new option under workaround section in the in nova.conf > > on compute nodes setting it to true? > yes, that is correct: if you apply the first patch, you need to set the new > config option in the workarounds section of nova.conf on the > controller. specifically, the conductor > needs to have this set. i don't think this is needed on the compute nodes; > at least, it should not > need to be set in the compute node nova.conf for the live migration issue. > > The files downloaded (from the first or second patch?) must be copied on > > compute nodes under /usr/lib/python2.7/site_packages > > nova/conf/workaround.py and nova/network/neutron.py and then restart nova > > compute service? > > once we have merged this in master, i'll backport it to the different > openstack versions back to rocky. > if you want to test it before then, the simplest thing to do is just > manually make the same change, > unless you are using devstack, in which case you could cherry-pick the > change to whatever branch you are testing. > > It should work only for new instances or also for running instances? > > it will apply to all instances. what the change is doing is disabling our > detection of > neutron support for the multiple port binding workflow. we still have > compatibility code for supporting old versions of > neutron. we probably should remove that at some point, but when the config > option is set we will ignore whether you are using > old or new neutron and just fall back to how we did things before rocky. 
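Put concretely, the opt-in described above would look like this in the controller's nova.conf. The option name is the one given in the thread; the `[workarounds]` section name is an assumption based on where nova keeps its other workaround options.

```ini
# nova.conf on the controller (the conductor needs this); per the thread
# it should not be required in the compute nodes' nova.conf
[workarounds]
force_legacy_port_binding = True
```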
> in principle that should make live migration have more packet loss, but > since people have reported it actually fixes the > issue in this case, i have written the patch so you can opt in to the old > behaviour. > > if that works for you in your testing, we can continue to keep the > workaround and old compatibility code until we resolve > the issue when using the multiple port binding flow. > > Sorry for disturbing. > > don't be sorry, it's fine to ask questions. just be aware it's a long > weekend, so i will not be working monday, > but i should be back on tuesday. i'll update the patch then with a release > note and a unit test, and hopefully i can get > some cores to review it. > > Best Regards > > Ignazio > > > > > > On Wed, Apr 29, 2020 at 19:49, Sean Mooney wrote: > > > On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > > > > Many thanks. > > > > Please keep in touch. > > > > > > here are the two patches. > > > the first https://review.opendev.org/#/c/724386/ is the actual change to > > > add the new config opition > > > this needs a release note and some tests but it shoudl be functional hence > > > the [WIP] > > > i have not enable the workaround in any job in this patch so the ci run > > > will assert this does not break > > > anything in the default case > > > > > > the second patch is https://review.opendev.org/#/c/724387/ which enables > > > the workaround in the multi node ci jobs > > > and is testing that live migration exctra works when the workaround is > > > enabled. > > > > > > this should work as it is what we expect to happen if you are using a > > > moderne nova with an old neutron. > > > its is marked [DNM] as i dont intend that patch to merge but if the > > > workaround is useful we migth consider enableing > > > it for one of the jobs to get ci coverage but not all of the jobs. 
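To make the effect of the flag concrete, here is a toy sketch of the opt-out logic Sean describes: when the workaround is set, nova ignores whatever neutron advertises and falls back to the pre-Rocky single-binding path. All names here are illustrative, not nova's real internals; 'binding-extended' is used as a stand-in for the extension check.

```python
# Toy model of the workaround semantics described in the thread.
# Function and extension names are illustrative, not nova's actual code.

def use_multiple_port_bindings(neutron_extensions, force_legacy_port_binding=False):
    """Return True if the Rocky+ multiple-port-binding live-migration
    flow should be used, False to fall back to the pre-Rocky path."""
    if force_legacy_port_binding:
        # Operator opted out via the workaround: ignore neutron's capabilities.
        return False
    return 'binding-extended' in neutron_extensions

# New neutron, workaround unset: the multiple-port-binding flow is used.
print(use_multiple_port_bindings({'binding-extended'}))        # True
# Same neutron with the workaround set: legacy path, as before Rocky.
print(use_multiple_port_bindings({'binding-extended'}, True))  # False
```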
> > > > > > > > > > > > > Ignazio > > > > > > > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney < > smooney at redhat.com > > > > > > > > ha scritto: > > > > > > > > > so bing pragmatic i think the simplest path forward given my other > > > > > > patches > > > > > have not laned > > > > > in almost 2 years is to quickly add a workaround config option to > > > > > > disable > > > > > mulitple port bindign > > > > > which we can backport and then we can try and work on the actual > fix > > > > > > after. > > > > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that > > > > > > shoudl > > > > > serve as a workaround > > > > > for thos that hav this issue but its a regression in functionality. > > > > > > > > > > i can create a patch that will do that in an hour or so and > submit a > > > > > followup DNM patch to enabel the > > > > > workaound in one of the gate jobs that tests live migration. > > > > > i have a meeting in 10 mins and need to finish the pacht im > currently > > > > > updating but ill submit a poc once that is done. > > > > > > > > > > im not sure if i will be able to spend time on the actul fix which > i > > > > > proposed last year but ill see what i can do. > > > > > > > > > > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > > > > PS > > > > > > I have testing environment on queens,rocky and stein and I can > make > > > > > > test > > > > > > as you need. 
> > > > > > Ignazio > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > > > > ignaziocassano at gmail.com> ha scritto: > > > > > > > > > > > > > Hello Sean, > > > > > > > the following is the configuration on my compute nodes: > > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > > 
centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > > > > > > > > > > As far as firewall driver > > > > > > > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on > stein > > > > > > > > > > testing > > > > > > > environment and the > > > > > > > same firewall driver. > > > > > > > Live migration on provider network on queens works fine. > > > > > > > It does not work fine on rocky and stein (vm lost connection > after > > > > > > it > > > > > > > > > > is > > > > > > > migrated and start to respond only when the vm send a network > > > > > > packet , > > > > > > > > > > for > > > > > > > example when chrony pools the time server). > > > > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > > > > > > > smooney at redhat.com> > > > > > > > ha scritto: > > > > > > > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > > > > Hello, some updated about this issue. > > > > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > > > > > > > If you read the discussion, someone tells that the garp > must be > > > > > > > > > > sent by > > > > > > > > > qemu during live miration. > > > > > > > > > If this is true, this means on rocky/stein the > qemu/libvirt are > > > > > > > > > > bugged. > > > > > > > > > > > > > > > > it is not correct. 
> > > > > > > > qemu/libvir thas alsway used RARP which predates GARP to > serve as > > > > > > > > > > its mac > > > > > > > > learning frames > > > > > > > > instead > > > > > > > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > > > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > > > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > > > > but was fixed by > > > > > > > > > > > > > > > > > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > > > > can you confirm you are not using the broken 2.6.0 release > and > > > > > > are > > > > > > > > > > using > > > > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > > > > > > > libvirt/qemu > > > > > > > > > packages I installed on queens (I updated compute and > > > > > > controllers > > > > > > > > > > node > > > > > > > > > > > > > > > > on > > > > > > > > > queens for obtaining same libvirt/qemu version deployed on > > > > > > rocky > > > > > > > > > > and > > > > > > > > > > > > > > > > stein). > > > > > > > > > > > > > > > > > > On queens live migration on provider network continues to > work > > > > > > > > > > fine. > > > > > > > > > On rocky and stein not, so I think the issue is related to > > > > > > > > > > openstack > > > > > > > > > components . > > > > > > > > > > > > > > > > on queens we have only a singel prot binding and nova blindly > > > > > > assumes > > > > > > > > that the port binding details wont > > > > > > > > change when it does a live migration and does not update the > xml > > > > > > for > > > > > > > > > > the > > > > > > > > netwrok interfaces. 
> the port binding is updated after the migration is complete in
> post_livemigration. In Rocky+, Neutron optionally uses the multiple
> port bindings flow to prebind the port to the destination, so it can
> update the XML if needed, and if post-copy live migration is enabled
> it will asynchronously activate the destination port binding before
> post_livemigration, shortening the downtime.
>
> If you are using the iptables firewall, os-vif will have precreated
> the OVS port and the intermediate Linux bridge before the migration
> started, which allows Neutron to wire it up (put it on the correct
> VLAN and install security groups) before the VM completes the
> migration.
>
> If you are using the OVS firewall, os-vif still precreates the OVS
> port, but libvirt deletes it and recreates it too. As a result there
> is a race when using the openvswitch firewall that can result in the
> RARP packets being lost.
>
> > Best Regards
> > Ignazio
> >
> > On Mon, Apr 27, 2020 at 19:50, Sean Mooney <smooney at redhat.com>
> > wrote:
> >
> > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote:
> > > > Hello, I have this problem with rocky or newer with
> > > > iptables_hybrid firewall. So, can I solve it using post-copy
> > > > live migration?
> > >
> > > So this behavior has always been how Nova worked, but in Rocky the
> > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html
> > > spec introduced the ability to shorten the outage by pre-binding
> > > the port and activating it when the VM is resumed on the
> > > destination host, before we get to post live migrate.
> > >
> > > This reduces the outage time, although it can't be fully
> > > eliminated, as some level of packet loss is always expected when
> > > you live migrate.
> > >
> > > So yes, enabling post-copy live migration should help, but be
> > > aware that if a network partition happens during a post-copy live
> > > migration, the VM will crash and need to be restarted. It is
> > > generally safe to use and will improve the migration performance,
> > > but unlike pre-copy migration, if the guest resumes on the
> > > destination and a memory page has not been copied yet, it must
> > > wait for the page to be copied and retrieve it from the source
> > > host. If the connection to the source host is interrupted then the
> > > VM can't do that, and the migration will fail and the instance
> > > will crash. If you are using pre-copy migration and there is a
> > > network partition during the migration, the migration will fail
> > > but the instance will continue to run on the source host.
> > >
> > > So while I would still recommend using it, it is just good to be
> > > aware of that behavior change.
> > >
> > > > Thanks
> > > > Ignazio
> > > >
> > > > On Mon, Apr 27, 2020 at 17:57, Sean Mooney <smooney at redhat.com>
> > > > wrote:
> > > >
> > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano wrote:
> > > > > > Hello, I have a problem on stein neutron. When a vm migrates
> > > > > > from one node to another, I cannot ping it for several
> > > > > > minutes. If in the vm I put a script that pings the gateway
> > > > > > continuously, the live migration works fine and I can ping
> > > > > > it. Why does this happen? I read something about gratuitous
> > > > > > arp.
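The post-copy behavior discussed in this thread maps to a nova.conf flag
on the compute nodes. A minimal sketch, assuming the libvirt driver;
verify the option name against the Nova configuration reference for your
release before relying on it:

```ini
# Illustrative nova.conf fragment for the compute nodes (assumption:
# libvirt driver). It allows QEMU to switch a live migration to
# post-copy mode, shortening downtime at the cost described in this
# thread: the guest crashes if the source host is lost mid-copy.
[libvirt]
live_migration_permit_post_copy = True
```

Typically this needs to be set (and nova-compute restarted) on both the
source and destination hosts before post-copy can be negotiated.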
> > > > > qemu does not use gratuitous ARP but instead uses an older
> > > > > protocol called RARP to do MAC address learning.
> > > > >
> > > > > What release of OpenStack are you using, and are you using the
> > > > > iptables firewall or the openvswitch firewall?
> > > > >
> > > > > If you are using openvswitch there is nothing we can do until
> > > > > we finally delegate VIF plugging to os-vif. Currently libvirt
> > > > > handles interface plugging for kernel OVS when using the
> > > > > openvswitch firewall driver.
> > > > > https://review.opendev.org/#/c/602432/ would address that, but
> > > > > it and the neutron patch https://review.opendev.org/#/c/640258
> > > > > are rather outdated. While libvirt is plugging the VIF there
> > > > > will always be a race condition where the RARP packets sent by
> > > > > qemu, and then the MAC learning packets, will be lost.
> > > > >
> > > > > If you are using the iptables firewall and you have OpenStack
> > > > > Rocky or later, then if you enable post-copy live migration it
> > > > > should reduce the downtime. In this configuration we do not
> > > > > have the race between neutron and libvirt, so the RARP packets
> > > > > should not be lost.
> > > > >
> > > > > > Please, help me?
> > > > > > Any workaround, please?
> > > > > >
> > > > > > Best Regards
> > > > > > Ignazio

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org Mon May 4 21:00:11 2020
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 4 May 2020 23:00:11 +0200
Subject: [puppet] Configuring Debian's uwsgi for each service
In-Reply-To: 
References: <9f524f08-9f03-f670-b5bc-47806b0a7f60@debian.org> <1588537076621.82076@binero.com> <31d6ecf7-be47-6425-b6dc-75184bc29c42@debian.org>
Message-ID: <8324a82a-a69e-2c22-9a8c-886888b86c8b@debian.org>

On 5/4/20 3:37 PM, Luke Short wrote:
> Hey folks,
>
> From a RHEL/CentOS perspective, uWSGI is also not packaged in the main
> repositories, meaning it is effectively unsupported as well. It is in
> the Extra Packages for Enterprise Linux (EPEL) repository, but we do
> not support mixing our RDO (OpenStack) packages with EPEL, as it will
> almost always guarantee issues with dependencies. In the TripleO
> community, because of this, we have had to stick with using Apache's
> WSGI module for serving the OpenStack API endpoints.
>
> I do think the proposal sounds great, and if anyone wants to work on
> adding the Puppet bits to support uWSGI then I do not believe anyone
> will stop you. The real-world testing and usage of it by other
> distributions may be minimal due to these packaging and support
> constraints.
> Alternatively, you may also want to look into OpenStack-Ansible, as
> they heavily use uWSGI with Ubuntu-based containers managed by
> Ansible.
>
> Sincerely,
>     Luke Short

Hi,

Ok, so we may as well forget having support for Red Hat, unless the
situation changes there too. IMO, it's a shame that uwsgi doesn't have
enough support in downstream distros. :(

Anyway, let's move forward, since it looks like everybody is happy about
my proposal. One of the problems is that I've added some unit tests, but
they can't be launched because currently there's no Debian testing
activated. Hopefully, I'll be able to work again on having Debian gate
on the OpenDev CI, and have a colleague to co-maintain it. Does anyone
have an idea on how to fix the situation before this happens?

I have tested this patch locally: https://review.opendev.org/725065
tweaking the number of processes with this code:

neutron_uwsgi_config { 'uwsgi/processes':
  value => 12;
}

and it worked as expected (ie: the number of processes switches from 8
to 12, which is what I wanted). The only problem is that I would expect
the neutron-api service to be restarted if one of the
neutron_uwsgi_config resources is changed, and this didn't happen. How
can I fix this? Also, how can I make sure resources are defined only
once, like with neutron_config?

Cheers,

Thomas Goirand (zigo)

From kendall at openstack.org Mon May 4 21:38:45 2020
From: kendall at openstack.org (Kendall Waters)
Date: Mon, 4 May 2020 16:38:45 -0500
Subject: [all] Virtual PTG Registration Live
Message-ID: <846A4E3E-2877-4C99-8527-FCEFF08F0C7F@openstack.org>

Hey everyone,

Registration for the June 2020 Virtual PTG is now live! It is free, but
it's still extremely important that you register, as this is primarily
so we have a way to contact attendees with information about the event
as we figure out the rest of the details. So please take a minute to
register: https://virtualptgjune2020.eventbrite.com

Let us know if you have any questions!
-the Kendalls (diablo_rojo & wendallkaters) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon May 4 22:14:19 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 4 May 2020 15:14:19 -0700 Subject: [all][PTL][TC] PTG Signup Last Call Message-ID: Hello! In an effort to get a schedule set and posted on the website and so that you can all block your calendars for the event, we ask that *teams finish signing up for time by Sunday, May 10th at 7:00 UTC*. We have done some consolidation to save room resources (as we will have to set up/pay for rooms ahead of time) so if you are in the list below, please note that your room may have changed, but the timing should be the same. Also note, some teams may be set to switch rooms throughout the week. Currently Signed Up Teams: Airship Automation SIG Cinder Edge Computing Group First Contact SIG Interop WG Ironic Glance Heat Horizon Kata Containers Kolla Manila Monasca Multi-Arch SIG Neutron Nova Octavia OpenDev OpenStackAnsible OpenStackAnsibleModules OpenStack-Helm Oslo QA Scientific SIG Security SIG Tacker TripleO Thanks! -The Kendalls(diablo_rojo & wendallkaters) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon May 4 22:15:35 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 May 2020 22:15:35 +0000 Subject: [puppet] Configuring Debian's uwsgi for each service In-Reply-To: <8324a82a-a69e-2c22-9a8c-886888b86c8b@debian.org> References: <9f524f08-9f03-f670-b5bc-47806b0a7f60@debian.org> <1588537076621.82076@binero.com> <31d6ecf7-be47-6425-b6dc-75184bc29c42@debian.org> <8324a82a-a69e-2c22-9a8c-886888b86c8b@debian.org> Message-ID: <20200504221535.x6ix2zgcf2pxhxuu@yuggoth.org> On 2020-05-04 23:00:11 +0200 (+0200), Thomas Goirand wrote: [...] > One of the problems is that I've added some unit tests, but they can't > be launch because currently there's no Debian testing activated. 
> Hopefully, I'll be able to work again on having Debian gate on the
> OpenDev CI, and have a colleague to co-maintain it. Does anyone
> have an idea on how to fix the situation before this happens?
[...]

Assistance keeping the image builds working as things change in relevant
distros is of course highly appreciated, and even necessary. That said,
we have debian-buster and debian-buster-arm64 images currently that you
can run jobs on, and the image updates seem to be in working order. We
can probably get some debian-bullseye or debian-sid images implemented
too if there's sufficient demand.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From marcin.juszkiewicz at linaro.org Tue May 5 06:08:31 2020
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Tue, 5 May 2020 08:08:31 +0200
Subject: OpenStack Ussuri Community Meetings
In-Reply-To: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org>
References: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org>
Message-ID: <92450300-0f32-af04-5131-a96d1b24fcf9@linaro.org>

On 04.05.2020 at 20:07, Allison Price wrote:
> The OpenStack Ussuri release is only a week and a half away! Members
> of the TC and project teams are holding two community meetings: one
> on Wednesday, May 13 and Thursday, May 14. Here, they will share some
> of the release highlights and project features.

It would be nice to post the meeting details/links as well.

From tim.bell at cern.ch Tue May 5 07:02:43 2020
From: tim.bell at cern.ch (Tim Bell)
Date: Tue, 5 May 2020 09:02:43 +0200
Subject: OpenStack Ussuri Community Meetings
In-Reply-To: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org>
References: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org>
Message-ID: <422BE6AE-ACCC-427D-8C97-5142F5FACA70@cern.ch>

Thanks for organising this. Will recordings / slides be made available?
Tim > On 4 May 2020, at 20:07, Allison Price wrote: > > Hi everyone, > > The OpenStack Ussuri release is only a week and a half away! Members of the TC and project teams are holding two community meetings: one on Wednesday, May 13 and Thursday, May 14. Here, they will share some of the release highlights and project features. > > Join us: > > Wednesday, May 13 at 1400 UTC > Moderator > Mohammed Naser, TC > Presenters / Open for Questions > Slawek Kaplonski, Neutron > Michael Johnson, Octavia > Goutham Pacha Ravi, Manila > Mark Goddard, Kolla > Balazs Gibizer, Nova > Brian Rosmaita, Cinder > > Thursday, May 14 at 0200 UTC > Moderator: Rico Lin, TC > Presenters / Open for Questions > Michael Johnson, Octavia > Goutham Pacha Ravi, Manila > Rico Lin, Heat > Feilong Wang, Magnum > Brian Rosmaita, Cinder > > > See you there! > Allison > > > > > > Allison Price > OpenStack Foundation > allison at openstack.org > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Tue May 5 07:13:49 2020 From: ralonsoh at redhat.com (Rodolfo Alonso) Date: Tue, 05 May 2020 08:13:49 +0100 Subject: [neutron][QoS] IRC meeting cancelled on May 5 Message-ID: <884dfaee27178d11987ff0adf243c40774583ede.camel@redhat.com> Hello: Due to the lack of agenda, I'm going to cancel the Neutron QoS meeting today. Regards. Rodolfo Alonso (ralonsoh) From aj at suse.com Tue May 5 07:35:58 2020 From: aj at suse.com (Andreas Jaeger) Date: Tue, 5 May 2020 09:35:58 +0200 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> Message-ID: On 02.05.20 02:39, Ghanshyam Mann wrote: > Hello Everyone, > > Finally, after so many cycles, grenade base job has been migrated to zuulv3 native[1]. This is > merged in the Ussuri branch. 
> Thanks to tosky for keeping at this and finishing it!!
>
> The new jobs 'grenade', 'grenade-multinode', 'grenade-postgresql' and
> 'grenade-forward' are zuulv3 native now, and we kept the old job name
> 'grenade-py3' as an alias to 'grenade' so that it would not break
> gates using 'grenade-py3'.
>
> * Start using this new base job for your project-specific grenade
>   jobs.
>
> * Migration to the new job name:
> The integrated template and the projects using the old job name
> 'grenade-py3' need to move to the new job name. The job is defined in
> the integrated template and also in the project's zuul.yaml for
> irrelevant-files, so we need to switch both places at the same time,
> otherwise you will see the two jobs running on your gate (master as
> well as Ussuri).
>
> The integrated service-specific template has switched to the new
> job[2], which means you might see two jobs, 'grenade' &
> 'grenade-py3', running on your gate until you change .zuul.yaml.
> Example: Nova did this for the master as well as the Ussuri gate -
> https://review.opendev.org/#/q/I212692905a1d645cd911c2a161c13c794c0e0f4d
>
> It needs to be done in Ussuri also, as Tempest (where the integrated
> templates are present) is branchless, so apply the change for the
> Ussuri job as well.

The template integrated-gate-py3 has been introduced in the Stein cycle
as far as I can see, so we do need to update the extra grenade-py3 lines
to grenade where they exist in stein and train as well to avoid the
duplicated runs, don't we?

Andreas

> [1] https://review.opendev.org/#/c/548936/
> [2] https://review.opendev.org/#/c/722551/
>
> -gmann
>
--
Andreas Jaeger aj at suse.com Twitter: jaegerandi
SUSE Software Solutions Germany GmbH, Maxfeldstr.
5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From dtantsur at redhat.com Tue May 5 09:05:50 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 5 May 2020 11:05:50 +0200 Subject: [ironic][ptg] Virtual PTG - Victoria In-Reply-To: References: Message-ID: On Mon, May 4, 2020 at 6:22 PM Iury Gregory wrote: > Hello Ironicers! > > We will be using the following slots for our discussions: From Jun 02 till > Jun 05, starting at 14 UTC till 16 UTC (Room 17- Queens) > A correction: it seems that we've been moved to the Liberty room with the same time slots. Dmitry > > You can check the slots in [1]. > > Thank you! > > [1] https://ethercalc.openstack.org/126u8ek25noy > > Em sex., 24 de abr. de 2020 às 11:25, Iury Gregory > escreveu: > >> Hello Ironicers! >> >> We need to choose the slots for the Virtual PTG[1] on the official >> schedule [2]. >> So, if you are interested in participating you need to do two things: >> 1- Add your name and topics to the etherpad [3]. >> 2- Fill out the doodle in [4]. >> >> Thank you! >> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014126.html >> [2] https://ethercalc.openstack.org/126u8ek25noy >> [3] https://etherpad.opendev.org/p/Ironic-VictoriaPTG-Planning >> [4] https://doodle.com/poll/mbpr2x7z3t5hqec6 >> >> -- >> >> >> *Att[]'sIury Gregory Melo Ferreira * >> *MSc in Computer Science at UFCG* >> *Software Engineer at Red Hat Czech* >> *Social*: https://www.linkedin.com/in/iurygregory >> *E-mail: iurygregory at gmail.com * >> > > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Part of the puppet-manager-core team in OpenStack* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From licanwei_cn at 163.com Tue May 5 09:18:21 2020
From: licanwei_cn at 163.com (licanwei)
Date: Tue, 5 May 2020 17:18:21 +0800 (GMT+08:00)
Subject: [Watcher] No IRC meeting tomorrow
Message-ID: <29f55135.5431.171e4210579.Coremail.licanwei_cn@163.com>

| licanwei_cn |
| Email: licanwei_cn at 163.com |
| Signature customized by NetEase Mail Master |

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aj at suse.com Tue May 5 09:35:07 2020
From: aj at suse.com (Andreas Jaeger)
Date: Tue, 5 May 2020 11:35:07 +0200
Subject: [all][docs] docs.openstack.org: Check Ussuri Index pages
Message-ID: <29b2e02b-bad7-c12c-1ac6-71de225e7d2d@suse.com>

The Ussuri index pages [1] are now published; the default on docs.o.o
will change to Ussuri with the release. The index pages only list
entries that existed yesterday, so if a project was branched but did
not merge any change, then no Ussuri content might exist and thus the
project is not listed in the index pages. In such a case: merge the
usual bot updates, wait for the content to be published, and then
propose a change to [3]. For details see [2].

Andreas

[1] https://docs.openstack.org/ussuri/
[2] https://docs.openstack.org/doc-contrib-guide/doc-index.html
[3] https://opendev.org/openstack/openstack-manuals/src/branch/master/www/project-data/ussuri.yaml

--
Andreas Jaeger aj at suse.com Twitter: jaegerandi
SUSE Software Solutions Germany GmbH, Maxfeldstr.
5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer
GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB

From ignaziocassano at gmail.com Tue May 5 11:39:17 2020
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 5 May 2020 13:39:17 +0200
Subject: [stein][neutron] gratuitous arp
In-Reply-To: 
References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com>
Message-ID: 

Hello Sean, if you do not want to spend your time configuring a test
OpenStack environment, I am available to schedule a call where I can
share my desktop and we could test on Rocky and Stein. Let me know if
you can.
Best Regards
Ignazio

On Sat, May 2, 2020 at 17:40, Ignazio Cassano
<ignaziocassano at gmail.com> wrote:

> Hello Sean, I am continuing my test (so you'll have to read a lot :-) )
> If I understood correctly, the file neutron.py contains a patch for
> /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py for
> reading the configuration option force_legacy_port_binding.
> If it is true, it returns false.
> I patched the api.py and, inserting a LOG.info call, I saw it reads
> the variable, but it seems to do nothing and the migrated instance
> stops responding.
> Best Regards
> Ignazio
>
> On Sat, May 2, 2020 at 10:43, Ignazio Cassano
> <ignaziocassano at gmail.com> wrote:
>
>> Hello Sean,
>> I modified the nova workarounds.py to add the consoleauth code, so
>> now it does not return errors during the live migration phase, as I
>> wrote in my last email.
>> Keep in mind my Stein is from an upgrade.
>> Sorry if I am not sending all the email history here, but if the
>> message body is too big the email needs the moderator approval.
>> Anyway, I added the following code:
>>
>> cfg.BoolOpt(
>>     'enable_consoleauth',
>>     default=False,
>>     deprecated_for_removal=True,
>>     deprecated_since="18.0.0",
>>     deprecated_reason="""
>> This option has been added as deprecated originally because it is used
>> for avoiding a upgrade issue and it will not be used in the future.
>> See the help text for more details.
>> """,
>>     help="""
>> Enable the consoleauth service to avoid resetting unexpired consoles.
>>
>> Console token authorizations have moved from the ``nova-consoleauth``
>> service to the database, so all new consoles will be supported by the
>> database backend. With this, consoles that existed before database
>> backend support will be reset. For most operators, this should be a
>> minimal disruption as the default TTL of a console token is 10
>> minutes.
>>
>> Operators that have much longer token TTL configured or otherwise
>> wish to avoid immediately resetting all existing consoles can enable
>> this flag to continue using the ``nova-consoleauth`` service in
>> addition to the database backend. Once all of the old
>> ``nova-consoleauth`` supported console tokens have expired, this flag
>> should be disabled. For example, if a deployment has configured a
>> token TTL of one hour, the operator may disable the flag, one hour
>> after deploying the new code during an upgrade.
>>
>> .. note:: Cells v1 was not converted to use the database backend for
>>     console token authorizations. Cells v1 console token
>>     authorizations will continue to be supported by the
>>     ``nova-consoleauth`` service and use of the
>>     ``[workarounds]/enable_consoleauth`` option does not apply to
>>     Cells v1 users.
>>
>> Related options:
>>
>> * ``[consoleauth]/token_ttl``
>> """),
>>
>> Now the live migration starts and the instance is moved, but it
>> continues to be unreachable after the live migration.
>> It starts to respond only when it initiates a connection (for
>> example, when polling an NTP server).
>> If I disable chrony in the instance, it stops responding forever.
>> Best Regards
>> Ignazio
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amotoki at gmail.com Tue May 5 12:32:18 2020
From: amotoki at gmail.com (Akihiro Motoki)
Date: Tue, 5 May 2020 21:32:18 +0900
Subject: [neutron] bug deputy report (4/27-5/3)
Message-ID: 

Hi,

I was the bug deputy last week. Last week was relatively quiet.
Higher priority bugs have assignees. They are not release blockers,
while it would be nice to have them backported.
A bug on DVR with a flat network needs to be covered by the L3 team.

## Untriaged

* https://bugs.launchpad.net/neutron/+bug/1876092
  DVR with Flat network result in ICMP reply DUP
  New, Undecided
  It needs to be checked by DVR folks. I am not sure whether DVR with a
  flat network is unsupported by design or it is just a bug.

* https://bugs.launchpad.net/neutron/+bug/1876021
  create multiple reserved_dhcp_port doesn't work as expected
  I will try to reproduce it after the holiday week on my side is over.
## Assigned * https://bugs.launchpad.net/neutron/+bug/1875849 not ensure default security group exists when filter by project_id Medium, In Progress, the reporter is working on the fix ## In Progress * https://bugs.launchpad.net/nova/+bug/1863021 eventlet monkey patch results in assert `len(_active) == 1` AssertionError High, In Progress, a fixed is in the gate * https://bugs.launchpad.net/neutron/+bug/1876148 OVNL3RouterPlugin should add_router_interface as in RouterPluginBase Medium, In Progress * https://bugs.launchpad.net/neutron/+bug/1876094 Dnsmasq 2.81 broke neutron's DHCP service High, In Progress ## New RFEs * https://bugs.launchpad.net/neutron/+bug/1875516 [RFE] Allow sharing security groups as read-only will be discussed in coming drivers meetings * https://bugs.launchpad.net/neutron/+bug/1875852 [RFE] [OVN] SRIOV routing on VLAN Tenant networks ## Fix Released * https://bugs.launchpad.net/neutron/+bug/1875865 SRIOV OVN metadata namespaces not cleaned up after ports are unbounded * https://bugs.launchpad.net/bugs/1875344 Cleanup in neutron_tempest_plugin.api.admin.test_external_network_extension.ExternalNetworksRBACTestJSON may fail in dvr deployments From jonathan.rosser at rd.bbc.co.uk Tue May 5 13:50:00 2020 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Tue, 5 May 2020 14:50:00 +0100 Subject: [openstack-ansible] dropping suse support In-Reply-To: References: Message-ID: <95eb3d95-df82-4222-2b1a-078a7ce92771@rd.bbc.co.uk> On 08/03/2019 16:36, Mohammed Naser wrote: > Hi everyone, > > I've been trying to avoid writing this email for the longest time > ever. It's really painful to have to reach this state, but I think > we've hit a point where we can no longer maintain support for SUSE > within OpenStack Ansible. > > Unfortunately, with the recent layoffs, the OSA team has taken a huge > hit in terms of contributors which means that there is far less > contributors within the project. 
This means that we have less > resources to go around and it forces us to focus on a more functional, > reliable and well tested set of scenarios. > > Over the past few cycles, there has been effort to add SUSE support to > OpenStack Ansible, during the time that we had maintainers, it was > great and SUSE issues were being fixed promptly. In addition, due to > the larger team at the time, we found ourselves having some extra time > where we can help unbreak other gates. Jesse used to call this a > "labour of love", which I admired at the time and hoped we continue to > do as much as we can of. > > However, the lack of a committed maintainer for OpenSUSE has resulted > in constantly failing jobs[1][2] (which were moved to non-voting, > wasting CI resources as no one fixed them). In addition, it's causing > several gate blocks for a few times with no one really finding the > time to clean them up. > > We are resource constrained at this point and we need the resource to > go towards making the small subset of supported features functional > (i.e. CentOS/Ubuntu). We struggle with that enough, and there seems > to be no deployers that are running SUSE in real life at the moment > based on bugs submitted. > > With that, I propose that we drop SUSE support this cycle. If anyone > would like to volunteer to maintain it, we can review that option, but > that would require a serious commitment as we've had maintainers step > off and it hurts the velocity of the project as no one can merge code > anymore. > > ..Really wish I didn't have to write this email > Mohammed > > [1]: http://zuul.opendev.org/t/openstack/builds?job_name=openstack-ansible-functional-opensuse-150 > [2]: http://zuul.opendev.org/t/openstack/builds?job_name=openstack-ansible-functional-opensuse-423 > > It's now over 12 months since Mohammed posted this and the situation remains as described. 
It feels like time to reduce the OSA CI surface area to enable patches to merge and release contributor cycles for new work such as support for Ubuntu Focal and Centos8. Two alternative patches are proposed, one to remove support immediately [1] and another to move the current jobs to non-voting with a view to removing them entirely for the Victoria cycle [2]. [1] https://review.opendev.org/725541 [2] https://review.opendev.org/725598 Jon. From ekultails at gmail.com Tue May 5 13:53:21 2020 From: ekultails at gmail.com (Luke Short) Date: Tue, 5 May 2020 09:53:21 -0400 Subject: [TripleO][Rdo][Train] Deploy OpenStack using only provisioning network In-Reply-To: References: Message-ID: Hey Ruslanas, You are be able to customize the networks to all use the same CIDR and set different IP allocation pools from within the it. Have a look at this Create Network Environment File for reference of the parameters you want to change. That has the disadvantage of allocating many IPs from the same subnet when, in theory, you should technically be able to use one. I am not sure if/how that is possible in TripleO. The ask we hear from most of our operators is usually to allow more separation of networks (not less). I hope this helps point you in the right direction! Sincerely, Luke Short On Sun, May 3, 2020 at 6:21 PM Ruslanas Gžibovskis wrote: > Hi all, > > I am doing some testing and will do some deployment on some remote hosts. > > Remote hosts will use provider network only specific for each compute. > > I was thinking, do I really need all the External, InternalAPI, Storage, > StorageManagemnt, Tenant networks provided to all of the nodes? Maybe I > could use a Provision network for all of that, and make swift/glance copy > on all computes to provide local images. > > I understand, if I do not have tenant network, all VM's in same project > but in different sites, will not see each other, but it is ok at > the moment. 
> > Thank you for your help > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Tue May 5 13:56:01 2020 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 5 May 2020 15:56:01 +0200 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: <20200502133714.awqr4e35u5feotin@yuggoth.org> Message-ID: <5036f13c-aca0-a604-3905-cb1c53e2c835@redhat.com> Let's for a minute imagine that each of the raised concerns is addressable. And as a thought experiment, let's put here WHAT has to be addressed for Kolla w/o the need of abandoning it for a custom tooling: On 03.05.2020 21:26, Alex Schultz wrote: > On Sat, May 2, 2020 at 7:45 AM Jeremy Stanley wrote: >> >> On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote: >>> As you may have seen, the TripleO project has been testing the >>> idea of building container images using a simplified toolchain >> [...] >> >> Is there an opportunity to collaborate around the proposed plan for >> publishing basic docker-image-based packages for OpenStack services? >> >> https://review.opendev.org/720107 >> >> Obviously you're aiming at solving this for a comprehensive >> deployment rather than at a packaging level, just wondering if >> there's a way to avoid having an explosion of different images for >> the same services if they could ultimately use the same building >> blocks. (A cynical part of me worries that distro "party lines" will >> divide folks on what the source of underlying files going into >> container images should be, but I'm sure our community is better >> than that, after all we're all in this together.) >> > > I think this assumes we want an all-in-one system to provide > containers. And we don't. That I think is the missing piece that > folks don't understand about containers and what we actually need. 
> 
> I believe the issue is that the overall process to go from zero to an
> application in the container is something like the following:
> 
> 1) input image (centos/ubi0/ubuntu/clear/whatever)

* support ubi8 base images

> 2) Packaging method for the application (source/rpm/dpkg/magic)

* abstract away all the packaging methods (at least above the base
image) to some (better?) DSL perhaps

> 3) dependencies provided depending on item #1 & 2
> (venv/rpm/dpkg/RDO/ubuntu-cloud/custom)

* abstract away all the dependencies (atm I can only think of go.mod &
Go's vendor packages example, sorry) to some extra DSL & CLI tooling maybe

> 4) layer dependency declaration (base -> nova-base -> nova-api,
> nova-compute, etc)

* is already fully covered above, I suppose

> 5) How configurations are provided to the application (at run time or at build)

(what is missing for the run time? it is almost perfect already)
* for the build time, is already fully covered above, i.e. extra DSL &
CLI tooling (my biased example: go mod tidy?)

> 6) How application is invoked when container is ultimately launched
> (via docker/podman/k8s/etc)

* have a better DSL to abstract away all the container runtime &
orchestration details beneath

> 7) Container build method (docker/buildah/other)

* support for buildah (more fancy abstractions and DSL extensions ofc!)

> 
> The answer to each one of these is dependent on the expectations of
> the user or application consuming these containers. Additionally this
> has to be declared for each dependent application as well
> (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost
> because it needs to support any number of combinations for each of
> these. Today TripleO doesn't use the build method provided by Kolla
> anymore because we no longer support docker. This means we only use
> Kolla to generate Dockerfiles as inputs to other processes. It should

NOTE: there are also the kolla startup/config APIs on which TripleO will
*have to* rely for the next 3-5 years or so. Their compatibility shall not
be violated.

> be noted that we also only want Dockerfiles for the downstream because
> they get rebuilt with yet another different process. So for us, we
> don't want the container and we want a method for generating the
> contents of the container.

* and again, have better pluggability to abstract away all the
downstream vs upstream specifics (btw, I'm not convinced the new custom
tooling can solve this problem in a different way, as it would still be
using better/simpler DSL & tooling)

> 
> IMHO containers are just glorified packaging (yet again and one that
> lacks ways of expressing dependencies which is really not beneficial
> for OpenStack). I do not believe you can or should try to unify the
> entire container declaration and building into a single application.
> You could rally around a few different sets of tooling that could
> provide you the pieces for consumption. e.g. A container file
> templating engine, a building engine, and a way of
> expressing/consuming configuration+execution information.
> 
> I applaud the desire to try and unify all the things, but as we've

So the final call: have a pluggable and modular design, and adjust the DSL
and tooling to meet those goals for Kolla. Then anyone who doesn't chase
unification can just set up their own module and plug it into the build
pipeline. Hint: that "new simpler tooling for TripleO" may be that
pluggable module!

> seen time and time again when it comes to deployment, configuration
> and use cases. Trying to solve for all the things ends up having a
> negative effect on the UX because of the complexity required to handle
> all the cases (look at tripleo for crying out loud). I believe it's
> time to stop trying to solve all the things with a giant hammer and
> work on a bunch of smaller nails and let folks construct their own
> hammer.
> 
> Thanks,
> -Alex
> 
> > Either way, if they can both make use of the same speculative
> > container building workflow pioneered in Zuul/OpenDev, that seems
> > like a huge win (and I gather the Kolla "krew" are considering
> > redoing their CI jobs along those same lines as well).
> > --
> > Jeremy Stanley
> 

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From ekultails at gmail.com  Tue May  5 14:18:54 2020
From: ekultails at gmail.com (Luke Short)
Date: Tue, 5 May 2020 10:18:54 -0400
Subject: [TripleO][Rdo][Train] Deploy OpenStack using only provisioning
 network
In-Reply-To: 
References: 
Message-ID: 

Hey Ruslanas,

I have a Train deployment using pre-deployed nodes and I can verify it
actually only uses the control plane network for all services and has a
single IP allocated per node. I am using the net-config-static-bridge for
my interfaces. Results may vary with Nova/Ironic-provisioned nodes.

Sincerely,
Luke Short

On Tue, May 5, 2020 at 9:53 AM Luke Short  wrote:

> Hey Ruslanas,
>
> You are able to customize the networks to all use the same CIDR and set
> different IP allocation pools from within it. Have a look at this Create
> Network Environment File
> for reference of the parameters you want to change.
>
> That has the disadvantage of allocating many IPs from the same subnet
> when, in theory, you should technically be able to use one. I am not sure
> if/how that is possible in TripleO. The ask we hear from most of our
> operators is usually to allow more separation of networks (not less). I
> hope this helps point you in the right direction!
>
> Sincerely,
> Luke Short
>
> On Sun, May 3, 2020 at 6:21 PM Ruslanas Gžibovskis 
> wrote:
>
>> Hi all,
>>
>> I am doing some testing and will do some deployment on some remote hosts.
>>
>> Remote hosts will use provider network only specific for each compute.
>>
>> I was thinking, do I really need all the External, InternalAPI, Storage,
>> StorageManagemnt, Tenant networks provided to all of the nodes? Maybe I
>> could use a Provision network for all of that, and make swift/glance copy
>> on all computes to provide local images.
>>
>> I understand, if I do not have tenant network, all VM's in same project
>> but in different sites, will not see each other, but it is ok at
>> the moment.
>>
>> Thank you for your help
>>
>> --
>> Ruslanas Gžibovskis
>> +370 6030 7030
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com  Tue May  5 14:27:38 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 05 May 2020 09:27:38 -0500
Subject: [tc][uc][all] Starting community-wide goals ideas for V series
In-Reply-To: 
References: <17016a63ba1.dc0cafe2322988.5181705946513725916@ghanshyammann.com>
 <69323bb6-c236-9634-14dd-e93736428795@debian.org>
 <20200430115135.hrgnwj272srpdbgc@skaplons-mac>
 <84ca2e53987a80ff5c04832fc31ba38120261711.camel@redhat.com>
 <30302c9b-7004-0389-1759-b5de0ce90d38@debian.org>
 <0fd8f300-5648-ca7b-578b-81a623dd2e9d@nemebean.com>
Message-ID: <171e53c2c00.cd029ef994067.4828401185913293419@ghanshyammann.com>

 ---- On Thu, 30 Apr 2020 12:59:37 -0500 Thomas Goirand  wrote ----
 > Thanks for your helpful reply.
 > 
 > On 4/30/20 5:43 PM, Ben Nemec wrote:
 > > Note that we already have systemd notification support in oslo.service:
 > > https://github.com/openstack/oslo.service/blob/master/oslo_service/systemd.py
 > > 
 > > It looks like it should behave sanely without systemd as long as you
 > > don't set NOTIFY_SOCKET in the service env.
 > 
 > Oh, so it means downstream packages must set an env var to support it?
 > 
 > The other issue I have is that there's zero documentation on who's
 > supporting it... :/

This is an issue. Can you file a bug for that so we can track and fix it?
-gmann > > > It is. Check out the doc for the debug option in Nova: > > https://docs.openstack.org/nova/latest/configuration/sample-config.html > > Right. There's really not a lot of them... :( > > Cheers, > > Thomas Goirand (zigo) > > From pierre at stackhpc.com Tue May 5 14:41:36 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 5 May 2020 16:41:36 +0200 Subject: [blazar] Updating blazar-core and blazar-release groups In-Reply-To: References: Message-ID: On Thu, 30 Apr 2020 at 23:32, Sylvain Bauza wrote: > > > > Le jeu. 30 avr. 2020 à 19:03, Pierre Riteau a écrit : >> >> Hello, >> >> I noticed in a recent blazar-specs patch [1] that lots of historical >> contributors were requested as reviewers. I assume this is because the >> blazar-core group was used (this is not a feature that I've used >> before on Gerrit). To avoid spamming people who haven't worked on >> Blazar for years, I propose to remove the following members of the >> blazar-core group: >> >> Dina Belova >> Hiroaki Kobayashi >> Nikolay Starodubtsev >> Pablo Andres Fuente >> Ryota MIBU >> Swann Croiset >> Sylvain Bauza >> > > Sadly but sure, you can remove me ;-) > Long live Climate^H^H^H^H^H^H^HBlazar ! > > Hope you are all well ! > -Sylvain > >> I would also remove Hiroaki Kobayashi from the blazar-release group. >> >> Please let me know if you have any objections. >> >> [1] https://review.opendev.org/#/c/723115/ I have just applied this change on Gerrit. Thank you all for your contributions to Climate / Blazar! 
From smooney at redhat.com Tue May 5 14:45:58 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 05 May 2020 15:45:58 +0100 Subject: [tripleo] Container image tooling roadmap In-Reply-To: <5036f13c-aca0-a604-3905-cb1c53e2c835@redhat.com> References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <5036f13c-aca0-a604-3905-cb1c53e2c835@redhat.com> Message-ID: <3f5fc27b6d0651124fa83de226fa7ca516afc9b8.camel@redhat.com> On Tue, 2020-05-05 at 15:56 +0200, Bogdan Dobrelya wrote: > Let's for a minute imagine that each of the raised concerns is > addressable. And as a thought experiment, let's put here WHAT has to be > addressed for Kolla w/o the need of abandoning it for a custom tooling: > > On 03.05.2020 21:26, Alex Schultz wrote: > > On Sat, May 2, 2020 at 7:45 AM Jeremy Stanley wrote: > > > > > > On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote: > > > > As you may have seen, the TripleO project has been testing the > > > > idea of building container images using a simplified toolchain > > > > > > [...] > > > > > > Is there an opportunity to collaborate around the proposed plan for > > > publishing basic docker-image-based packages for OpenStack services? > > > > > > https://review.opendev.org/720107 > > > > > > Obviously you're aiming at solving this for a comprehensive > > > deployment rather than at a packaging level, just wondering if > > > there's a way to avoid having an explosion of different images for > > > the same services if they could ultimately use the same building > > > blocks. (A cynical part of me worries that distro "party lines" will > > > divide folks on what the source of underlying files going into > > > container images should be, but I'm sure our community is better > > > than that, after all we're all in this together.) > > > > > > > I think this assumes we want an all-in-one system to provide > > containers. And we don't. That I think is the missing piece that > > folks don't understand about containers and what we actually need. 
> > 
> > I believe the issue is that the overall process to go from zero to an
> > application in the container is something like the following:
> > 
> > 1) input image (centos/ubi0/ubuntu/clear/whatever)
> 
> * support ubi8 base images
> 
> > 2) Packaging method for the application (source/rpm/dpkg/magic)
> 
> * abstract away all the packaging methods (at least above the base
> image) to some (better?) DSL perhaps
I'm not sure a custom DSL is the answer to any of the issues.
> 
> > 3) dependencies provided depending on item #1 & 2
> > (venv/rpm/dpkg/RDO/ubuntu-cloud/custom)
> 
> * abstract away all the dependencies (atm I can only think of go.mod &
> Go's vendor packages example, sorry) to some extra DSL & CLI tooling maybe
We have bindep, which is meant to track all binary dependencies in a
multi-distro way, so that is the solution to package dependencies.
> 
> > 4) layer dependency declaration (base -> nova-base -> nova-api,
> > nova-compute, etc)
> 
> * is already fully covered above, I suppose
> 
> > 5) How configurations are provided to the application (at run time or at build)
> 
> (what is missing for the run time? it is almost perfect already)
> * for the build time, is already fully covered above, i.e. extra DSL &
> CLI tooling (my biased example: go mod tidy?)
Kolla does it all outside the container, so this is a non-issue really. It is
done at runtime by design, which is the most flexible design and the correct
one IMO.
> 
> > 6) How application is invoked when container is ultimately launched
> > (via docker/podman/k8s/etc)
> 
> * have a better DSL to abstract away all the container runtime &
> orchestration details beneath
This has nothing to do with kolla. It is a concern for ooo or kolla-ansible,
but kolla just provides images; it does not execute them. A DSL is not useful
to address this.
> 
> > 7) Container build method (docker/buildah/other)
> 
> * support for buildah (more fancy abstractions and DSL extensions ofc!)
buildah supports the Dockerfile format as far as I am aware, so no, we don't
need a DSL. Again, I strongly think that is the wrong approach.
We just need to add a config option to the kolla-build binary and add a
second module that will invoke buildah instead of docker, so we would just
have to modify
https://github.com/openstack/kolla/blob/master/kolla/image/build.py
We should not need to modify the Dockerfile templates to support this in any
way.

If kolla-ansible wanted to also support podman it would just need to
reimplement
https://github.com/openstack/kolla-ansible/blob/master/ansible/library/kolla_docker.py
to provide the same interface but invoke podman. Similarly you could add a
config option to select which module to invoke in a task.
> 
> > 
> > The answer to each one of these is dependent on the expectations of
> > the user or application consuming these containers. Additionally this
> > has to be declared for each dependent application as well
> > (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost
> > because it needs to support any number of combinations for each of
> 
> * have better modularity: offload some of the "combinations" to
> interested 3rd party maintainers (split repos into pluggable modules)
> and their own CI/CD.
We had talked about having a kolla-infra repo for non-OpenStack services in
the past, but the effort was deemed not worth it. So non-OpenStack containers
like rabbitmq, mariadb and openvswitch could be split out, or we could use
community-provided containers, but I'm not sure this is needed.

The source vs binary distinction only applies to OpenStack services;
it does not apply to infra containers. Regardless of the name, those are
always built using distro binary packages.
> 
> > these. Today TripleO doesn't use the build method provided by Kolla
> > anymore because we no longer support docker. This means we only use
> > Kolla to generate Dockerfiles as inputs to other processes. It should
> 
> NOTE: there are also the kolla startup/config APIs on which TripleO will
> *have to* rely for the next 3-5 years or so. Their compatibility shall not
> be violated.
> 
> > be noted that we also only want Dockerfiles for the downstream because
> > they get rebuilt with yet another different process. So for us, we
> > don't want the container and we want a method for generating the
> > contents of the container.
> 
> * and again, have better pluggability to abstract away all the
> downstream vs upstream specifics (btw, I'm not convinced the new custom
> tooling can solve this problem in a different way, as it would still be
> using better/simpler DSL & tooling)
Pluggability won't help.
The downstream issue is just that we need to build the containers using
downstream repos which have patches, and in some cases we want to add or
remove dependencies based on what is supported in the product.

If we use bindep correctly, we can achieve this without needing to add
pluggability and the complexity that would involve.

If we simply have a list of bindep labels to install per image, and then
update all bindep files in the component repos to have a label per backend,
we could use a single template with default labels that, when building
downstream, could simply be overridden to use the labels we support.

For example, if nova had, say,
libvirt,vmware,xen,ceph we could install all of them by default, installing
the bindeps for libvirt, vmware, xen and ceph.
Downstream we could just enable libvirt and ceph, since we don't support xen
or vmware.

You would do this via the build config file with a list of labels per image.

In the Dockerfile you would just loop over the labels
doing "bindep | install ..."
to control what got installed.

That could be abstracted behind a macro fairly simply by either extending
the existing source config opts with a labels section
https://github.com/openstack/kolla/blob/master/kolla/common/config.py#L288-L292
or creating a similar one.
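to make that label loop concrete, here is a rough python sketch of what such
a macro could expand to (the config schema and helper below are invented for
illustration; they are not kolla's actual code):

```python
# Hypothetical sketch of the "list of bindep labels per image" idea.
# The IMAGE_LABELS schema and render_install_step() are invented for
# illustration; they are not kolla's real build config or macros.

IMAGE_LABELS = {
    # a downstream build config could simply override this list,
    # e.g. drop vmware/xen when they are not supported in the product
    "nova-compute": ["libvirt", "ceph"],
    "nova-api": [],
}

def render_install_step(labels):
    """Emit the Dockerfile RUN line a template macro could generate."""
    if not labels:
        return ""
    # "bindep -b <label>" prints the missing packages for that label;
    # piping the list into the package manager installs just those.
    cmds = ["bindep -b %s | xargs -r dnf -y install" % label
            for label in labels]
    return "RUN " + " && ".join(cmds)

print(render_install_step(IMAGE_LABELS["nova-compute"]))
```

the point is just that the per-image label list is plain data, so a
downstream override needs no plugins, only a different config file.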
We would need to create a community goal to have all services adopt and use
bindep to describe the deps for the different backends they support, but
that would be a good goal apart from this discussion.
> 
> > 
> > IMHO containers are just glorified packaging (yet again and one that
> > lacks ways of expressing dependencies which is really not beneficial
> > for OpenStack). I do not believe you can or should try to unify the
> > entire container declaration and building into a single application.
> > You could rally around a few different sets of tooling that could
> > provide you the pieces for consumption. e.g. A container file
> > templating engine, a building engine, and a way of
> > expressing/consuming configuration+execution information.
> > 
> > I applaud the desire to try and unify all the things, but as we've
> 
> So the final call: have a pluggable and modular design, and adjust the DSL
> and tooling to meet those goals for Kolla. Then anyone who doesn't chase
> unification can just set up their own module and plug it into the build
> pipeline. Hint: that "new simpler tooling for TripleO" may be that
> pluggable module!
I don't think this is the right direction, but that said, I'm not going to
be working on ooo or kolla in either case to implement my alternative.
Modularity and pluggability are not the answer in this case in my view.
Unifying and simplifying the build system so that it can be used downstream
with no overrides and minimal configuration cannot be achieved by plugins
and modules.
> 
> > seen time and time again when it comes to deployment, configuration
> > and use cases. Trying to solve for all the things ends up having a
> > negative effect on the UX because of the complexity required to handle
> > all the cases (look at tripleo for crying out loud). I believe it's
> > time to stop trying to solve all the things with a giant hammer and
> > work on a bunch of smaller nails and let folks construct their own
> > hammer.
> > 
> > Thanks,
> > -Alex
> > 
> > 
> > > Either way, if they can both make use of the same speculative
> > > container building workflow pioneered in Zuul/OpenDev, that seems
> > > like a huge win (and I gather the Kolla "krew" are considering
> > > redoing their CI jobs along those same lines as well).
> > > --
> > > Jeremy Stanley
> > 
> 

From marcin.juszkiewicz at linaro.org  Tue May  5 14:57:03 2020
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Tue, 5 May 2020 16:57:03 +0200
Subject: [tripleo] Container image tooling roadmap
In-Reply-To: <5036f13c-aca0-a604-3905-cb1c53e2c835@redhat.com>
References: <20200502133714.awqr4e35u5feotin@yuggoth.org>
 <5036f13c-aca0-a604-3905-cb1c53e2c835@redhat.com>
Message-ID: 

On 05.05.2020 at 15:56, Bogdan Dobrelya wrote:
> Let's for a minute imagine that each of the raised concerns is
> addressable. And as a thought experiment, let's put here WHAT has to
> be addressed for Kolla w/o the need of abandoning it for a custom
> tooling:

>> 1) input image (centos/ubi0/ubuntu/clear/whatever)
> 
> * support ubi8 base images

"kolla-build --base-image ubi8" has you covered. Or you can provide a
patch which will switch to ubi8 for some of the existing targets. Easy,
really.

>> 2) Packaging method for the application (source/rpm/dpkg/magic)
> 
> * abstract away all the packaging methods (at least above the base
> image) to some (better?) DSL perhaps

What is DSL? Digital Subscriber Line? Did Something Likeable? Droids
Supporting Legacy?

I decided to not follow the rest of the discussion. Let you guys invent
something interesting and working. I can just follow then.

From smooney at redhat.com  Tue May  5 15:20:49 2020
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 05 May 2020 16:20:49 +0100
Subject: [tripleo] Container image tooling roadmap
In-Reply-To: 
References: <20200502133714.awqr4e35u5feotin@yuggoth.org>
 <5036f13c-aca0-a604-3905-cb1c53e2c835@redhat.com>
Message-ID: <8ce67da78e2e1186a62c4d20d38c4c5d6f06e4af.camel@redhat.com>

On Tue, 2020-05-05 at 16:57 +0200, Marcin Juszkiewicz wrote:
> On 05.05.2020 at 15:56, Bogdan Dobrelya wrote:
> > Let's for a minute imagine that each of the raised concerns is
> > addressable. And as a thought experiment, let's put here WHAT has to
> > be addressed for Kolla w/o the need of abandoning it for a custom
> > tooling:
> > > 1) input image (centos/ubi0/ubuntu/clear/whatever)
> > 
> > * support ubi8 base images
> 
> "kolla-build --base-image ubi8" has you covered. Or you can provide a
> patch which will switch to ubi8 for some of the existing targets. Easy,
> really.
> 
> > > 2) Packaging method for the application (source/rpm/dpkg/magic)
> > 
> > * abstract away all the packaging methods (at least above the base
> > image) to some (better?) DSL perhaps
> 
> What is DSL? Digital Subscriber Line? Did Something Likeable? Droids
> Supporting Legacy?
Domain-specific language. There was a proposal a few years ago, before we
started to use macros and other features in Jinja, to create a DSL for
kolla. Utilising Jinja, which is a DSL itself, was seen as less of a
learning curve than creating a custom DSL for kolla. The Dockerfile format
is also a DSL.
> 
> I decided to not follow the rest of the discussion. Let you guys invent
> something interesting and working. I can just follow then.
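To make the "macros as a DSL" point concrete, here is a toy sketch; plain
Python functions stand in for kolla's real Jinja2 macros, so treat the
details as illustrative only:

```python
# Toy illustration of the "macros as a DSL on top of the Dockerfile DSL"
# idea. Kolla's real templates use Jinja2 macros; plain Python functions
# stand in here to keep the sketch dependency-free.

def install_packages(packages, distro="centos"):
    """A 'macro': expand a package list into a single RUN instruction."""
    pkg_cmd = {"centos": "dnf -y install",
               "ubuntu": "apt-get -y install"}[distro]
    return "RUN %s %s" % (pkg_cmd, " ".join(packages))

def render_dockerfile(base, packages, distro="centos"):
    # templates compose macros instead of hand-writing each instruction
    return "\n".join(["FROM %s" % base,
                      install_packages(packages, distro)])

print(render_dockerfile("centos:8", ["openstack-nova-api"]))
```

the macro hides the per-distro differences, which is exactly the kind of
abstraction the Jinja templates provide today.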
From aschultz at redhat.com  Tue May  5 15:23:35 2020
From: aschultz at redhat.com (Alex Schultz)
Date: Tue, 5 May 2020 09:23:35 -0600
Subject: [tripleo] Container image tooling roadmap
In-Reply-To: 
References: <20200502133714.awqr4e35u5feotin@yuggoth.org>
 <5036f13c-aca0-a604-3905-cb1c53e2c835@redhat.com>
Message-ID: 

On Tue, May 5, 2020 at 9:12 AM Marcin Juszkiewicz
 wrote:
>
> On 05.05.2020 at 15:56, Bogdan Dobrelya wrote:
> > Let's for a minute imagine that each of the raised concerns is
> > addressable. And as a thought experiment, let's put here WHAT has to
> > be addressed for Kolla w/o the need of abandoning it for a custom
> > tooling:
>
> >> 1) input image (centos/ubi0/ubuntu/clear/whatever)
> >
> > * support ubi8 base images
>
> "kolla-build --base-image ubi8" has you covered. Or you can provide a
> patch which will switch to ubi8 for some of the existing targets. Easy,
> really.
>

Yea my list wasn't saying kolla had deficiencies for anything, but
rather the core concepts that are needed for something like this.
In fact kolla does tick the boxes for all of these, however it may be
opinionated in some areas (e.g. building has to use docker) which may
not make sense for others to consume. It might also not be in the best
interest of the project to actually push support for alternative
solutions if there isn't a larger demand from the community.

> >> 2) Packaging method for the application (source/rpm/dpkg/magic)
> >
> > * abstract away all the packaging methods (at least above the base
> > image) to some (better?) DSL perhaps
>
> What is DSL? Digital Subscriber Line? Did Something Likeable? Droids
> Supporting Legacy?
>

domain-specific language. The proposal kinda includes something to
that effect that lets us define a yaml with a specific structure which
gets turned into a dockerfile equivalent.

> I decided to not follow the rest of the discussion. Let you guys invent
> something interesting and working. I can just follow then.
> Yea I feel like we're going in circles now. Feel free to follow the spec and if it makes sense to contribute it elsewhere or move it we can discuss that later. Right now we're working on something that we think addresses our specific needs without having to re-write significant portions of other projects and impacting everyone. It may not make sense for everyone, but we're investigating it for V. Thanks, -Alex From mark at stackhpc.com Tue May 5 15:42:50 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 5 May 2020 16:42:50 +0100 Subject: [tripleo] Container image tooling roadmap In-Reply-To: References: <20200502133714.awqr4e35u5feotin@yuggoth.org> <5036f13c-aca0-a604-3905-cb1c53e2c835@redhat.com> Message-ID: On Tue, 5 May 2020 at 16:24, Alex Schultz wrote: > > On Tue, May 5, 2020 at 9:12 AM Marcin Juszkiewicz > wrote: > > > > W dniu 05.05.2020 o 15:56, Bogdan Dobrelya pisze: > > > Let's for a minute imagine that each of the raised concerns is > > > addressable. And as a thought experiment, let's put here WHAT has to > > > be addressed for Kolla w/o the need of abandoning it for a custom > > > tooling: > > > > >> 1) input image (centos/ubi0/ubuntu/clear/whatever) > > > > > > * support ubi8 base images > > > > "kolla-build --base-image ubi8" has you covered. Or you can provide a > > patch which will switch to ubi8 for some of existing targets. Easy, > > really. > > > > Yea my list wasn't saying kolla had deficiencies for anything, but > rather the core concepts that are needed for something like this. > Infact kolla does tick the boxes for all of these however may be > opinionated in some areas (e.g. building has to use docker) which may > not make sense for others to consume. It might also not be in the best > interest of the project to actually push support for alternative > solutions if there isn't a larger demand from the community. Support for buildah (and podman in kolla-ansible) comes up as a request at most design sessions. 
It wouldn't be too hard to add, and wouldn't see resistance from me at least. > > > >> 2) Packaging method for the application (source/rpm/dpkg/magic) > > > > > > * abstract away all the packaging methods (at least above the base > > > image) to some (better?) DSL perhaps > > > > What is DSL? Digital Subscriber Line? Did Something Likeable? Droids > > Supporting Legacy? > > > > domain-specific language. The proposal kinda includes something to > that effect that lets us define a yaml with a specific structure which > gets turned into a dockerfile equivalent. > > > I decided to not follow rest of discussion. Let you guys invent > > something interesting and working. I can just follow then. > > > > Yea I feel like we're going in circles now. Feel free to follow the > spec and if it makes sense to contribute it elsewhere or move it we > can discuss that later. Right now we're working on something that we > think addresses our specific needs without having to re-write > significant portions of other projects and impacting everyone. It may > not make sense for everyone, but we're investigating it for V. Clearly a large rewrite of kolla would be likely to get some pushback, but I expect there are a number of places where Tripleo has worked around kolla rather than with it. The idea that Tripleo requirements should not impact kolla is wrong - it is one of two main consumers (the other being kolla-ansible), and I would like to think we would accommodate your requirements where possible, if resources are provided to implement the changes. 
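For what it's worth, the config-driven choice of build engine discussed in
this thread might amount to something like the following sketch (the class
and option names are invented, not kolla's real API):

```python
# Rough sketch of "a config option to select which build engine to use".
# The engine classes and names below are invented for illustration and
# are not kolla's actual API.

class DockerEngine:
    def build_cmd(self, ctx_dir):
        return ["docker", "build", ctx_dir]

class BuildahEngine:
    # buildah consumes the same Dockerfile format ("buildah bud"), so
    # the templates themselves would not need to change.
    def build_cmd(self, ctx_dir):
        return ["buildah", "bud", ctx_dir]

_ENGINES = {"docker": DockerEngine, "buildah": BuildahEngine}

def get_engine(name="docker"):
    """Pick the build engine named by the (hypothetical) config option."""
    if name not in _ENGINES:
        raise ValueError("unknown build engine: %s" % name)
    return _ENGINES[name]()

print(get_engine("buildah").build_cmd("docker/nova-base"))
```

Only the invocation layer differs between the engines, which is why this
kind of change should not require touching the Dockerfile templates.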
> 
> Thanks,
> -Alex
> 

From gmann at ghanshyammann.com  Tue May  5 15:43:47 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 05 May 2020 10:43:47 -0500
Subject: [all][qa] grenade zuulv3 native job available now and its migration
 plan
In-Reply-To: 
References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com>
Message-ID: <171e581e45b.e5107f4c98638.4361236318560452020@ghanshyammann.com>

 ---- On Tue, 05 May 2020 02:35:58 -0500 Andreas Jaeger  wrote ----
 > On 02.05.20 02:39, Ghanshyam Mann wrote:
 > > Hello Everyone,
 > > 
 > > Finally, after so many cycles, the grenade base job has been migrated to zuulv3 native[1]. This is
 > > merged in the Ussuri branch.
 > > 
 > > Thanks to tosky for continuing to work on this and finishing it!!
 > > 
 > > The new jobs 'grenade', 'grenade-multinode', 'grenade-postgresql' and 'grenade-forward' are
 > > zuulv3 native now, and we kept the old job name 'grenade-py3' as an alias to 'grenade' so that it would
 > > not break gates using 'grenade-py3'.
 > > 
 > > * Start using this new base job for your projects' specific grenade jobs.
 > > 
 > > * Migration to the new job name:
 > > The integrated template and the projects using the old job name 'grenade-py3' need to move to
 > > the new job name. The job is defined in the integrated template and also in the project's zuul.yaml for irrelevant-files.
 > > So we need to switch both places at the same time, otherwise you will be seeing the two jobs running on your gate
 > > (master as well as on Ussuri).
 > > 
 > > The integrated service-specific template has switched to the new job[2], which means you might see two jobs,
 > > 'grenade' & 'grenade-py3', running on your gate until you change .zuul.yaml. Example: Nova did this for the master as well as the Ussuri gate - https://review.opendev.org/#/q/I212692905a1d645cd911c2a161c13c794c0e0f4d
 > > 
 > > It needs to be done in Ussuri also, as Tempest, where the integrated templates are present, is branchless, so apply the change for the Ussuri job too.
 > 
 > The template integrated-gate-py3 has been introduced in stein cycle as
 > far as I can see, so we do need to update the extra grenade-py3 lines to
 > grenade where they exist in stein and train as well to avoid the
 > duplicated runs, don't we?

We do not need to until we backport the new grenade job to stable/train.
For the stable/train gate, only the old job is running because the new job
from the template is not found (if I am not wrong). I tested it on nova
stable/train and only a single job ran.

-gmann

 > 
 > Andreas
 > 
 > > [1] https://review.opendev.org/#/c/548936/
 > > [2] https://review.opendev.org/#/c/722551/
 > > 
 > > -gmann
 > > 
 > 
 > -- 
 >  Andreas Jaeger aj at suse.com Twitter: jaegerandi
 >   SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg
 >    (HRB 36809, AG Nürnberg) GF: Felix Imendörffer
 >    GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB
 > 

From bdobreli at redhat.com  Tue May  5 15:50:11 2020
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Tue, 5 May 2020 17:50:11 +0200
Subject: [tripleo] Container image tooling roadmap
In-Reply-To: <3f5fc27b6d0651124fa83de226fa7ca516afc9b8.camel@redhat.com>
References: <20200502133714.awqr4e35u5feotin@yuggoth.org>
 <5036f13c-aca0-a604-3905-cb1c53e2c835@redhat.com>
 <3f5fc27b6d0651124fa83de226fa7ca516afc9b8.camel@redhat.com>
Message-ID: 

On 05.05.2020 16:45, Sean Mooney wrote:
> On Tue, 2020-05-05 at 15:56 +0200, Bogdan Dobrelya wrote:
>> Let's for a minute imagine that each of the raised concerns is
>> addressable. And as a thought experiment, let's put here WHAT has to be
>> addressed for Kolla w/o the need of abandoning it for a custom tooling:
>>
>> On 03.05.2020 21:26, Alex Schultz wrote:
>>> On Sat, May 2, 2020 at 7:45 AM Jeremy Stanley wrote:
>>>>
>>>> On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote:
>>>>> As you may have seen, the TripleO project has been testing the
>>>>> idea of building container images using a simplified toolchain
>>>>
>>>> [...]
>>>> >>>> Is there an opportunity to collaborate around the proposed plan for >>>> publishing basic docker-image-based packages for OpenStack services? >>>> >>>> https://review.opendev.org/720107 >>>> >>>> Obviously you're aiming at solving this for a comprehensive >>>> deployment rather than at a packaging level, just wondering if >>>> there's a way to avoid having an explosion of different images for >>>> the same services if they could ultimately use the same building >>>> blocks. (A cynical part of me worries that distro "party lines" will >>>> divide folks on what the source of underlying files going into >>>> container images should be, but I'm sure our community is better >>>> than that, after all we're all in this together.) >>>> >>> >>> I think this assumes we want an all-in-one system to provide >>> containers. And we don't. That I think is the missing piece that >>> folks don't understand about containers and what we actually need. >>> >>> I believe the issue is that the overall process to go from zero to an >>> application in the container is something like the following: >>> >>> 1) input image (centos/ubi0/ubuntu/clear/whatever) >> >> * support ubi8 base images >> >>> 2) Packaging method for the application (source/rpm/dpkg/magic) >> >> * abstract away all the packaging methods (at least above the base >> image) to some (better?) DSL perhaps > im not sure a custome dsl is the anser to any of the issues. >> >>> 3) dependencies provided depending on item #1 & 2 >>> (venv/rpm/dpkg/RDO/ubuntu-cloud/custom) >> >> * abstract away all the dependencies (atm I can only think of go.mod & >> Go's vendor packages example, sorry) to some extra DSL & CLI tooling may be > we have bindeps which is ment to track all binary depenices in a multi disto > way. 
so that is the solution to package depencies >> >>> 4) layer dependency declaration (base -> nova-base -> nova-api, >>> nova-compute, etc) >> >> * is already fully covered above, I suppose >> >>> 5) How configurations are provided to the application (at run time or at build) >> >> (what is missing for the run time, almost a perfection yet?) >> * for the build time, is already fully covered above, i.e. extra DSL & >> CLI tooling (my biased example: go mod tidy?) > kolla does it all outside the container si this is a non issue really. its done at > runtime by design wich is the most flexible desing and the the correct one IMO >> >>> 6) How application is invoked when container is ultimately launched >>> (via docker/podman/k8s/etc) >> >> * have a better DSL to abstract away all the container runtime & >> orchestration details beneath > this has nothing to do with kolla. this is a consern for ooo or kolla ansible but kolla > just proivdes image it does not execute them. a dsl is not useful to adress this. > >> >>> 7) Container build method (docker/buildah/other) >> >> * support for buildah (more fancy abstractions and DSL extenstions ofc!) > buildah support the docker file format as far as i am aware so no we dont need a dsl > againg i stongly think that is the wrong approch. > we just need to ad a config option to the kolla-build binary and add a second module > that will invoke buildah instead of docker. so we would just have to modify > https://github.com/openstack/kolla/blob/master/kolla/image/build.py > we shoudl not need to modify the docker file templates to support this in any way. > > if kolla ansible wanted to also support podman it would jsut need to reimplment > https://github.com/openstack/kolla-ansible/blob/master/ansible/library/kolla_docker.py > to provide the same interface but invoke podman. 
similarly you could add a config > option to select which module to invoke in a task > >> >>> >>> The answer to each one of these is dependent on the expectations of >>> the user or application consuming these containers. Additionally this >>> has to be declared for each dependent application as well >>> (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost >>> because it needs to support any number of combinations for each of >> >> * have better modularity: offload some of the "combinations" to >> interested 3rd party maintainers (split repos into pluggable modules) >> and their own CI/CD. > we had talked about having a kolla-infra repo for non-openstack services in the > past but the effort was deemed not worth it. so non-openstack containers like rabbit, mariadb and openvswitch > could be split out, or we could use community-provided containers, but I'm not sure this is needed. > > the source vs binary distinction only applies to openstack services; > it does not apply to infra containers. regardless of the name, those are always built > using distro binary packages. >> >>> these. Today TripleO doesn't use the build method provided by Kolla >>> anymore because we no longer support docker. This means we only use >>> Kolla to generate Dockerfiles as inputs to other processes. It should >> >> NOTE: there are also the kolla startup/config APIs on which TripleO will >> *have to* rely for the next 3-5 years or so. Their compatibility shall not >> be violated. >> >>> be noted that we also only want Dockerfiles for the downstream because >>> they get rebuilt with yet another different process. So for us, we >>> don't want the container and we want a method for generating the >>> contents of the container. >> >> * and again, have better pluggability to abstract away all the >> downstream vs upstream specifics (btw, I'm not convinced the new custom >> tooling can solve this problem in a different way but still using >> better/simpler DSL & tooling) > pluggability won't help.
> the downstream issue is just that we need to build the containers using downstream > repos which have patches, and in some cases we want to add or remove dependencies based on > what is supported in the product. > > if we use bindep correctly, we can achieve this without needing to add pluggability and the > complexity that would involve. > > if we simply have a list of bindep labels to install per image, and then update all bindep files > in the component repos to have a label per backend, we could use a single template with default labels > that, when building downstream, could simply be overridden to use the labels we support. > > for example, if nova had say > > libvirt,vmware,xen,ceph we could install all of them by default, installing the bindeps for libvirt, vmware, xen and ceph. > downstream we could just enable libvirt and ceph since we don't support xen or vmware. > > you would do this via the build config file with a list of labels per image. > > in the Dockerfile you would just loop over the labels > doing "bindep | install ..." > to control what got installed. > > that could be abstracted behind a macro fairly simply by either extending the existing > > source config opts with a labels section > https://github.com/openstack/kolla/blob/master/kolla/common/config.py#L288-L292 > or creating a similar one. > > we would need to create a community goal to have all services adopt and use bindep > to describe the deps for the different backends they support, but that would be a good goal > apart from this discussion >> >>> >>> IMHO containers are just glorified packaging (yet again, and one that >>> lacks ways of expressing dependencies, which is really not beneficial >>> for OpenStack). I do not believe you can or should try to unify the >>> entire container declaration and building into a single application. >>> You could rally around a few different sets of tooling that could >>> provide you the pieces for consumption. e.g.
A container file >>> templating engine, a building engine, and a way of >>> expressing/consuming configuration+execution information. >>> >>> I applaud the desire to try and unify all the things, but as we've >> >> So the final call: have a pluggable and modular design, and adjust the DSL and >> tooling to meet those goals for Kolla. So that one who doesn't chase >> unification just sets up his own module and plugs it into the build >> pipeline. Hint: that "new simpler tooling for TripleO" may be that >> pluggable module! > > I don't think this is the right direction, but that said, I'm not going to be working > on ooo or kolla in either case to implement my alternative. > modularity and pluggability are not the answer in this case in my view. > unifying and simplifying the build system so that it can be used downstream with no > overrides and minimal configuration cannot be achieved by plugins and modules. What I meant is hiding the downstream vs upstream differences in configurable parameters that fit into some (probably versioned) schema. And having those YAML files, for example, sitting in a third-party repo with custom image-build CI jobs set up. Wouldn't that help at all?.. Anyway, my intention was only to give a few examples and naive suggestions to illustrate the idea. I wasn't aiming to sound right and win all prizes with a first hit, but to iterate collaboratively to clarify the real problem scope and possible alternatives, at least for the subject spec in TripleO, at most for the Kolla roadmap as well. >> >>> seen time and time again when it comes to deployment, configuration >>> and use cases. Trying to solve for all the things ends up having a >>> negative effect on the UX because of the complexity required to handle >>> all the cases (look at tripleo for crying out loud). I believe it's >>> time to stop trying to solve all the things with a giant hammer and >>> work on a bunch of smaller nails and let folks construct their own >>> hammer.
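To make the per-backend bindep label scheme floated earlier in this thread concrete, here is a toy illustration. This is emphatically not bindep's real implementation — the label syntax, helper name, and package names below are invented for the sketch:

```python
# Toy sketch of per-backend label filtering (NOT bindep's real code;
# package and label names are made up for illustration).

def select_packages(bindep_lines, enabled_labels):
    """Return packages whose labels intersect the enabled set.

    Each line looks like 'package-name [label1 label2]'; a line with
    no label list is always selected.
    """
    enabled = set(enabled_labels)
    selected = []
    for line in bindep_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "[" in line:
            pkg, _, labels = line.partition("[")
            if set(labels.rstrip("]").split()) & enabled:
                selected.append(pkg.strip())
        else:
            selected.append(line)
    return selected


nova_bindep = [
    "python3-nova",                  # no label: always installed
    "libvirt-daemon [libvirt]",
    "ceph-common [ceph]",
    "open-vm-tools [vmware]",
    "xen-tools [xen]",
]

# Upstream default: every backend enabled. Downstream: only what the
# product supports, by overriding the label list in the build config.
print(select_packages(nova_bindep, ["libvirt", "vmware", "xen", "ceph"]))
print(select_packages(nova_bindep, ["libvirt", "ceph"]))
```

In a Dockerfile template the same idea would reduce to roughly `RUN bindep -b <labels> | xargs dnf install -y`, with the per-image label list supplied by the build config — the exact bindep invocation here is an assumption, not a verified command line.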
>>> >>> Thanks, >>> -Alex >>> >>> >>> >>> >>>> Either way, if they can both make use of the same speculative >>>> container building workflow pioneered in Zuul/OpenDev, that seems >>>> like a huge win (and I gather the Kolla "krew" are considering >>>> redoing their CI jobs along those same lines as well). >>>> -- >>>> Jeremy Stanley >>> >>> >> >> > -- Best regards, Bogdan Dobrelya, Irc #bogdando From whayutin at redhat.com Tue May 5 15:50:52 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 5 May 2020 09:50:52 -0600 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: Ruslanas, If you still have the environment handy can you please file a launchpad bug with the appropriate log files attached. Thank you very much!!! On Sat, May 2, 2020 at 6:13 AM Ruslanas Gžibovskis wrote: > Hi all, > > Just for your info: > the solution was: > su -c "yum downgrade tripleo-ansible.noarch" > yum list | grep -i tripleo-ansible > tripleo-ansible.noarch 0.4.1-1.el7 > @centos-openstack-train > tripleo-ansible.noarch 0.5.0-1.el7 > centos-openstack-train > > What is wrong with *tripleo-ansible.noarch > 0.5.0-1.el7 centos-openstack-train* ??? > From aj at suse.com Tue May 5 16:01:17 2020 From: aj at suse.com (Andreas Jaeger) Date: Tue, 5 May 2020 18:01:17 +0200 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: <171e581e45b.e5107f4c98638.4361236318560452020@ghanshyammann.com> References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> <171e581e45b.e5107f4c98638.4361236318560452020@ghanshyammann.com> Message-ID: <22e55069-461b-983a-cabb-200aa402e741@suse.com> On 05.05.20 17:43, Ghanshyam Mann wrote:
> Andreas Jaeger wrote: > > The template integrated-gate-py3 has been introduced in stein cycle as > > far as I can see, so we do need to update the extra grenade-py3 lines to > > grenade where they exist in stein and train as well to avoid the > > duplicated runs, don't we? > > We do not need until we backport the grenade new job to stable/train. For stable/train gate, > only old job is running because new job from a template is not found ?(if I am not wrong). nova uses integrated-gate-compute which is defined in tempest on a branch, so the train version of the template is used in train. But integrated-gate-py3 is used on *all* branches (openstack-zuul-jobs is branch less). Let me test on glance with https://review.opendev.org/725640 , Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From allison at openstack.org Tue May 5 16:03:01 2020 From: allison at openstack.org (Allison Price) Date: Tue, 5 May 2020 11:03:01 -0500 Subject: OpenStack Ussuri Community Meetings In-Reply-To: <92450300-0f32-af04-5131-a96d1b24fcf9@linaro.org> References: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org> <92450300-0f32-af04-5131-a96d1b24fcf9@linaro.org> Message-ID: Hi Marcin, Of course, below is the information for each community call. 
Cheers, Allison Wednesday, May 13 at 1400 UTC Join Zoom Meeting https://zoom.us/j/92127254792?pwd=cW8vZnhxbDZXV1Q2ZHhVL3ZEcU9Ydz09 Meeting ID: 921 2725 4792 Password: ussuri One tap mobile +13462487799,,92127254792# US (Houston) +12532158782,,92127254792# US (Tacoma) Dial by your location +1 346 248 7799 US (Houston) +1 253 215 8782 US (Tacoma) +1 669 900 6833 US (San Jose) +1 312 626 6799 US (Chicago) +1 646 876 9923 US (New York) +1 301 715 8592 US (Germantown) Meeting ID: 921 2725 4792 Find your local number: https://zoom.us/u/agkrnP34e Thursday, May 14 at 0200 UTC Description:Join Zoom Meeting https://zoom.us/j/97231763704?pwd=eFJJbitKUmxDUUFGL1VkMlYyMDROUT09 Meeting ID: 972 3176 3704 Password: ussuri One tap mobile +13462487799,,97231763704# US (Houston) +12532158782,,97231763704# US (Tacoma) Dial by your location +1 346 248 7799 US (Houston) +1 253 215 8782 US (Tacoma) +1 669 900 6833 US (San Jose) +1 646 876 9923 US (New York) +1 301 715 8592 US (Germantown) +1 312 626 6799 US (Chicago) Meeting ID: 972 3176 3704 Find your local number: https://zoom.us/u/aegklt6Q7w > On May 5, 2020, at 1:08 AM, Marcin Juszkiewicz wrote: > > W dniu 04.05.2020 o 20:07, Allison Price pisze: >> The OpenStack Ussuri release is only a week and a half away! Members >> of the TC and project teams are holding two community meetings: one >> on Wednesday, May 13 and Thursday, May 14. Here, they will share some >> of the release highlights and project features. > > Would be nice to post meeting details/links as well. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From allison at openstack.org Tue May 5 16:03:33 2020 From: allison at openstack.org (Allison Price) Date: Tue, 5 May 2020 11:03:33 -0500 Subject: OpenStack Ussuri Community Meetings In-Reply-To: <422BE6AE-ACCC-427D-8C97-5142F5FACA70@cern.ch> References: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org> <422BE6AE-ACCC-427D-8C97-5142F5FACA70@cern.ch> Message-ID: <017C2735-2FB7-4AD3-B5AF-30730EDBFC43@openstack.org> Hi Tim, Yes, both the slides and recordings will be shared on the mailing list after the meetings. Thanks, Allison > On May 5, 2020, at 2:02 AM, Tim Bell wrote: > > Thanks for organising this. > > Will recordings / slides be made available ? > > Tim > >> On 4 May 2020, at 20:07, Allison Price > wrote: >> >> Hi everyone, >> >> The OpenStack Ussuri release is only a week and a half away! Members of the TC and project teams are holding two community meetings: one on Wednesday, May 13 and Thursday, May 14. Here, they will share some of the release highlights and project features. >> >> Join us: >> >> Wednesday, May 13 at 1400 UTC >> Moderator >> Mohammed Naser, TC >> Presenters / Open for Questions >> Slawek Kaplonski, Neutron >> Michael Johnson, Octavia >> Goutham Pacha Ravi, Manila >> Mark Goddard, Kolla >> Balazs Gibizer, Nova >> Brian Rosmaita, Cinder >> >> Thursday, May 14 at 0200 UTC >> Moderator: Rico Lin, TC >> Presenters / Open for Questions >> Michael Johnson, Octavia >> Goutham Pacha Ravi, Manila >> Rico Lin, Heat >> Feilong Wang, Magnum >> Brian Rosmaita, Cinder >> >> >> See you there! >> Allison >> >> >> >> >> >> Allison Price >> OpenStack Foundation >> allison at openstack.org >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mihalis68 at gmail.com Tue May 5 16:06:42 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 5 May 2020 12:06:42 -0400 Subject: [ops] ops meetups team meeting 2020-5-5 Message-ID: Hey all, The openstack ops meetups team has been fairly dormant recently for obvious reasons. However with the realisation that in-person meetups, conferences, summits etc are simply not coming back any time soon, we've started to regroup to discuss next steps. We are also seeing some requests for virtual meetups from some operators via the OpsMeetup twitter account. Thus we had a quick IRC meeting today, and also a trial run at an open source based video conference meeting using jitsi via an instance running on infra provided by Erik McCormick. This seems to be promising. We'll look into trialling some ops related events leveraging this. We also agreed to try and have an operators session during the upcoming virtual PTG (replacing the Vancouver event now cancelled). More info will be shared as and when we can. I hope you're all doing as well as can be expected. Chris - on behalf of the ops meetups team Meeting minutes: Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2020/ops_meetup_team.2020-05-05-14.06.html 12:01 PM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2020/ops_meetup_team.2020-05-05-14.06.txt 12:01 PM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2020/ops_meetup_team.2020-05-05-14.06.log.html -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Tue May 5 16:08:53 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 05 May 2020 11:08:53 -0500 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: <22e55069-461b-983a-cabb-200aa402e741@suse.com> References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> <171e581e45b.e5107f4c98638.4361236318560452020@ghanshyammann.com> <22e55069-461b-983a-cabb-200aa402e741@suse.com> Message-ID: <171e598dff3.100043f9099820.3506995131104136377@ghanshyammann.com> ---- On Tue, 05 May 2020 11:01:17 -0500 Andreas Jaeger wrote ---- > On 05.05.20 17:43, Ghanshyam Mann wrote: > > [...] > > Andreas Jaeger wrote: > > > The template integrated-gate-py3 has been introduced in stein cycle as > > > far as I can see, so we do need to update the extra grenade-py3 lines to > > > grenade where they exist in stein and train as well to avoid the > > > duplicated runs, don't we? > > > > We do not need until we backport the grenade new job to stable/train. For stable/train gate, > > only old job is running because new job from a template is not found ?(if I am not wrong). > > nova uses integrated-gate-compute which is defined in tempest on a > branch, so the train version of the template is used in train. > > But integrated-gate-py3 is used on *all* branches (openstack-zuul-jobs > is branch less). > > Let me test on glance with https://review.opendev.org/725640 , Tempest is also branchless too so template is there for all branches. glance also use tempest-integrated-storage not integrated-gate-py3. 'grenade' job is available on branch-specific only as grenade is branched so new job even added in the template is not present in train so it is not run. If we plan to backport grenade new job to train then new job will start running on train too. We talked about it in qa office hour today and if that is easy to backport then it is fine otherwise no urgency on this. 
-gmann > > Andreas > -- > Andreas Jaeger aj at suse.com Twitter: jaegerandi > SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg > (HRB 36809, AG Nürnberg) GF: Felix Imendörffer > GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB > From kevin at cloudnull.com Tue May 5 16:15:17 2020 From: kevin at cloudnull.com (Carter, Kevin) Date: Tue, 5 May 2020 11:15:17 -0500 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: Ruslanas, do you know if any other dependencies were downgraded when you ran the mentioned command? In looking through module code, it's pulling in paunch which has the `load_config` attribute [0] in the stable/train branch. So I'm curious, what packages were impacted by the downgrade, and do you know what version of paunch is installed on the system? As Wes mentioned, a Launchpad bug with this information would be most helpful. [0] https://github.com/openstack/paunch/blob/stable/train/paunch/utils/common.py#L81 -- Kevin Carter IRC: Cloudnull On Tue, May 5, 2020 at 10:55 AM Wesley Hayutin wrote: > Ruslanas, > > If you still have the environment handy can you please file a > launchpad bug with the appropriate log files attached. > Thank you very much!!! > > On Sat, May 2, 2020 at 6:13 AM Ruslanas Gžibovskis > wrote: > >> Hi all, >> >> Just for your info: >> the solution was: >> su -c "yum downgrade tripleo-ansible.noarch" >> yum list | grep -i tripleo-ansible >> tripleo-ansible.noarch 0.4.1-1.el7 >> @centos-openstack-train >> tripleo-ansible.noarch 0.5.0-1.el7 >> centos-openstack-train >> >> What is wrong with *tripleo-ansible.noarch >> 0.5.0-1.el7 centos-openstack-train* ??? >> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mnaser at vexxhost.com Tue May 5 16:18:45 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 5 May 2020 12:18:45 -0400 Subject: [tc] ptg etherpad Message-ID: Hi everyone, Please have a look at the following Etherpad and try to fill it out with any suggested topics as well as attendance in order to gauge how much time we will need at the PTG: https://etherpad.opendev.org/p/tc-victoria-ptg Thank you, Mohammed -- Mohammed Naser VEXXHOST, Inc. From aj at suse.com Tue May 5 16:20:34 2020 From: aj at suse.com (Andreas Jaeger) Date: Tue, 5 May 2020 18:20:34 +0200 Subject: [all][qa] grenade zuulv3 native job available now and its migration plan In-Reply-To: <171e598dff3.100043f9099820.3506995131104136377@ghanshyammann.com> References: <171d2d2c89d.11ed37fb090857.8266695798252877295@ghanshyammann.com> <171e581e45b.e5107f4c98638.4361236318560452020@ghanshyammann.com> <22e55069-461b-983a-cabb-200aa402e741@suse.com> <171e598dff3.100043f9099820.3506995131104136377@ghanshyammann.com> Message-ID: <516e2816-aa42-fd8d-9e5a-a0bed0c5fccb@suse.com> On 05.05.20 18:08, Ghanshyam Mann wrote: > ---- On Tue, 05 May 2020 11:01:17 -0500 Andreas Jaeger wrote ---- > > On 05.05.20 17:43, Ghanshyam Mann wrote: > > > [...] > > > Andreas Jaeger wrote: > > > > The template integrated-gate-py3 has been introduced in stein cycle as > > > > far as I can see, so we do need to update the extra grenade-py3 lines to > > > > grenade where they exist in stein and train as well to avoid the > > > > duplicated runs, don't we? > > > > > > We do not need until we backport the grenade new job to stable/train. For stable/train gate, > > > only old job is running because new job from a template is not found ?(if I am not wrong). > > > > nova uses integrated-gate-compute which is defined in tempest on a > > branch, so the train version of the template is used in train. > > > > But integrated-gate-py3 is used on *all* branches (openstack-zuul-jobs > > is branch less). 
> > > > Let me test on glance with https://review.opendev.org/725640 , > > Tempest is also branchless too so template is there for all branches. glance > also use tempest-integrated-storage not integrated-gate-py3. On train it uses integrated-gate-py3. > 'grenade' job is available on branch-specific only as grenade is branched so > new job even added in the template is not present in train so it is not > run. Yes, indeed. > > If we plan to backport grenade new job to train then new job will start running > on train too. We talked about it in qa office hour today and if that is easy to > backport then it is fine otherwise no urgency on this. and once you backport, both grenade and grenade-py3 will run. So, for now all is working since the projects list grenade-py3 explicitely, Andreas > > -gmann > > > > > Andreas > > -- > > Andreas Jaeger aj at suse.com Twitter: jaegerandi > > SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg > > (HRB 36809, AG Nürnberg) GF: Felix Imendörffer > > GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB > > > -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 
5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From openstack at nemebean.com Tue May 5 16:35:55 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 5 May 2020 11:35:55 -0500 Subject: [tc][uc][all] Starting community-wide goals ideas for V series In-Reply-To: References: <17016a63ba1.dc0cafe2322988.5181705946513725916@ghanshyammann.com> <69323bb6-c236-9634-14dd-e93736428795@debian.org> <20200430115135.hrgnwj272srpdbgc@skaplons-mac> <84ca2e53987a80ff5c04832fc31ba38120261711.camel@redhat.com> <30302c9b-7004-0389-1759-b5de0ce90d38@debian.org> <0fd8f300-5648-ca7b-578b-81a623dd2e9d@nemebean.com> Message-ID: <1dc4fdbc-b7c0-a285-2d96-e58dd83eb919@nemebean.com> On 4/30/20 12:59 PM, Thomas Goirand wrote: > Thanks for your helpful reply. > > On 4/30/20 5:43 PM, Ben Nemec wrote: >> Note that we already have systemd notification support in oslo.service: >> https://github.com/openstack/oslo.service/blob/master/oslo_service/systemd.py >> >> >> It looks like it should behave sanely without systemd as long as you >> don't set NOTIFY_SOCKET in the service env. > > Oh, so it means downstream packages must set an env var to support it? I've never used this, but according to https://askubuntu.com/questions/1120023/how-to-use-systemd-notify it looks like you just set the service type to notify and then systemd does what needs to be done. > > The other issue I have is that there's zero documentation on who's > supporting it... :/ Yeah, oslo.service doesn't really have any user docs. We could also probably test this now that devstack is using systemd. > >> It is. Check out the doc for the debug option in Nova: >> https://docs.openstack.org/nova/latest/configuration/sample-config.html > > Right. There's really not a lot of them... :( The vast majority of requests for reloadable config were the debug option, which is why that was the focus of the community goal. 
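To ground the systemd-notify part of this exchange: the notification protocol itself is tiny — a single datagram written to the unix socket systemd advertises via the NOTIFY_SOCKET environment variable. Below is a minimal standalone sketch of that mechanism; it is not the oslo.service code, though the helper linked above does essentially this under the hood:

```python
import os
import socket


def sd_notify(state: bytes = b"READY=1") -> bool:
    """Send a readiness notification to systemd, if it is listening.

    When NOTIFY_SOCKET is unset (i.e. the process is not running under
    a systemd unit with Type=notify), this is a harmless no-op.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):
        # Abstract-namespace socket: the leading '@' maps to a NUL byte.
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.sendall(state)
    return True


if __name__ == "__main__":
    # Outside systemd this reports False; under a Type=notify unit it
    # tells systemd the service is ready (the unit becomes 'active').
    print("notified:", sd_notify())
```

A unit file pairs this with `Type=notify`; without the notification, systemd keeps the unit in 'activating' until it times out. In practice a service would call oslo.service's wrappers in the `systemd.py` module linked above rather than hand-roll the socket write.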
Others can be added, but each one needs to be verified by the project to ensure that they can be safely reloaded. > > Cheers, > > Thomas Goirand (zigo) > From amy at demarco.com Tue May 5 16:47:50 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 5 May 2020 11:47:50 -0500 Subject: [openstack-ansible] dropping suse support In-Reply-To: <95eb3d95-df82-4222-2b1a-078a7ce92771@rd.bbc.co.uk> References: <95eb3d95-df82-4222-2b1a-078a7ce92771@rd.bbc.co.uk> Message-ID: I agree, without support for it I think it's well more then time to remove it. Amy (spotz) On Tue, May 5, 2020 at 8:52 AM Jonathan Rosser wrote: > > > On 08/03/2019 16:36, Mohammed Naser wrote: > > Hi everyone, > > > > I've been trying to avoid writing this email for the longest time > > ever. It's really painful to have to reach this state, but I think > > we've hit a point where we can no longer maintain support for SUSE > > within OpenStack Ansible. > > > > Unfortunately, with the recent layoffs, the OSA team has taken a huge > > hit in terms of contributors which means that there is far less > > contributors within the project. This means that we have less > > resources to go around and it forces us to focus on a more functional, > > reliable and well tested set of scenarios. > > > > Over the past few cycles, there has been effort to add SUSE support to > > OpenStack Ansible, during the time that we had maintainers, it was > > great and SUSE issues were being fixed promptly. In addition, due to > > the larger team at the time, we found ourselves having some extra time > > where we can help unbreak other gates. Jesse used to call this a > > "labour of love", which I admired at the time and hoped we continue to > > do as much as we can of. > > > > However, the lack of a committed maintainer for OpenSUSE has resulted > > in constantly failing jobs[1][2] (which were moved to non-voting, > > wasting CI resources as no one fixed them). 
In addition, it's causing > > several gate blocks for a few times with no one really finding the > > time to clean them up. > > > > We are resource constrained at this point and we need the resource to > > go towards making the small subset of supported features functional > > (i.e. CentOS/Ubuntu). We struggle with that enough, and there seems > > to be no deployers that are running SUSE in real life at the moment > > based on bugs submitted. > > > > With that, I propose that we drop SUSE support this cycle. If anyone > > would like to volunteer to maintain it, we can review that option, but > > that would require a serious commitment as we've had maintainers step > > off and it hurts the velocity of the project as no one can merge code > > anymore. > > > > ..Really wish I didn't have to write this email > > Mohammed > > > > [1]: > http://zuul.opendev.org/t/openstack/builds?job_name=openstack-ansible-functional-opensuse-150 > > [2]: > http://zuul.opendev.org/t/openstack/builds?job_name=openstack-ansible-functional-opensuse-423 > > > > > > It's now over 12 months since Mohammed posted this and the situation > remains as described. > > It feels like time to reduce the OSA CI surface area to enable patches > to merge and release contributor cycles for new work such as support for > Ubuntu Focal and Centos8. > > Two alternative patches are proposed, one to remove support immediately > [1] and another to move the current jobs to non-voting with a view to > removing them entirely for the Victoria cycle [2]. > > [1] https://review.opendev.org/725541 > [2] https://review.opendev.org/725598 > > Jon. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Tue May 5 16:49:06 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 5 May 2020 16:49:06 +0000 Subject: [ops][infra] Meetpad (was: ops meetups team meeting 2020-5-5) In-Reply-To: References: Message-ID: <20200505164906.igthboyfjiureuf2@yuggoth.org> On 2020-05-05 12:06:42 -0400 (-0400), Chris Morgan wrote: [...] > we had a quick IRC meeting today, and also a trial run at an open > source based video conference meeting using jitsi via an instance > running on infra provided by Erik McCormick. This seems to be > promising. We'll look into trialling some ops related events > leveraging this. [...] It's probably been flying under the radar a bit so far, but the OpenDev community has put together an Etherpad-integrated Jitsi-Meet service at https://meetpad.opendev.org/ which you're free to try out as well. We'd love feedback and help tuning it. Also if you want to reuse anything we've done to set it up, the Ansible playbook we use is here: https://opendev.org/opendev/system-config/src/branch/master/playbooks/service-meetpad.yaml It utilizes this role to install and configure jitsi-meet containers with docker-compose (mostly official docker.io/jitsi images, though we build our own jitsi-meet-web published under docker.io/opendevorg which applies our Etherpad integration patch): https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/jitsi-meet We're not making any stability or reusability guarantees on the Ansible orchestration (or our custom image which will hopefully disappear once https://github.com/jitsi/jitsi-meet/pull/5270 is accepted upstream), but like everything we run in OpenDev we publish it for the sake of transparency, in case anyone else wants to help us or take some ideas for their own efforts. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mihalis68 at gmail.com Tue May 5 17:23:33 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 5 May 2020 13:23:33 -0400 Subject: [ops][infra] Meetpad (was: ops meetups team meeting 2020-5-5) In-Reply-To: <20200505164906.igthboyfjiureuf2@yuggoth.org> References: <20200505164906.igthboyfjiureuf2@yuggoth.org> Message-ID: this is super cool, thanks Jeremy, will give it a go! Chris On Tue, May 5, 2020 at 12:57 PM Jeremy Stanley wrote: > On 2020-05-05 12:06:42 -0400 (-0400), Chris Morgan wrote: > [...] > > we had a quick IRC meeting today, and also a trial run at an open > > source based video conference meeting using jitsi via an instance > > running on infra provided by Erik McCormick. This seems to be > > promising. We'll look into trialling some ops related events > > leveraging this. > [...] > > It's probably been flying under the radar a bit so far, but the > OpenDev community has put together an Etherpad-integrated Jitsi-Meet > service at https://meetpad.opendev.org/ which you're free to try out > as well. We'd love feedback and help tuning it. 
Also if you want to > reuse anything we've done to set it up, the Ansible playbook we use > is here: > > > https://opendev.org/opendev/system-config/src/branch/master/playbooks/service-meetpad.yaml > > It utilizes this role to install and configure jitsi-meet containers > with docker-compose (mostly official docker.io/jitsi images, though > we build our own jitsi-meet-web published under docker.io/opendevorg > which applies our Etherpad integration patch): > > > https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/jitsi-meet > > We're not making any stability or reusability guarantees on the > Ansible orchestration (or our custom image which will hopefully > disappear once https://github.com/jitsi/jitsi-meet/pull/5270 is > accepted upstream), but like everything we run in OpenDev we publish > it for the sake of transparency, in case anyone else wants to help > us or take some ideas for their own efforts. > -- > Jeremy Stanley > -- Chris Morgan From ashlee at openstack.org Tue May 5 17:30:06 2020 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 5 May 2020 12:30:06 -0500 Subject: OpenDev Events Update - we need your feedback! Message-ID: Hi everyone, Thanks so much for your feedback that’s continually helping us shape these virtual OpenDev events[1]. Keep reading this email for some important updates on all three events! 1. OpenDev: Large-scale Usage of Open Infrastructure Software - June 29 - July 1 (1300 - 1600 UTC each day) - Registration is open[2]! - High-level schedule coming soon; Find a brief topic summary and more information here[1] 2. OpenDev: Hardware Automation - July 20 - 22 - Help us pick a time - visit this etherpad[3] and +1 the times you’re available and the topics you would like to cover - please provide your input by this Thursday, May 7. 3.
OpenDev: Containers in Production - August 10 - 12 - Help us pick a time - visit this etherpad[4] and +1 the times you’re available and the topics you would like to cover - please provide your input by this Thursday, May 7. Find more information and updates on all of these events here[1]. [1] https://www.openstack.org/events/opendev-2020 [2] https://opendev_largescale.eventbrite.com [3] https://etherpad.opendev.org/p/OpenDev_HardwareAutomation [4] https://etherpad.opendev.org/p/OpenDev_ContainersInProduction Thanks! Ashlee Ashlee Ferguson Community & Events Coordinator OpenStack Foundation From gmann at ghanshyammann.com Tue May 5 18:03:59 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 05 May 2020 13:03:59 -0500 Subject: [tc][uc][all] Starting community-wide goals ideas for V series In-Reply-To: <69323bb6-c236-9634-14dd-e93736428795@debian.org> References: <17016a63ba1.dc0cafe2322988.5181705946513725916@ghanshyammann.com> <69323bb6-c236-9634-14dd-e93736428795@debian.org> Message-ID: <171e602423a.de63edb72241.7523160282331774635@ghanshyammann.com> ---- On Thu, 30 Apr 2020 06:20:29 -0500 Thomas Goirand wrote ---- > On 2/5/20 7:39 PM, Ghanshyam Mann wrote: > > Hello everyone, > > > > We are in R14 week of Ussuri cycle which means it's time to start the > > discussions about community-wide goals ideas for the V series. > > > > Community-wide goals are important in terms of solving and improving a technical > > area across OpenStack as a whole. It has lot more benefits to be considered from > > users as well from a developers perspective. See [1] for more details about > > community-wide goals and process. > > > > We have the Zuulv3 migration goal already accepted and pre-selected for v cycle. > > If you are interested in proposing a goal, please write down the idea on this etherpad[2] > > - https://etherpad.openstack.org/p/YVR-v-series-goals > > > > Accordingly, we will start the separate ML discussion over each goal idea.
> > > > Also, you can refer to the backlogs of community-wide goals from this[3] and ussuri > > cycle goals[4]. > > > > NOTE: TC has defined the goal process schedule[5] to streamline the process and > > be ready with goals for projects to plan/implement at the start of the cycle. I am > > hoping to start that schedule for W cycle goals. > > > > [1] https://governance.openstack.org/tc/goals/index.html > > [2] https://etherpad.openstack.org/p/YVR-v-series-goals > > [3] https://etherpad.openstack.org/p/community-goals > > [4] https://etherpad.openstack.org/p/PVG-u-series-goals > > [5] https://governance.openstack.org/tc/goals/#goal-selection-schedule > > > > -gmann > > I've added 3 major pain points to the etherpad which I think are very > important for operators: > > 8. Get all services to systemd-notify > > 9. Make it possible to reload service configurations dynamically without > restarting daemons > > 10. All API to provide a /healthcheck URL (like the Keystone one...). > > I don't have the time to implement all of this, but that's still super > useful things to have. Does anyone have the time to work on this? #10 looks interesting to me and useful from the user's point of view. I can help with this. Key thing will be whether we need more generic backends than the file existence one, for example, DB checks or service-based backends. But we can discuss all those details in separate threads, thanks for bringing this up. -gmann > > Cheers, > > Thomas Goirand (zigo) > > From hjensas at redhat.com Tue May 5 18:16:54 2020 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Tue, 05 May 2020 20:16:54 +0200 Subject: [TripleO][Rdo][Train] Deploy OpenStack using only provisioning network In-Reply-To: References: Message-ID: On Mon, 2020-05-04 at 00:20 +0200, Ruslanas Gžibovskis wrote: > Hi all, > > I am doing some testing and will do some deployment on some remote > hosts. > > Remote hosts will use provider network only specific for each > compute.
> > I was thinking, do I really need all the External, InternalAPI, > Storage, StorageManagemnt, Tenant networks provided to all of the > nodes? Maybe I could use a Provision network for all of that, and > make swift/glance copy on all computes to provide local images. > > I understand, if I do not have tenant network, all VM's in same > project but in different sites, will not see each other, but it is ok > at the moment. > > Thank you for your help > I use tripleo to deploy a single node aio with only 1 network interface as a lab at home. You can see the configuration here: https://github.com/hjensas/homelab/tree/master/overcloud Basically I use an empty network data file, and removed the 'networks' section in my custom role data file. With no networks defined everything is placed on the 'ctlplane' (i.e. provisioning network). Same thing you are asking for? I think you can do the same thing. For the provider networks I believe you will need per-role NeutronBridgeMappings i.e. something like: ControllerParameters: NeutronBridgeMappings: br-ex:provider0 ComputeSite1: NeutronBridgeMappings: br-foo:provider1 ComputeSite2: NeutronBridgeMappings: br-bar:provider2 -- Harald From reza.b2008 at gmail.com Tue May 5 18:55:09 2020 From: reza.b2008 at gmail.com (Reza Bakhshayeshi) Date: Tue, 5 May 2020 23:25:09 +0430 Subject: [TripleO] External network on compute node Message-ID: Hi all. The default way of compute node for accessing Internet is through undercloud. I'm going to assign an IP from External network to each compute node with default route. But the deployment can't assign an IP to br-ex and fails with: " raise AddrFormatError('invalid IPNetwork %s' % addr)", "netaddr.core.AddrFormatError: invalid IPNetwork ", Actually 'ip_netmask': '' is empty during deployment for compute nodes.
I've added external network to compute node role: External: subnet: external_subnet and for network interface: - type: ovs_bridge name: bridge_name mtu: get_param: ExternalMtu dns_servers: get_param: DnsServers use_dhcp: false addresses: - ip_netmask: get_param: ExternalIpSubnet routes: list_concat_unique: - get_param: ExternalInterfaceRoutes - - default: true next_hop: get_param: ExternalInterfaceDefaultRoute members: - type: interface name: nic3 mtu: get_param: ExternalMtu use_dhcp: false primary: true Any suggestion would be grateful. Regards, Reza -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Tue May 5 19:17:27 2020 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 5 May 2020 14:17:27 -0500 Subject: OpenDev Events Update - we need your feedback! In-Reply-To: References: Message-ID: <3F23D422-4DD9-4C1B-892E-BA07D51EF61D@openstack.org> The deadline for providing input on those etherpads is *May 7.* Months are difficult these days! Ashlee > On May 5, 2020, at 12:30 PM, Ashlee Ferguson wrote: > > Hi everyone, > > Thanks so much for your feedback that’s continually helping us shape these virtual OpenDev events[1]. Keep reading this email for some important updates on all three events! > > 1. OpenDev: Large-scale Usage of Open Infrastructure Software > - June 29 - July 1 (1300 - 1600 UTC each day) > - Registration is open[2]! > - High-level schedule coming soon; Find a brief topic summary and more information here[1] > > 2. OpenDev: Hardware Automation > - July 20 - 22 > - Help us pick a time - visit this etherpad[3] and +1 the times you’re available and the topics you would like to cover - please provide your input by this Thursday, March 7. > > 3. OpenDev: Containers in Production > - August 10 - 12 > - Help us pick a time - visit this etherpad[4] and +1 the times you’re available and the topics you would like to cover - please provide your input by this Thursday, March 7. 
> > Find more information and updates on all of these events here[1]. > > [1] https://www.openstack.org/events/opendev-2020 > [2] https://opendev_largescale.eventbrite.com > [3] https://etherpad.opendev.org/p/OpenDev_HardwareAutomation > [4] https://etherpad.opendev.org/p/OpenDev_ContainersInProduction > > > Thanks! > Ashlee > > Ashlee Ferguson > Community & Events Coordinator > OpenStack Foundation > From ruslanas at lpic.lt Tue May 5 20:40:42 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 5 May 2020 22:40:42 +0200 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: I will file a launchpad bug, Tomorrow, today almost sleeping ;) nope, only tripleo-ansible, I have 3 versions available, 0.4.0 0.4.1 and 0.5.0. 0.4.1 - works, 0.5.0 do not. [stack at remote-u ~]$ paunch --version paunch 5.3.1 [stack at remote-u ~]$ rpm -qa | grep -i paunch python2-paunch-5.3.1-1.el7.noarch paunch-services-5.3.1-1.el7.noarch [stack at remote-u ~]$ uname -a Linux remote-u.tecloud.sample.xxx 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux [stack at remote-u ~]$ Would be very helpful if you could share or howto, or steps to place that bug report :) https://bugs.launchpad.net/openstack/+filebug ? This one? Which project? tripleo? How to name it, so it would be understandable? Which logs would you need? would be suficient with ansible.log and ansible-playbook-2 -vvvv .... last output, with error message? I also can provide a diff for what is changed in that exact file. And package contents from 0.4.1 and 0.5.0 if you need more or less, just mention here, or after I launch a bug report, I can recreate VM and try setting it up. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kevin at cloudnull.com Tue May 5 21:02:27 2020 From: kevin at cloudnull.com (Carter, Kevin) Date: Tue, 5 May 2020 16:02:27 -0500 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: The bug report should go here: https://bugs.launchpad.net/tripleo/+filebug * Fill in the summary, click next, and then provide everything you can within the body. Once the bug is created, you can upload files (like the mentioned diff) to the bug report. Any and all information you can share is helpful in tracking down what is going on; please also include repos enabled so we can try and recreate the issue locally. Sorry for the issues you've run into, but thanks again for reporting them. If you have any questions let us know. -- Kevin Carter IRC: Cloudnull On Tue, May 5, 2020 at 3:40 PM Ruslanas Gžibovskis wrote: > I will file a launchpad bug, Tomorrow, today almost sleeping ;) > nope, only tripleo-ansible, I have 3 versions available, 0.4.0 0.4.1 and > 0.5.0. > 0.4.1 - works, 0.5.0 do not. > > [stack at remote-u ~]$ paunch --version > paunch 5.3.1 > [stack at remote-u ~]$ rpm -qa | grep -i paunch > python2-paunch-5.3.1-1.el7.noarch > paunch-services-5.3.1-1.el7.noarch > [stack at remote-u ~]$ uname -a > Linux remote-u.tecloud.sample.xxx 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 > 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux > [stack at remote-u ~]$ > > Would be very helpful if you could share or howto, or steps to place that > bug report :) > https://bugs.launchpad.net/openstack/+filebug ? This one? Which project? > tripleo? > How to name it, so it would be understandable? > Which logs would you need? would be suficient with ansible.log and > ansible-playbook-2 -vvvv .... last output, with error message? > > I also can provide a diff for what is changed in that exact file. 
And > package contents from 0.4.1 and 0.5.0 > > if you need more or less, just mention here, or after I launch a bug > report, I can recreate VM and try setting it up. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.bujack at desy.de Tue May 5 13:52:42 2020 From: stefan.bujack at desy.de (Bujack, Stefan) Date: Tue, 5 May 2020 15:52:42 +0200 (CEST) Subject: [Octavia] Help with bridge interfaces with openstack queens in a existing neutron openvswitch dvr snat environment Message-ID: <1190536439.8791862.1588686762904.JavaMail.zimbra@desy.de> Hello, I am trying to install octavia in our existing Openstack Queens Cloud and have problems with the lb-mgmt network ports. Is there anybody who could please help me with this issue? Thanks in advance, Greets Stefan Bujack From zigo at debian.org Tue May 5 21:59:12 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 5 May 2020 23:59:12 +0200 Subject: [puppet] Testing OCI in the OpenDev gate [was: Configuring Debian's uwsgi for each service] In-Reply-To: <20200504221535.x6ix2zgcf2pxhxuu@yuggoth.org> References: <9f524f08-9f03-f670-b5bc-47806b0a7f60@debian.org> <1588537076621.82076@binero.com> <31d6ecf7-be47-6425-b6dc-75184bc29c42@debian.org> <8324a82a-a69e-2c22-9a8c-886888b86c8b@debian.org> <20200504221535.x6ix2zgcf2pxhxuu@yuggoth.org> Message-ID: On 5/5/20 12:15 AM, Jeremy Stanley wrote: > On 2020-05-04 23:00:11 +0200 (+0200), Thomas Goirand wrote: > [...] >> One of the problems is that I've added some unit tests, but they can't >> be launch because currently there's no Debian testing activated. >> Hopefully, I'll be able to work again on having Debian to gate on the >> OpenDev CI again, and have a colleague to co-maintain it. Does anyone >> have an idea on how to fix the situation before this happens? > [...] > > Assistance keeping the image builds working as things change in > relevant distros is of course highly appreciated, and even > necessary. 
That said, we have debian-buster and debian-buster-arm64 > images currently you can run jobs on, the image updates seems to be > in working order. We can probably get some debian-bullseye or > debian-sid images implemented too if there's sufficient demand. Hi Jeremy, Thanks for always being welcoming and nice, though the issue here is not really having a Buster image (or Sid, or Bullseye) in the infra. I know it's there. The problem is making puppet-openstack CI itself work with Debian. This means that I must fully upstream my patches, including (and mainly) in the puppet-openstack-integration repository where tests are actually happening. That's a lot of work, which I currently don't have enough time for. So I keep postponing it, and just test with my local environment. The other thing which I should work on is having OCI integrated with the OpenDev CI. I believe we talked about it briefly during FOSDEM, not sure if you remember. Currently, testing half-manually takes a lot of my time, and I'd love to automate it fully. Though I'm having a hard time figuring out how I could do it. In my current environment, I have a big machine with 256 GB of RAM, where I install 20 VMs that I can install OpenStack on. It wouldn't be hard to scale this down by half, but I hardly see how I could work with 8GB of RAM, or how I could make all of this tested. My tool includes bare-metal provisioning, where VMs are PXE-booting and receive some IPMI commands (I use ipmi_sim...). I don't know how that would work in a multi-VM thing.
Cheers, Thomas Goirand (zigo) From johnsomor at gmail.com Tue May 5 22:33:37 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 5 May 2020 15:33:37 -0700 Subject: [Octavia] Help with bridge interfaces with openstack queens in a existing neutron openvswitch dvr snat environment In-Reply-To: <1190536439.8791862.1588686762904.JavaMail.zimbra@desy.de> References: <1190536439.8791862.1588686762904.JavaMail.zimbra@desy.de> Message-ID: Hi Stefan, The Octavia team is usually available in the #openstack-lbaas IRC channel on Freenode if you would like to chat with us. The lb-mgmt network is a neutron network used for management of the Octavia amphora. Some deployments use shared public networks for this, some use a private special purpose network for it. The tricky part is making that network accessible to the Octavia controllers (worker, health manager, and housekeeping). There are many ways to do that, such as using a neutron provider network, bridging a port out of neutron, or creating a routed path. Michael On Tue, May 5, 2020 at 2:59 PM Bujack, Stefan wrote: > > Hello, > > I am trying to install octavia in our existing Openstack Queens Cloud and have problems with the lb-mgmt network ports. > > Is there anybody who could please help me with this issue? > > Thanks in advance, > > Greets Stefan Bujack > From hjensas at redhat.com Wed May 6 00:57:05 2020 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Wed, 06 May 2020 02:57:05 +0200 Subject: [TripleO] External network on compute node In-Reply-To: References: Message-ID: On Tue, 2020-05-05 at 23:25 +0430, Reza Bakhshayeshi wrote: > Hi all. > The default way of compute node for accessing Internet is through > undercloud. > I'm going to assign an IP from External network to each compute node > with default route.
> But the deployment can't assign an IP to br-ex and fails with: > > " raise AddrFormatError('invalid IPNetwork %s' % > addr)", > "netaddr.core.AddrFormatError: invalid IPNetwork ", > > Actually 'ip_netmask': '' is empty during deployment for compute > nodes. > I've added external network to compute node role: > External: > subnet: external_subnet > > and for network interface: > - type: ovs_bridge > name: bridge_name > mtu: > get_param: ExternalMtu > dns_servers: > get_param: DnsServers > use_dhcp: false > addresses: > - ip_netmask: > get_param: ExternalIpSubnet > routes: > list_concat_unique: > - get_param: ExternalInterfaceRoutes > - - default: true > next_hop: > get_param: ExternalInterfaceDefaultRoute > members: > - type: interface > name: nic3 > mtu: > get_param: ExternalMtu > use_dhcp: false > primary: true > > Any suggestion would be grateful. > Regards, > Reza > I think we need more information to see what the issue is. - your deploy command? - content of network_data.yaml used (unless the default) - environment files related to network-isolation, network-environment, network-isolation? -- Harald From aaronzhu1121 at gmail.com Wed May 6 07:23:51 2020 From: aaronzhu1121 at gmail.com (Rong Zhu) Date: Wed, 6 May 2020 15:23:51 +0800 Subject: [MURANO] DevStack Murano "yaql_function" Error In-Reply-To: References: Message-ID: Hi Andy and İzzettin, Could you please submit the fix to the master branch? İzzettin Erdem wrote on Mon, 4 May 2020 at 17:34: > Thanks Andy. > > Regards. > Forwarded Conversation > Subject: [MURANO] DevStack Murano "yaql_function" Error > ------------------------ > > From: Andy Botting > Date: Thu, 30 Apr 2020, 14:13 > To: İzzettin Erdem > Cc: Rong Zhu , < > openstack-discuss at lists.openstack.org> > > > Hi İzzettin, > > Thanks for your interest. The complete log link is below. It is happening >> when I try to deploy or update an environment.
>> >> http://paste.openstack.org/show/792444/ >> > > I've encountered the error > > AttributeError: 'method' object has no attribute '__yaql_function__' > > in our environment recently. I believe it started after we upgraded to the > Stein release. > > I haven't tracked it down yet, but calls to the YAQL function: > > std:Project.getEnvironmentOwner() > > in a Murano app seem to generate it. > > It's good to know that at least someone else has reproduced this. > > cheers, > Andy > > > ---------- > From: İzzettin Erdem > Date: Sat, 2 May 2020, 16:08 > To: Andy Botting > > > Hi Andy, > > I solved this problem, it is happening when you try to create or update an > environment on an existing network. If you change the network with the "create > new" parameter, it gets stuck in the "creating -or updating- environment" state and > after that throws a connection error. So I searched Murano network errors and > I found this: > https://docs.oracle.com/cd/E73172_01/E73173/html/issue-21976631.html. I > applied these steps and I successfully created an environment, but only > with the "create new" network parameter. After all this, I realized my > devstack environment has no internet connection and because of this > murano-agent could not be installed. So I tried to install Murano on my OSA > stable/train test environment and I succeeded up until the last step. > Murano-agent installed but this time the agent could not connect to the rabbitmq > instance. I discussed this with the OSA IRC channel and I think I will have to > struggle with this for a long time. > > Apologies for this long mail, but I wanted to inform you. > > Thanks. Regards. > > > > > ---------- > From: Andy Botting > Date: Sun, 3 May 2020, 14:24 > To: İzzettin Erdem > > > Hi İzzettin, > > Glad you got it sorted. I think you've probably just worked around the > issue. Using your new method, you're probably just avoiding the > > std:Project.getEnvironmentOwner > > YAQL call now.
I've also worked around it in our environment, but I'd like to solve it properly. When I get some time to look into it properly, I'll let you know. > > cheers, > > Andy > > > > -- Thanks, Rong Zhu -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Wed May 6 07:32:27 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 6 May 2020 09:32:27 +0200 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: https://bugs.launchpad.net/tripleo/+bug/1877043 please check if anything needed, please feel free to ask, I can test if any updates you will have On Tue, 5 May 2020 at 23:02, Carter, Kevin wrote: > The bug report should go here: https://bugs.launchpad.net/tripleo/+filebug > > * Fill in the summary, click next, and then provide everything you can > within the body. Once the bug is created, you can upload files (like the > mentioned diff) to the bug report. Any and all information you can share is > helpful in tracking down what is going on; please also include repos > enabled so we can try and recreate the issue locally. > > Sorry for the issues you've run into, but thanks again for reporting them. > If you have any questions let us know. > > -- > > Kevin Carter > IRC: Cloudnull > > > On Tue, May 5, 2020 at 3:40 PM Ruslanas Gžibovskis > wrote: > >> I will file a launchpad bug, Tomorrow, today almost sleeping ;) >> nope, only tripleo-ansible, I have 3 versions available, 0.4.0 0.4.1 and >> 0.5.0. >> 0.4.1 - works, 0.5.0 do not. 
>> >> [stack at remote-u ~]$ paunch --version >> paunch 5.3.1 >> [stack at remote-u ~]$ rpm -qa | grep -i paunch >> python2-paunch-5.3.1-1.el7.noarch >> paunch-services-5.3.1-1.el7.noarch >> [stack at remote-u ~]$ uname -a >> Linux remote-u.tecloud.sample.xxx 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar >> 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux >> [stack at remote-u ~]$ >> >> Would be very helpful if you could share or howto, or steps to place that >> bug report :) >> https://bugs.launchpad.net/openstack/+filebug ? This one? Which project? >> tripleo? >> How to name it, so it would be understandable? >> Which logs would you need? would be suficient with ansible.log and >> ansible-playbook-2 -vvvv .... last output, with error message? >> >> I also can provide a diff for what is changed in that exact file. And >> package contents from 0.4.1 and 0.5.0 >> >> if you need more or less, just mention here, or after I launch a bug >> report, I can recreate VM and try setting it up. >> >> >> -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yumeng_bao at yahoo.com Wed May 6 07:35:10 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Wed, 6 May 2020 15:35:10 +0800 Subject: [Cyborg][PTG]Virtual PTG Planning References: <9F5C7038-B094-434D-A0F8-691F1FE49A05.ref@yahoo.com> Message-ID: <9F5C7038-B094-434D-A0F8-691F1FE49A05@yahoo.com> Hi team, We are now collecting topics for Virtual PTG [1] and we need to choose the slots for it on the official schedule[2]. If you are interested in participating PTG you will need to do two things: 1. Please add your name and topics to the etherpad[3], we are still collecting topics. 2. Please help to choose a time by filling out the doodle poll[4]. We now have 12 topics, so we will need 2 or 3 hours for 3 days (total 6-9 hours) or more if we have more topics. 
PS: The official deadline for team registration is May 10th, it will be appreciated if you can do this poll ASAP. ^ ^ Regards, Yumeng [1]http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014126.html [2]https://ethercalc.openstack.org/126u8ek25noy [3]https://etherpad.opendev.org/p/cyborg-victoria-goals [4]https://doodle.com/poll/wdw5cyeucayzhu9k From reza.b2008 at gmail.com Wed May 6 09:30:43 2020 From: reza.b2008 at gmail.com (Reza Bakhshayeshi) Date: Wed, 6 May 2020 14:00:43 +0430 Subject: [TripleO] External network on compute node In-Reply-To: References: Message-ID: here is my deploy command: openstack overcloud deploy \ --control-flavor control \ --compute-flavor compute \ --templates ~/openstack-tripleo-heat-templates \ -r /home/stack/roles_data.yaml \ -e /home/stack/containers-prepare-parameter.yaml \ -e environment.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml \ -e ~/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \ -e ~/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ -e ~/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ -e ~/openstack-tripleo-heat-templates/environments/network-environment.yaml \ --timeout 360 \ --ntp-server time.google.com -vvv network-environment.yaml: http://paste.openstack.org/show/793179/ network-isolation.yaml: http://paste.openstack.org/show/793181/ compute-dvr.yaml http://paste.openstack.org/show/793183/ I didn't modify network_data.yaml On Wed, 6 May 2020 at 05:27, Harald Jensås wrote: > On Tue, 2020-05-05 at 23:25 +0430, Reza Bakhshayeshi wrote: > > Hi all. > > The default way of compute node for accessing Internet if through > > undercloud. > > I'm going to assign an IP from External network to each compute node > > with default route. 
> > But the deployment can't assign an IP to br-ex and fails with: > > > > " raise AddrFormatError('invalid IPNetwork %s' % > > addr)", > > "netaddr.core.AddrFormatError: invalid IPNetwork ", > > > > Actually 'ip_netmask': '' is empty during deployment for compute > > nodes. > > I've added external network to compute node role: > > External: > > subnet: external_subnet > > > > and for network interface: > > - type: ovs_bridge > > name: bridge_name > > mtu: > > get_param: ExternalMtu > > dns_servers: > > get_param: DnsServers > > use_dhcp: false > > addresses: > > - ip_netmask: > > get_param: ExternalIpSubnet > > routes: > > list_concat_unique: > > - get_param: ExternalInterfaceRoutes > > - - default: true > > next_hop: > > get_param: ExternalInterfaceDefaultRoute > > members: > > - type: interface > > name: nic3 > > mtu: > > get_param: ExternalMtu > > use_dhcp: false > > primary: true > > > > Any suggestion would be grateful. > > Regards, > > Reza > > > > I think we need more information to see what the issue is. > - your deploy command? > - content of network_data.yaml used (unless the default) > - environment files related to network-isolation, network-environment, > network-isolation? > > > -- > Harald > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy at andybotting.com Wed May 6 10:26:19 2020 From: andy at andybotting.com (Andy Botting) Date: Wed, 6 May 2020 20:26:19 +1000 Subject: [MURANO] DevStack Murano "yaql_function" Error In-Reply-To: References: Message-ID: Hi Rong, Could you please submit the fix to the master branch? > I haven't solved the issue yet, but if I do, I'll push up a review. cheers, Andy -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Wed May 6 10:51:17 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 6 May 2020 12:51:17 +0200 Subject: [largescale-sig] Next meeting: May 6, 8utc In-Reply-To: <8eda3e2d-42ac-553f-6ad0-207bb0a2d7f3@openstack.org> References: <8eda3e2d-42ac-553f-6ad0-207bb0a2d7f3@openstack.org> Message-ID: Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-05-06-08.01.html Note that the Large Scale SIG has decided it does not need PTG space, and will focus on Opendev conf "large scale infrastructure" track instead. TODOs: - amorin to propose patch against Nova doc - ttx to create an empty oslo-metric repository - masahito to finalize oslo.metric POC code release Next meeting: May 20, 8:00UTC on #openstack-meeting-3 -- Thierry Carrez (ttx) From james.page at canonical.com Wed May 6 12:19:20 2020 From: james.page at canonical.com (James Page) Date: Wed, 6 May 2020 13:19:20 +0100 Subject: [charms][ptg] Victoria Cycle PTG planning Message-ID: Hi Team I've setup an etherpad: https://etherpad.opendev.org/p/charms-ptg-victoria and tentatively proposed slots for us to meet on: Wednesday 3rd June - 1400-1700 UTC Thursday 4th June - 1400-1700 UTC Please add proposed discussion topics and your name to the etherpad. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at cloudnull.com Wed May 6 12:24:26 2020 From: kevin at cloudnull.com (Carter, Kevin) Date: Wed, 6 May 2020 07:24:26 -0500 Subject: [TripleO][train][rdo] installation of undercloud fails during Run container-puppet tasks step1 In-Reply-To: References: <517bb034e9cf4d53ae248b04b62482ae@G08CNEXMBPEKD06.g08.fujitsu.local> Message-ID: Thanks Ruslanas, the bug report looks great. It looks like Rabi has been able to identify the issue and has submitted for a new upstream release, which should resolve the bug [0]. 
[0] https://review.opendev.org/#/c/725783 -- Kevin Carter IRC: Cloudnull On Wed, May 6, 2020 at 2:32 AM Ruslanas Gžibovskis wrote: > https://bugs.launchpad.net/tripleo/+bug/1877043 please check if anything > needed, please feel free to ask, I can test if any updates you will have > > On Tue, 5 May 2020 at 23:02, Carter, Kevin wrote: > >> The bug report should go here: >> https://bugs.launchpad.net/tripleo/+filebug >> >> * Fill in the summary, click next, and then provide everything you can >> within the body. Once the bug is created, you can upload files (like the >> mentioned diff) to the bug report. Any and all information you can share is >> helpful in tracking down what is going on; please also include repos >> enabled so we can try and recreate the issue locally. >> >> Sorry for the issues you've run into, but thanks again for reporting >> them. If you have any questions let us know. >> >> -- >> >> Kevin Carter >> IRC: Cloudnull >> >> >> On Tue, May 5, 2020 at 3:40 PM Ruslanas Gžibovskis >> wrote: >> >>> I will file a launchpad bug, Tomorrow, today almost sleeping ;) >>> nope, only tripleo-ansible, I have 3 versions available, 0.4.0 0.4.1 and >>> 0.5.0. >>> 0.4.1 - works, 0.5.0 do not. >>> >>> [stack at remote-u ~]$ paunch --version >>> paunch 5.3.1 >>> [stack at remote-u ~]$ rpm -qa | grep -i paunch >>> python2-paunch-5.3.1-1.el7.noarch >>> paunch-services-5.3.1-1.el7.noarch >>> [stack at remote-u ~]$ uname -a >>> Linux remote-u.tecloud.sample.xxx 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar >>> 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux >>> [stack at remote-u ~]$ >>> >>> Would be very helpful if you could share or howto, or steps to place >>> that bug report :) >>> https://bugs.launchpad.net/openstack/+filebug ? This one? Which >>> project? tripleo? >>> How to name it, so it would be understandable? >>> Which logs would you need? would be suficient with ansible.log and >>> ansible-playbook-2 -vvvv .... last output, with error message? 
>>> >>> I also can provide a diff for what is changed in that exact file. And >>> package contents from 0.4.1 and 0.5.0 >>> >>> if you need more or less, just mention here, or after I launch a bug >>> report, I can recreate VM and try setting it up. >>> >>> >>> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Wed May 6 12:39:59 2020 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Wed, 06 May 2020 14:39:59 +0200 Subject: [TripleO] External network on compute node In-Reply-To: References: Message-ID: <98236409f2ea0a5f0a7bffd3eb1054ef5399ba69.camel@redhat.com> On Wed, 2020-05-06 at 14:00 +0430, Reza Bakhshayeshi wrote: > here is my deploy command: > > openstack overcloud deploy \ > --control-flavor control \ > --compute-flavor compute \ > --templates ~/openstack-tripleo-heat-templates \ > -r /home/stack/roles_data.yaml \ > -e /home/stack/containers-prepare-parameter.yaml \ > -e environment.yaml \ > -e /usr/share/openstack-tripleo-heat- > templates/environments/services/octavia.yaml \ This is not related, but: Why use '/usr/share/openstack-tripleo-heat-templates/' and not '~/openstack-tripleo-heat-templates/' here? > -e ~/openstack-tripleo-heat-templates/environments/services/neutron- > ovn-dvr-ha.yaml \ > -e ~/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ > -e ~/openstack-tripleo-heat-templates/environments/network- > isolation.yaml \ > -e ~/openstack-tripleo-heat-templates/environments/network- > environment.yaml \ Hm, I'm not sure network-isolation.yaml and network-environment.yaml contains what you expect. Can you do a plan export? openstack overcloud plan export --output-file oc-plan.tar.gz overcloud Then have a look at `environments/network-isolation.yaml` and `environments/network-environment.yaml` in the plan? I think you may want to copy these two files out of the templates tree and use the out of tree copies instead. 
> --timeout 360 \ > --ntp-server time.google.com -vvv > > network-environment.yaml: > http://paste.openstack.org/show/793179/ > > network-isolation.yaml: > http://paste.openstack.org/show/793181/ > > compute-dvr.yaml > http://paste.openstack.org/show/793183/ > > I didn't modify network_data.yaml > > -- Harald > On Wed, 6 May 2020 at 05:27, Harald Jensås > wrote: > > On Tue, 2020-05-05 at 23:25 +0430, Reza Bakhshayeshi wrote: > > > Hi all. > > > The default way of compute node for accessing Internet if through > > > undercloud. > > > I'm going to assign an IP from External network to each compute > > node > > > with default route. > > > But the deployment can't assign an IP to br-ex and fails with: > > > > > > " raise AddrFormatError('invalid IPNetwork %s' > > % > > > addr)", > > > "netaddr.core.AddrFormatError: invalid IPNetwork > > ", > > > > > > Actually 'ip_netmask': '' is empty during deployment for compute > > > nodes. > > > I've added external network to compute node role: > > > External: > > > subnet: external_subnet > > > > > > and for network interface: > > > - type: ovs_bridge > > > name: bridge_name > > > mtu: > > > get_param: ExternalMtu > > > dns_servers: > > > get_param: DnsServers > > > use_dhcp: false > > > addresses: > > > - ip_netmask: > > > get_param: ExternalIpSubnet > > > routes: > > > list_concat_unique: > > > - get_param: ExternalInterfaceRoutes > > > - - default: true > > > next_hop: > > > get_param: > > ExternalInterfaceDefaultRoute > > > members: > > > - type: interface > > > name: nic3 > > > mtu: > > > get_param: ExternalMtu > > > use_dhcp: false > > > primary: true > > > > > > Any suggestion would be grateful. > > > Regards, > > > Reza > > > > > > > I think we need more information to see what the issue is. > > - your deploy command? > > - content of network_data.yaml used (unless the default) > > - environment files related to network-isolation, network- > > environment, > > network-isolation? 
> > > > > > -- > > Harald > > > > From dtantsur at redhat.com Wed May 6 15:58:43 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 6 May 2020 17:58:43 +0200 Subject: [all] [qa] [oslo] Usage of testtools? Message-ID: Hi folks, Most OpenStack projects use testtools/fixtures/unittest2 for unit tests. It was immensely helpful in Python 2 times, but I wonder if we should migrate away now. I've hit at least these three bugs: https://github.com/testing-cabal/testtools/issues/235 https://github.com/testing-cabal/testtools/issues/275 https://github.com/testing-cabal/testtools/issues/144 We can invest time in fixing them, but I wonder if we should just migrate to the standard unittest (and move oslotest to it) now that we require Python >= 3.6. Thoughts? Dmitry -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed May 6 16:16:08 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 May 2020 18:16:08 +0200 Subject: [stein][neutron] gratuitous arp In-Reply-To: References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <7edad2593018ef020872188f207df1f725875b6e.camel@redhat.com> Message-ID: Hello, first of all, sorry for my insistence, but I would like to know whether OpenStack Train is also affected by this bug. Thanks & Regards Ignazio On Tue, 5 May 2020 at 13:39, Ignazio Cassano wrote: > Hello Sean, if you do not want to spend your time configuring a test > OpenStack environment, I am available to schedule a call where I can share > my desktop and we could test on Rocky and Stein. > Let me know if you can.
> Best Regards > Ignazio > > On Sat, 2 May 2020 at 17:40, Ignazio Cassano < > ignaziocassano at gmail.com> wrote: > >> Hello Sean, I am continuing my test (so you'll have to read a lot :-) ) >> If I understood correctly, the file neutron.py contains a patch for >> /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py for reading >> the configuration force_legacy_port_binding. >> If it is true, it returns false. >> I patched api.py and, by inserting a LOG.info call, I saw that it reads the >> variable, but it seems to do nothing and the migrated instance stops responding. >> Best Regards >> Ignazio >> >> >> >> On Sat, 2 May 2020 at 10:43, Ignazio Cassano < >> ignaziocassano at gmail.com> wrote: >> >>> Hello Sean, >>> I modified workarounds.py to add the consoleauth code, so now it >>> does not return errors during the live migration phase, as I wrote in my last >>> email. >>> Keep in mind my Stein is from an upgrade. >>> Sorry if I am not sending the full email history here, but if the message body is >>> too big the email needs moderator approval. >>> In any case, I added the following code: >>> >>> cfg.BoolOpt( >>> 'enable_consoleauth', >>> default=False, >>> deprecated_for_removal=True, >>> deprecated_since="18.0.0", >>> deprecated_reason=""" >>> This option has been added as deprecated originally because it is used >>> for avoiding a upgrade issue and it will not be used in the future. >>> See the help text for more details. >>> """, >>> help=""" >>> Enable the consoleauth service to avoid resetting unexpired consoles. >>> >>> Console token authorizations have moved from the ``nova-consoleauth`` >>> service >>> to the database, so all new consoles will be supported by the database >>> backend. >>> With this, consoles that existed before database backend support will be >>> reset. >>> For most operators, this should be a minimal disruption as the default >>> TTL of a >>> console token is 10 minutes.
>>> >>> Operators that have much longer token TTL configured or otherwise wish >>> to avoid >>> immediately resetting all existing consoles can enable this flag to >>> continue >>> using the ``nova-consoleauth`` service in addition to the database >>> backend. >>> Once all of the old ``nova-consoleauth`` supported console tokens have >>> expired, >>> this flag should be disabled. For example, if a deployment has >>> configured a >>> token TTL of one hour, the operator may disable the flag, one hour after >>> deploying the new code during an upgrade. >>> >>> .. note:: Cells v1 was not converted to use the database backend for >>> console token authorizations. Cells v1 console token authorizations >>> will >>> continue to be supported by the ``nova-consoleauth`` service and use of >>> the ``[workarounds]/enable_consoleauth`` option does not apply to >>> Cells v1 users. >>> >>> Related options: >>> >>> * ``[consoleauth]/token_ttl`` >>> """), >>> >>> Now the live migration starts and the instance is moved, but it >>> continues to be unreachable after live migration. >>> It starts to respond only when it initiates a connection (for example, >>> polling an NTP server). >>> If I disable chrony in the instance, it stops responding forever. >>> Best Regards >>> Ignazio >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtreinish at kortar.org Wed May 6 17:13:59 2020 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 6 May 2020 13:13:59 -0400 Subject: [all] [qa] [oslo] Usage of testtools? In-Reply-To: References: Message-ID: <20200506171359.GA167759@neo-zeong.kortar.org> On Wed, May 06, 2020 at 05:58:43PM +0200, Dmitry Tantsur wrote: > Hi folks, > > Most OpenStack projects use testtools/fixtures/unittest2 for unit tests. It > was immensely helpful in Python 2 times, but I wonder if we should migrate > away now.
I've hit at least these three bugs: > https://github.com/testing-cabal/testtools/issues/235 > https://github.com/testing-cabal/testtools/issues/275 > https://github.com/testing-cabal/testtools/issues/144 > > We can invest time in fixing them, but I wonder if we should just migrate > to the standard unittest (and move oslotest to it) now that we require > Python >= 3.6. > > Thoughts? > It's used extensively: all the fixtures usage is based on testtools, and all the attachments (mainly captured stdout, stderr, and logging) rely on the testtools result stream, which adds the concept of attachments. None of that is in stdlib unittest. -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From arne.wiebalck at cern.ch Wed May 6 17:50:27 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Wed, 6 May 2020 19:50:27 +0200 Subject: [baremetal-sig][ironic] Baremetal whitepaper round 3: doodle Message-ID: <44993048-bb11-3241-ce89-515251f001ab@cern.ch> Dear all, The white paper is taking shape ... but still needs a little more work. If you want to help and join the session(s) to discuss the current state and the next steps, please reply to the doodle before the end of this week: https://doodle.com/poll/zcprxptw6nk4diq7 Like last time, I will send out the call details once we have settled on the time slot(s). Thanks!
Arne -- Arne Wiebalck CERN IT From ildiko.vancsa at gmail.com Wed May 6 18:14:42 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 6 May 2020 20:14:42 +0200 Subject: Feedback on StarlingX pilot project Message-ID: <09A029FA-5A74-4ADA-8D5B-A2A82B88587E@gmail.com> Hi, I’m reaching out to you about the StarlingX project as they will start their confirmation process with the OSF staff and Board of Directors next week with a few pre-review and early feedback calls to prepare for their confirmation review on the next Board meeting in early June. A draft of the slide deck[1] they plan to present is available for reference. Per the confirmation guidelines[2], the OSF Board of directors will take into account the feedback from representative bodies of existing confirmed Open Infrastructure Projects (Airship, Kata, OpenStack and Zuul) when evaluating StarlingX for confirmation. Particularly worth calling out, guideline #4 "Open collaboration” asserts the following: Project behaves as a good neighbor to other confirmed and pilot projects. If you have any observations/interactions with the StarlingX project which could serve as useful examples for how this project does or does not meet this and other guidelines, please provide them on the etherpad[3]. If possible, include a citation with links to substantiate your feedback. I'll assemble this feedback and send it to the Board (via the public foundation mailing list) for consideration before their review meeting with the BoD. If you could give your feedback till the end of next week that would be highly appreciated. 
Thanks and Best Regards, Ildikó Váncsa [1] https://drive.google.com/file/d/16FPqzLT_3UH5AbZQJGg5Y9g4CN6AsRmx/view?usp=sharing [2] https://wiki.openstack.org/wiki/Governance/Foundation/OSFProjectConfirmationGuidelines [3] https://etherpad.opendev.org/p/openstack-feedback-starlingx-confirmation From ken at jots.org Wed May 6 18:46:28 2020 From: ken at jots.org (Ken D'Ambrosio) Date: Wed, 06 May 2020 14:46:28 -0400 Subject: On-disk image much bigger than VM shows with df. Message-ID: Hey, all. I've got a VM -- this is in Juno -- which du shows as using 152 GB on the compute node's disk. In the VM, itself, it says it's using 39 GB of an 80 GB volume. So, clearly, that sparse file has gotten crazy huge with un-reclaimed stuff. I seem to recall doing snapshots and then defragging them, "back in the day," but I have no idea if there's a way to do that to an instance *without* snapshotting. So: * If I snapshot it, can I then just defrag the snapshot and then make use of it to bring my system back? * Even better: is there a way to defrag the on-disk VM files without snapshotting? Thanks... -Ken From emilien at redhat.com Wed May 6 19:35:36 2020 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 6 May 2020 15:35:36 -0400 Subject: [tripleo] Proposing Sergii Golovatiuk and Jesse Petorius for tripleo-upgrade core In-Reply-To: References: <0da1a0c7-aab9-74da-a5ef-73ea924aa16f@redhat.com> Message-ID: Just a note for the future cores in tripleo-upgrades. They should be added to tripleo-upgrade-core group and not tripleo-core: https://review.opendev.org/#/admin/groups/1853,members I moved Sergii and Jesse to the right groups ;-) Also a friendly reminder that if you're promoted to core of a specific area of TripleO, we trust you that you'll use the +2 where it's appropriate. Thanks, On Wed, Apr 29, 2020 at 2:52 PM Wesley Hayutin wrote: > Top post... > > Welcome to the club guys :) > You are now core and your base belong to us. 
> > On Wed, Apr 29, 2020 at 3:54 AM Bogdan Dobrelya > wrote: > >> On 28.04.2020 11:47, Jiří Stránský wrote: >> > I'm late, but +1 :) >> >> +1 >> >> > >> > On 23. 04. 20 12:07, Jose Luis Franco Arza wrote: >> >> Hi, >> >> >> >> I would like to propose to Sergii Golovatiuk and Jesse Pretorius as >> core >> >> members for the tripleo-upgrade >> >> repository. This >> repository >> >> is being used mostly to automate the workflow to upgrade, update and >> >> fast-forward TripleO. And it is being used in our CI automation, >> >> downstream >> >> and upstream. >> >> >> >> Both, Sergii >> >> < >> https://review.opendev.org/#/q/project:openstack/tripleo-upgrade+owner:%22Sergii+Golovatiuk+%253Csgolovat%2540redhat.com%253E%22> >> >> >> >> >> and Jesse >> >> < >> https://review.opendev.org/#/q/project:openstack/tripleo-upgrade+owner:%22Jesse+Pretorius+(odyssey4me)+%253Cjesse%2540odyssey4.me%253E%22>, >> >> >> >> >> have been very active in the repository with patches and reviews. And >> >> they >> >> have a good understanding of the code, as well as its integration with >> >> tripleo-quickstart and infrared. >> >> >> >> Please, feel free to add your votes into the thread. >> >> >> >> Thanks, >> >> >> >> José Luis >> >> >> > >> > >> >> >> -- >> Best regards, >> Bogdan Dobrelya, >> Irc #bogdando >> >> >> -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gagehugo at gmail.com Wed May 6 19:41:10 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Wed, 6 May 2020 14:41:10 -0500 Subject: [OSSA-2020-003] Keystone: Keystone does not check signature TTL of the EC2 credential auth method (CVE PENDING) Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 ====================================================================================== OSSA-2020-003: Keystone does not check signature TTL of the EC2 credential auth method ====================================================================================== :Date: May 06, 2020 :CVE: Pending Affects ~~~~~~~ - - Keystone: <15.0.1, ==16.0.0 Description ~~~~~~~~~~~ kay reported a vulnerability with keystone's EC2 API. Keystone doesn't have a signature TTL check for AWS signature V4 and an attacker can sniff the auth header, then use it to reissue an openstack token an unlimited number of times. Patches ~~~~~~~ - - https://review.opendev.org/725385 (Rocky) - - https://review.opendev.org/725069 (Stein) - - https://review.opendev.org/724954 (Train) - - https://review.opendev.org/724746 (Ussuri) - - https://review.opendev.org/724124 (Victoria) Credits ~~~~~~~ - - kay (CVE Pending) References ~~~~~~~~~~ - - https://launchpad.net/bugs/1872737 - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=Pending Notes ~~~~~ - - The stable/rocky branch is under extended maintenance and will receive no new point releases, but a patch for it is provided as a courtesy. 
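[Editor's note] The class of fix referenced in the patches above (rejecting signed requests whose timestamp falls outside a freshness window) can be illustrated in a few lines of Python. This is a sketch only, not keystone's actual code; the five-minute TTL and the use of the SigV4 `YYYYMMDDTHHMMSSZ` timestamp format are assumptions for the example:

```python
from datetime import datetime, timedelta, timezone

# Assumed freshness window for the sketch, not keystone's actual value.
SIGNATURE_TTL = timedelta(minutes=5)

def is_request_fresh(timestamp_str, now=None):
    """Check that a SigV4-style request timestamp is within the TTL.

    timestamp_str uses the AWS SigV4 'YYYYMMDDTHHMMSSZ' format.
    """
    now = now or datetime.now(timezone.utc)
    ts = datetime.strptime(timestamp_str, "%Y%m%dT%H%M%SZ").replace(
        tzinfo=timezone.utc)
    # A replayed (sniffed) auth header carries a stale timestamp, so it
    # is rejected here even though its signature still verifies.
    return abs(now - ts) <= SIGNATURE_TTL
```

With a check of this shape, a sniffed Authorization header stops being replayable once its embedded timestamp ages past the TTL, which is the attack the advisory describes.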
-----BEGIN PGP SIGNATURE----- iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl6zEjwACgkQ56j9K3b+ vRFejhAAvzq3MBwKGXIKsJxQmwVS0RxVFifTAfnKIjBGskG3knWkQHopY0IcmwoZ 3Kv2AnRgFVBuQpZ0t9Y3S3U7KRI63FT+kzA3gy9sB+h7rdqzquxejXvljRMGJlex WRCOQwRP4prFpzpUqzBg9/bIAyWpkrjJIvz7iJ9U3z6MbrZIjV+YEZ3JIRQTdMUj MajgwJ4EDynkh8trm63n7Gyuvq8ukj1FCrG1APWJi96HhwNz6XwiqXIWci4CTaEW sY9v8luETMCyv+nY2pt9IF8wXOaJKJXPTilf6sisjN2zDq+UWgsxEC0sp3h09tnZ m6cy3OvUQeDmdJVQ/VNsfUTeRYRvYri2u44FaOUBjsNxeZca1U4MCVkAiN9BBzkg k1Xb8zgGoXaytT/lzzyr67h6ZghKm6cnSUktWnX56847byOMPi/g9q1cu0edUwwC 7SDaQ08JbsEstiXtPVBhatTLxbjlNy5eql6NaZmFQatYJAQKZsasvwV4YBv290mu OsVHUEqjmYk4b4CZNPQC2681CDtAQpiLuasYiLnxC6I+zBTwfP+6tzP0xVHW4woi 4Jhl/watZMudrtMS3YoOmwZ4iFNJRzQcDWmiAr0CZiC0NGamLjvHWHRslnvmhy92 kSGWLilaMD5vBODXVY82lQHrbl96dPRbpe8/z29sALsEs6aNFYk= =qyBV -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Wed May 6 19:48:59 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Wed, 6 May 2020 14:48:59 -0500 Subject: [OSSA-2020-004] Keystone: Keystone credential endpoints allow owner modification and are not protected from a scoped context (CVE PENDING) Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 ================================================================================================================= OSSA-2020-004: Keystone credential endpoints allow owner modification and are not protected from a scoped context ================================================================================================================= :Date: May 06, 2020 :CVE: Pending Affects ~~~~~~~ - - Keystone: <15.0.1, ==16.0.0 Description ~~~~~~~~~~~ kay reported two vulnerabilities in keystone's EC2 credentials API. Any authenticated user could create an EC2 credential for themselves for a project that they have a specified role on, then perform an update to the credential user and project, allowing them to masquerade as another user. 
(CVE #1 PENDING) Any authenticated user within a limited scope (trust/oauth/application credential) can create an EC2 credential with an escalated permission, such as obtaining admin while the user is on a limited viewer role. (CVE #2 PENDING) Both of these vulnerabilities potentially allow a malicious user to act as admin on a project that another user has the admin role on, which can effectively grant the malicious user global admin privileges. Patches ~~~~~~~ - - https://review.opendev.org/725895 (Rocky) - - https://review.opendev.org/725893 (Stein) - - https://review.opendev.org/725891 (Train) - - https://review.opendev.org/725888 (Ussuri) - - https://review.opendev.org/725886 (Victoria) Credits ~~~~~~~ - - kay (CVE Pending) References ~~~~~~~~~~ - - https://launchpad.net/bugs/1872733 - - https://launchpad.net/bugs/1872735 - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=Pending Notes ~~~~~ - - The stable/rocky branch is under extended maintenance and will receive no new point releases, but a patch for it is provided as a courtesy. -----BEGIN PGP SIGNATURE----- iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl6zE70ACgkQ56j9K3b+ vREQsBAAnHZLyrbjSwu7/CEdDVfb0sQZfDvyuXMttzouXQ6ZwEgLFKzc/aFWMjru loyst9jAx2pJzvxDfMYO11oU0M5tYFCFxhKsVvu+3ggbcNHeov1s25bPkxE7A2j7 IYJj9b+bbieYVj1ru3FJjDl3iTae4K73DeHNBCdxTSeahJZdya7hiboA1VJFt4p7 fNqU3+szsYt/vwspPBi7x+xnZszIMaUw8tVgxzB4KVD6YXbDR9Mp7itH77kGdn8l e3OpnURvfaIkPbK6fqE6jjwjQEL/6+Ahffaf4KqvsdjbAcdQRpK0UQrBX+n6DIWd TRwV/W7bEy64HrC16W78fcBlegRmEUUM4xNmdll3lwUS5KqfEeM3vXU4Ksfe9tQ2 8fDU1hDALcC55+2CMMrdFfmX/MBSTz0HVmP4snaGuoXBL/iQz22OmekFKC1tmXxb +vAtOUBsdzphRZn9KWvPIHOFGeuepWb9W0eN594JT2pdHfniLj6EaPrBaN63l7M/ pu0DTPygN5IdUXv6v/vquQZp50CaN59okmXDNiFkBeHsfaAqhdyjJjRaYvyU62OA apjVam8/f2HM0RC0vvpIqv0z0kU55NPCo61dlMZPg6U9JiQd2PzBqvEtDF1lyByF vz5e+r9fmtRcgCJIYr0Z7VlOlSMONpITN03oICaexieDTEXDXHc= =lSDG -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
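[Editor's note] The first issue above boils down to an update endpoint that fails to treat ownership fields as immutable. As a generic illustration of that class of guard (not keystone's actual patch; the plain-dict credential shape and field names here are made up for the sketch):

```python
def apply_credential_update(stored, update):
    """Apply an update to a stored credential, refusing owner changes.

    `stored` and `update` are plain dicts in this sketch; in a real
    service they would be ORM objects and a validated request body.
    """
    for field in ("user_id", "project_id"):
        # Ownership fields must never change: re-owning a credential is
        # exactly the masquerade the advisory describes.
        if field in update and update[field] != stored[field]:
            raise ValueError(
                "credential %s is immutable; create a new credential "
                "instead of re-owning this one" % field)
    merged = dict(stored)
    merged.update(update)
    return merged
```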
URL: From whayutin at redhat.com Wed May 6 19:50:43 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 6 May 2020 13:50:43 -0600 Subject: [tripleo] Proposing Sergii Golovatiuk and Jesse Petorius for tripleo-upgrade core In-Reply-To: References: <0da1a0c7-aab9-74da-a5ef-73ea924aa16f@redhat.com> Message-ID: On Wed, May 6, 2020 at 1:36 PM Emilien Macchi wrote: > Just a note for the future cores in tripleo-upgrades. > They should be added to tripleo-upgrade-core group and not tripleo-core: > https://review.opendev.org/#/admin/groups/1853,members > > I moved Sergii and Jesse to the right groups ;-) > Also a friendly reminder that if you're promoted to core of a specific > area of TripleO, we trust you that you'll use the +2 where it's appropriate. > > Thanks, > FYI.. I've updated the documentation on the gerrit groups maintained for TripleO https://review.opendev.org/#/c/725971/1/doc/source/contributor/contributions.rst Thanks > > On Wed, Apr 29, 2020 at 2:52 PM Wesley Hayutin > wrote: > >> Top post... >> >> Welcome to the club guys :) >> You are now core and your base belong to us. >> >> On Wed, Apr 29, 2020 at 3:54 AM Bogdan Dobrelya >> wrote: >> >>> On 28.04.2020 11:47, Jiří Stránský wrote: >>> > I'm late, but +1 :) >>> >>> +1 >>> >>> > >>> > On 23. 04. 20 12:07, Jose Luis Franco Arza wrote: >>> >> Hi, >>> >> >>> >> I would like to propose to Sergii Golovatiuk and Jesse Pretorius as >>> core >>> >> members for the tripleo-upgrade >>> >> repository. This >>> repository >>> >> is being used mostly to automate the workflow to upgrade, update and >>> >> fast-forward TripleO. And it is being used in our CI automation, >>> >> downstream >>> >> and upstream. 
>>> >> >>> >> Both, Sergii >>> >> < >>> https://review.opendev.org/#/q/project:openstack/tripleo-upgrade+owner:%22Sergii+Golovatiuk+%253Csgolovat%2540redhat.com%253E%22> >>> >>> >> >>> >> and Jesse >>> >> < >>> https://review.opendev.org/#/q/project:openstack/tripleo-upgrade+owner:%22Jesse+Pretorius+(odyssey4me)+%253Cjesse%2540odyssey4.me%253E%22>, >>> >>> >> >>> >> have been very active in the repository with patches and reviews. And >>> >> they >>> >> have a good understanding of the code, as well as its integration with >>> >> tripleo-quickstart and infrared. >>> >> >>> >> Please, feel free to add your votes into the thread. >>> >> >>> >> Thanks, >>> >> >>> >> José Luis >>> >> >>> > >>> > >>> >>> >>> -- >>> Best regards, >>> Bogdan Dobrelya, >>> Irc #bogdando >>> >>> >>> > > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Wed May 6 19:53:24 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Wed, 6 May 2020 14:53:24 -0500 Subject: [OSSA-2020-005] Keystone: OAuth1 request token authorize silently ignores roles parameter (CVE PENDING) Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 ============================================================================== OSSA-2020-005: OAuth1 request token authorize silently ignores roles parameter ============================================================================== :Date: May 06, 2020 :CVE: Pending Affects ~~~~~~~ - - Keystone: <15.0.1, ==16.0.0 Description ~~~~~~~~~~~ kay reported a vulnerability in Keystone's OAuth1 Token API. The list of roles provided for an OAuth1 access token are ignored, so when an OAuth1 access token is used to request a keystone token, the keystone token will contain every role assignment the creator had for the project instead of the provided subset of roles. 
This results in the provided keystone token having more role assignments than the creator intended, possibly giving unintended escalated access. Patches ~~~~~~~ - - https://review.opendev.org/725894 (Rocky) - - https://review.opendev.org/725892 (Stein) - - https://review.opendev.org/725890 (Train) - - https://review.opendev.org/725887 (Ussuri) - - https://review.opendev.org/725885 (Victoria) Credits ~~~~~~~ - - kay (CVE Pending) References ~~~~~~~~~~ - - https://launchpad.net/bugs/1873290 - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=Pending Notes ~~~~~ - - The stable/rocky branch is under extended maintenance and will receive no new point releases, but a patch for it is provided as a courtesy. -----BEGIN PGP SIGNATURE----- iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl6zFWsACgkQ56j9K3b+ vRFDnhAArgXdQUnCyckPQciBvxMxQvqhCEhzGH0aQNAmMLaImYUwFhFVVO0DlcNb kt/ynLQLdyi3YnCz1x4VhUXaCh4Rhi9pYkU4LKa/tvJj6anrCSLHmuDD52idkZeB sFslgkh/BGfdM4HcuPLhs4SSaZpI53ASitiOhyjBIN/DmpLUbZgmJ1iz3FfQ3cTB wtjYI4jGCCMq+4POSozWMzeYdL3JzR264jBCRrCw1ErIPjpF4KSOFaH5vqakBnzw Ot7KR7s7FmIwU7LhCuvjgLW3rxwE1g5bz+Qd/97rC1bTx/iPHklQjMP5SoGwmjta Kx1prUaQqFys5Bw93e0cj1Fwn0zNHUjqLs4LZscNbyGRyAZCPREeg2quwBxVUNk9 D6jxW3J2LYIu+ictVV5fnBQd4/+NtxM8ofLDM03QZouUpkNfCHAmW81BYqd2+Pii VbJi5Litz+DHLrAyh0O4zD/PBc5+5zxB2EXEDVEJitqaxQWfogJwJzGe89ULom0I VXMuYOvqaLV9f2JIG6SEBiKrfaUhSgoHTrmznt82KOlsOBMamQUaj5iTqDoDzPD2 LVB2WLABj1cFZsnTFAec1qKwEPXuT0p3Dsb7eyvwsq5aJYS5I2bjK6Q1WcCcqzJF 1b+v0iqW0Qu+Hk4fwvcrqqQMDZ7Q982tT+B7sU8xV4jYBtFLseQ= =iEFE -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed May 6 21:17:54 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 6 May 2020 22:17:54 +0100 Subject: [keystone] scope enforcement ready for prime time? 
Message-ID: Hi, I have a use case which I think could be fulfilled by scoped tokens: Allow an operator to delegate the ability to an actor to create users within a domain, without giving them the keys to the cloud. To do this, I understand I can assign a user the admin role for a domain. It seems that for this to work, I need to set [oslo_policy] enforce_scope = True in keystone.conf. The Train cycle highlights suggest this is now fully implemented in keystone, but most other projects lack support for scopes. Does this mean that in the above case, the user would have full cloud admin privileges in other services that lack support for scopes? i.e. while I expect it's safe to enable scope enforcement in keystone, is it "safe" to use it? Cheers, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed May 6 21:45:31 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 6 May 2020 17:45:31 -0400 Subject: [tc] proposed time poll Message-ID: Hi everyone, I've created the following doodle to help us pick a time for the upcoming PTG: https://doodle.com/poll/3ywuywu825hu7v23 Please post the times that work for you. Thanks, Mohammed -- Mohammed Naser VEXXHOST, Inc. From sean.mcginnis at gmx.com Wed May 6 22:28:29 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 6 May 2020 17:28:29 -0500 Subject: [ptl][release] Ussuri final RC deadline reminder Message-ID: <25e3d935-235e-714d-7c63-2ea4a2f68f63@gmx.com> Hey everyone, We need several days before the final coordinated release to make sure no major issues are found, downstream packagers have time to get things lined up, and to make sure that things are stable for next week's Ussuri release date. To that end, and as a reminder to everyone, tomorrow is the deadline for any final RC releases before we enter the freeze. This is the last chance to get anything released and included in the initial Ussuri release.
Looking through merged commits, there are a few repos that have merged code that the team's may want to consider releasing in a final RC. Things we do NOT need to get released for the final release: - .gitreview updates for the stable/ussuri branch - Upper constraints updates for the stable/ussuri branch - Tox and/or Zuul configuration changes Things you may want to include: - Translations - Bugfixes backported to stable/ussuri Also keep in mind that a stable release can be done quickly after the coordinated release and the freeze is over. So some minor issues are OK if they don't make it in. But if things are ready, it may be worth picking up anything that has merged so it is in those first official versions. See below for a listing of some repos that have things merged in stable/ussuri that look like they may be worth releasing one more final RC. Thanks! Sean [ Unreleased changes in openstack/compute-hyperv (stable/ussuri) ] Changes between 10.0.0.0rc1 and d449954 * d449954 2020-04-24 11:51:50 +0000 Address driver API changes (finish_migration) [ Unreleased changes in openstack/heat (stable/ussuri) ] Changes between 14.0.0.0rc1 and bceca6189 * 5bc6787a8 2020-04-28 07:13:57 +0000 Imported Translations from Zanata [ Unreleased changes in openstack/keystone (stable/ussuri) ] Changes between 17.0.0.0rc1 and 0e463a78d * 8d5becbe4 2020-04-30 20:25:13 +0000 Check timestamp of signed EC2 token request * aef912187 2020-04-26 07:04:33 +0000 Imported Translations from Zanata [ Unreleased changes in openstack/magnum (stable/ussuri) ] Changes between 10.0.0.0rc1 and c6e536ae * c6e536ae 2020-05-04 21:50:23 +0000 [k8s] Fix docker storage of Fedora CoreOS * abafef16 2020-05-01 09:15:52 +0000 Use ensure-* roles | * 45563f37 2020-05-01 18:57:43 +0000 [k8s] Upgrade k8s dashboard version to v2.0.0 |/ * c2f4b9b5 2020-04-27 23:01:40 +0000 [k8s] Fix no IP address in api_address [ Unreleased changes in openstack/manila (stable/ussuri) ] Changes between 10.0.0.0rc1 and 653092e5 * 
cec83f8c 2020-04-27 20:32:57 +0000 [CI] Fix grenade share networks test | * 3ea582e1 2020-04-28 18:59:44 +0000 Update share-manager behavior for shrink share operation |/ | * 541f9d8d 2020-04-30 05:28:27 +0000 [Unity] Fix unit test issue |/ * bdb955a1 2020-04-26 16:51:40 -0700 [grenade][stable/ussuri only] Switch base version [ Unreleased changes in openstack/monasca-agent (stable/ussuri) ] Changes between 3.0.0.0rc1 and 5eb3b78 * eee191a 2020-04-27 18:54:50 +0000 Do not copy /sbin/ip to /usr/bin/monasa-agent-ip [ Unreleased changes in openstack/networking-hyperv (stable/ussuri) ] Changes between 8.0.0.0rc1 and 6da239c * 6da239c 2020-04-27 06:57:06 +0000 Pick up security group RPC API changes [ Unreleased changes in openstack/neutron-vpnaas (stable/ussuri) ] Changes between 16.0.0.0rc1 and 3bf327fd5 * 4bc39fa53 2020-05-03 22:34:26 +0000 Fix unsubscriptable-object error [ Unreleased changes in openstack/neutron (stable/ussuri) ] Changes between 16.0.0.0rc1 and c06cb95b6f * 1e81440b63 2020-05-05 15:50:41 +0000 Monkey patch original current_thread _active | * 54ffa37538 2020-05-04 15:20:08 +0000 Add Octavia file in devstack/lib |/ * cca0aebb26 2020-05-01 17:23:28 +0200 Add Prelude for Ussuri release notes. 
* e3e9a90f75 2020-04-26 07:31:19 +0000 Imported Translations from Zanata [ Unreleased changes in openstack/octavia-dashboard (stable/ussuri) ] Changes between 5.0.0.0rc1 and 57e9ea7 * 57e9ea7 2020-05-06 08:27:54 +0000 Imported Translations from Zanata * 97d1c6d 2020-04-28 08:47:09 +0000 Imported Translations from Zanata | * c53fa6a 2020-04-26 07:52:04 +0000 Imported Translations from Zanata [ Unreleased changes in openstack/sahara-dashboard (stable/ussuri) ] Changes between 12.0.0.0rc1 and 0fa9e96 * 0fa9e96 2020-05-06 09:15:35 +0000 Imported Translations from Zanata [ Unreleased changes in openstack/sahara (stable/ussuri) ] Changes between 12.0.0.0rc1 and a6ee5223 * a6ee5223 2020-05-06 13:53:41 +0000 Monkey patch original current_thread _active * 3db9f750 2020-04-26 08:17:19 +0000 Imported Translations from Zanata [ Unreleased changes in openstack/searchlight-ui (stable/ussuri) ] Changes between 8.0.0.0rc1 and 75216df * 75216df 2020-05-06 09:32:28 +0000 Imported Translations from Zanata [ Unreleased changes in openstack/searchlight (stable/ussuri) ] Changes between 8.0.0.0rc1 and 395df50 * 824e749 2020-04-26 08:42:41 +0000 Imported Translations from Zanata [ Unreleased changes in openstack/senlin (stable/ussuri) ] Changes between 9.0.0.0rc1 and dd528700 * f1e20d12 2020-04-26 08:48:31 +0000 Imported Translations from Zanata [ Unreleased changes in openstack/watcher (stable/ussuri) ] Changes between 4.0.0.0rc1 and 870e6d75 * 870e6d75 2020-04-26 09:05:18 +0000 Imported Translations from Zanata From anlin.kong at gmail.com Wed May 6 22:37:40 2020 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 7 May 2020 10:37:40 +1200 Subject: [keystone] How to create a session using trust-scoped token Message-ID: Hi keystone team, I met with an issue that can't perform openstack operations using openstack client lib by providing a trust scoped token. However, I can use that token to send HTTP request directly. 
Here is my code and the error message, http://dpaste.com/1N6AX0R I asked several times in #openstack-keystone IRC channel, but got no response. --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 6 22:44:27 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 May 2020 22:44:27 +0000 Subject: [ptl][release] Ussuri final RC deadline reminder In-Reply-To: <25e3d935-235e-714d-7c63-2ea4a2f68f63@gmx.com> References: <25e3d935-235e-714d-7c63-2ea4a2f68f63@gmx.com> Message-ID: <20200506224427.4pbfjpalnp32ykzp@yuggoth.org> On 2020-05-06 17:28:29 -0500 (-0500), Sean McGinnis wrote: [...] > [ Unreleased changes in openstack/keystone (stable/ussuri) ] > > Changes between 17.0.0.0rc1 and 0e463a78d > > * 8d5becbe4 2020-04-30 20:25:13 +0000 Check timestamp of signed EC2 > token request > * aef912187 2020-04-26 07:04:33 +0000 Imported Translations from Zanata [...] In addition to 8d5becbe4 there are two more critical security fixes in flight for which advisories have already been distributed, and the Keystone team will most likely want these included in a 17.0.0.0rc2 so we don't release with known vulnerabilities: https://review.opendev.org/725887 https://review.opendev.org/725888 Hopefully they'll be merged by tomorrow's final RC deadline. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From knikolla at bu.edu Wed May 6 23:19:32 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Wed, 6 May 2020 23:19:32 +0000 Subject: [keystone] scope enforcement ready for prime time? 
In-Reply-To: References: Message-ID: <4005BE1E-DA1A-4F64-9FAA-8992242C1A50@bu.edu> Hi Mark, If the API in that OpenStack service doesn't have support for system/domain scopes and only checks for the admin role, the user would have cloud admin on that specific OpenStack API. Until all the services that you have deployed in your cloud properly support scopes, admin role on anything still gives cloud admin. Best, Kristi > On May 6, 2020, at 5:17 PM, Mark Goddard wrote: > > Hi, > > I have a use case which I think could be fulfilled by scoped tokens: > > Allow an operator to delegate the ability to an actor to create users within a domain, without giving them the keys to the cloud. > > To do this, I understand I can assign a user the admin role for a domain. It seems that for this to work, I need to set [oslo_policy] enforce_scope = True in keystone.conf. > > The Train cycle highlights suggest this is now fully implemented in keystone, but other most projects lack support for scopes. Does this mean that in the above case, the user would have full cloud admin privileges in other services that lack support for scopes? i.e. while I expect it's safe to enable scope enforcement in keystone, is it "safe" to use it? > > Cheers, > Mark From knikolla at bu.edu Wed May 6 23:30:43 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Wed, 6 May 2020 23:30:43 +0000 Subject: [keystone] How to create a session using trust-scoped token In-Reply-To: References: Message-ID: <4FBA72CC-BB5D-4FF4-8CE9-6FDB67E72D15@bu.edu> Hi Lingxian, The issue here is that keystoneauth usually treats all authentication methods the same, and uses them to authenticate and get another token. As you see the error in your log, trust scoped tokens can't create another token, so the authentication fails. Some clients provide a mechanism for you to input the token and endpoint_url directly without requiring keystoneauth. See below and try that. 
https://github.com/openstack/python-neutronclient/blob/master/neutronclient/client.py#L61-L62 Best, Kristi > On May 6, 2020, at 6:37 PM, Lingxian Kong wrote: > > Hi keystone team, > > I met with an issue that can't perform openstack operations using openstack client lib by providing a trust scoped token. However, I can use that token to send HTTP request directly. Here is my code and the error message, http://dpaste.com/1N6AX0R > > I asked several times in #openstack-keystone IRC channel, but got no response. > > --- > Lingxian Kong > Senior Software Engineer > Catalyst Cloud > www.catalystcloud.nz From kennelson11 at gmail.com Thu May 7 00:14:53 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 6 May 2020 17:14:53 -0700 Subject: [First Contact] [SIG] [PTG] Discussion Topic Collection Message-ID: Hello! I signed us up for some time (just an hour) on Monday and figured I should create an etherpad[1] to collect topics for us to discuss. Feel free to add your ideas! Can't wait to see you all there! -Kendall (diablo_rojo) [1] https://etherpad.opendev.org/p/FC-SIG-V-PTG -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Thu May 7 01:38:12 2020 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 7 May 2020 13:38:12 +1200 Subject: [keystone] How to create a session using trust-scoped token In-Reply-To: <4FBA72CC-BB5D-4FF4-8CE9-6FDB67E72D15@bu.edu> References: <4FBA72CC-BB5D-4FF4-8CE9-6FDB67E72D15@bu.edu> Message-ID: Thanks for your reply, Kristi. I will try and get back to you if needed. --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz On Thu, May 7, 2020 at 11:30 AM Nikolla, Kristi wrote: > Hi Lingxian, > > The issue here is that keystoneauth usually treats all authentication > methods the same, and uses them to authenticate and get another token. As > you see the error in your log, trust scoped tokens can't create another > token, so the authentication fails. 
> > Some clients provide a mechanism for you to input the token and > endpoint_url directly without requiring keystoneauth. See below and try > that. > > > https://github.com/openstack/python-neutronclient/blob/master/neutronclient/client.py#L61-L62 > > Best, > Kristi > > > On May 6, 2020, at 6:37 PM, Lingxian Kong wrote: > > > > Hi keystone team, > > > > I met with an issue that can't perform openstack operations using > openstack client lib by providing a trust scoped token. However, I can use > that token to send HTTP request directly. Here is my code and the error > message, http://dpaste.com/1N6AX0R > > > > I asked several times in #openstack-keystone IRC channel, but got no > response. > > > > --- > > Lingxian Kong > > Senior Software Engineer > > Catalyst Cloud > > www.catalystcloud.nz > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Thu May 7 05:47:26 2020 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 7 May 2020 17:47:26 +1200 Subject: [keystone] How to create a session using trust-scoped token In-Reply-To: <4FBA72CC-BB5D-4FF4-8CE9-6FDB67E72D15@bu.edu> References: <4FBA72CC-BB5D-4FF4-8CE9-6FDB67E72D15@bu.edu> Message-ID: On Thu, May 7, 2020 at 11:30 AM Nikolla, Kristi wrote: > The issue here is that keystoneauth usually treats all authentication > methods the same, and uses them to authenticate and get another token. As > you see the error in your log, trust scoped tokens can't create another > token, so the authentication fails. > > Some clients provide a mechanism for you to input the token and > endpoint_url directly without requiring keystoneauth. See below and try > that. > > > https://github.com/openstack/python-neutronclient/blob/master/neutronclient/client.py#L61-L62 Thanks again Kristi, it does work. --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz -------------- next part -------------- An HTML attachment was scrubbed... 
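[Archive editor's note] For readers landing on this thread from the archives: the approach Kristi describes, and that Lingxian confirms works, comes down to using the already-issued trust-scoped token directly against a service endpoint instead of letting keystoneauth try to exchange it for a new token. A minimal sketch of that idea follows; the endpoint URL and token value are placeholder assumptions, not details from the thread, and the linked neutronclient `token`/`endpoint_url` path does essentially this internally:

```python
# Sketch: call an OpenStack API with a pre-obtained token, bypassing
# keystoneauth (which would try, and fail, to re-authenticate a
# trust-scoped token). Endpoint and token values here are placeholders.

def build_request(endpoint_url, path, token):
    """Return (url, headers) for a raw OpenStack API call.

    Keystone-issued tokens are sent in the X-Auth-Token header; no extra
    authentication round-trip happens, which is why this works for
    trust-scoped tokens that cannot be exchanged for new tokens.
    """
    url = endpoint_url.rstrip("/") + "/" + path.lstrip("/")
    headers = {"X-Auth-Token": token, "Accept": "application/json"}
    return url, headers

url, headers = build_request(
    "https://neutron.example.com:9696",  # hypothetical endpoint URL
    "/v2.0/networks",
    "gAAAA-example-trust-scoped-token",  # placeholder token value
)
# The pair can then be fed to any HTTP client, e.g.
# urllib.request.Request(url, headers=headers)
print(url)
```

The same (url, headers) pair works with plain `requests` or `urllib`, which matches Lingxian's observation that sending the token in a raw HTTP request succeeds where session-based clients fail.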
URL: From mark at stackhpc.com Thu May 7 10:14:29 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 7 May 2020 11:14:29 +0100 Subject: [kolla] Kolla klub meeting today Message-ID: Hi, Just a reminder that we have the third Kolla Klub meeting today at 3pm UTC. We have Gaël Therond and Дмитрий Грачев (Dmitry Grachev) giving case studies. Meeting notes, connection info and agenda: https://docs.google.com/document/d/1EwQs2GXF-EvJZamEx9vQAOSDB5tCjsDCJyHQN5_4_Sw Cheers, Mark From dtantsur at redhat.com Thu May 7 10:23:16 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 7 May 2020 12:23:16 +0200 Subject: [all] [qa] [oslo] Usage of testtools? In-Reply-To: <20200506171359.GA167759@neo-zeong.kortar.org> References: <20200506171359.GA167759@neo-zeong.kortar.org> Message-ID: On Wed, May 6, 2020 at 7:17 PM Matthew Treinish wrote: > On Wed, May 06, 2020 at 05:58:43PM +0200, Dmitry Tantsur wrote: > > Hi folks, > > > > Most OpenStack projects use testtools/fixtures/unittest2 for unit tests. > It > > was immensely helpful in Python 2 times, but I wonder if we should > migrate > > away now. I've hit at least these three bugs: > > https://github.com/testing-cabal/testtools/issues/235 > > https://github.com/testing-cabal/testtools/issues/275 > > https://github.com/testing-cabal/testtools/issues/144 > > > > We can invest time in fixing them, but I wonder if we should just migrate > > to the standard unittest (and move oslotest to it) now that we require > > Python >= 3.6. > > > > Thoughts? > > > > It's used extensively, all the fixtures usage is based on testtools, > additionally all the attachments (mainly captured stdout, stderr, and > logging) > rely on testtools result stream which adds the concept of attachments. None > of that is in stdlib unittest. > Then should we invest time to fix the testtools bugs? I understand that folks use stuff that is not in stdlib, but it's pretty annoying to not be able to use stuff that IS in stdlib. 
Dmitry > > -Matt Treinish > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Thu May 7 13:29:35 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 7 May 2020 15:29:35 +0200 Subject: [ironic] II SPUC - The Sanity Preservation Un-Conference In-Reply-To: References: Message-ID: It seems that most people who voted can make it this Friday at 2pm UTC. Shall we do it? On Thu, Apr 23, 2020 at 7:15 PM Iury Gregory wrote: > Hello everyone \o/ > > We are going for the second edition of SPUC! > For those who doesn't know what is SPUC check [0]. > There is only 3 things you need to do: > 1) Fill out the doodle in [1] *, it has options for the next two weeks > since May 1st is a holiday. > 2) Brainstorm some ideas of things you might want to talk > about/present or LEARN! * > 3) Dust off your silly hats! * > > * From Julia's email =) [0] > > Thank you! > > [0] > http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013521.html > [1] https://doodle.com/poll/2q5zmv3g6uy2475e > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu May 7 13:35:22 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 7 May 2020 15:35:22 +0200 Subject: [neutron] Agenda of the 08.05.2020 Drivers meeting Message-ID: <20200507133522.35nge2cjck5i2c4e@skaplons-mac> Hi, For tomorrow's drivers meeting we have 2 new RFEs to be discussed: * https://bugs.launchpad.net/neutron/+bug/1875516 - [RFE] Allow sharing security groups as read-only * https://bugs.launchpad.net/neutron/+bug/1873091/ - [RFE] Neutron ports dns_assignment does not match the designate DNS records for Neutron port See you tomorrow at the meeting.
-- Slawek Kaplonski Senior software engineer Red Hat From iurygregory at gmail.com Thu May 7 13:50:40 2020 From: iurygregory at gmail.com (Iury Gregory) Date: Thu, 7 May 2020 15:50:40 +0200 Subject: [ironic] II SPUC - The Sanity Preservation Un-Conference In-Reply-To: References: Message-ID: Yup! Let's do this \o/ Em qui., 7 de mai. de 2020 às 15:32, Dmitry Tantsur escreveu: > It seems that most people who voted can make it this Friday at 2pm UTC. > Shall we do it? > > On Thu, Apr 23, 2020 at 7:15 PM Iury Gregory > wrote: > >> Hello everyone \o/ >> >> We are going for the second edition of SPUC! >> For those who doesn't know what is SPUC check [0]. >> There is only 3 things you need to do: >> 1) Fill out the doodle in [1] *, it has options for the next two weeks >> since May 1st is a holiday. >> 2) Brainstorm some ideas of things you might want to talk >> about/present or LEARN! * >> 3) Dust off your silly hats! * >> >> * From Julia's email =) [0] >> >> Thank you! >> >> [0] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013521.html >> [1] https://doodle.com/poll/2q5zmv3g6uy2475e >> >> -- >> >> >> *Att[]'sIury Gregory Melo Ferreira * >> *MSc in Computer Science at UFCG* >> *Software Engineer at Red Hat Czech* >> *Social*: https://www.linkedin.com/in/iurygregory >> *E-mail: iurygregory at gmail.com * >> > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juliaashleykreger at gmail.com Thu May 7 14:02:26 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 7 May 2020 07:02:26 -0700 Subject: [ironic] II SPUC - The Sanity Preservation Un-Conference In-Reply-To: References: Message-ID: There was some further discussion among the team and it seems like we're going to start at 3PM. We can use my bluejeans. https://bluejeans.com/u/jkreger -Julia On Thu, May 7, 2020 at 6:53 AM Iury Gregory wrote: > > Yup! Let's do this \o/ > > Em qui., 7 de mai. de 2020 às 15:32, Dmitry Tantsur escreveu: >> >> It seems that most people who voted can make it this Friday at 2pm UTC. Shall we do it? >> >> On Thu, Apr 23, 2020 at 7:15 PM Iury Gregory wrote: >>> >>> Hello everyone \o/ >>> >>> We are going for the second edition of SPUC! >>> For those who doesn't know what is SPUC check [0]. >>> There is only 3 things you need to do: >>> 1) Fill out the doodle in [1] *, it has options for the next two weeks since May 1st is a holiday. >>> 2) Brainstorm some ideas of things you might want to talk >>> about/present or LEARN! * >>> 3) Dust off your silly hats! * >>> >>> * From Julia's email =) [0] >>> >>> Thank you! 
>>> >>> [0] http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013521.html >>> [1] https://doodle.com/poll/2q5zmv3g6uy2475e >>> >>> -- >>> Att[]'s >>> Iury Gregory Melo Ferreira >>> MSc in Computer Science at UFCG >>> Software Engineer at Red Hat Czech >>> Social: https://www.linkedin.com/in/iurygregory >>> E-mail: iurygregory at gmail.com > > > > -- > Att[]'s > Iury Gregory Melo Ferreira > MSc in Computer Science at UFCG > Part of the puppet-manager-core team in OpenStack > Software Engineer at Red Hat Czech > Social: https://www.linkedin.com/in/iurygregory > E-mail: iurygregory at gmail.com From yu.chengde at 99cloud.net Thu May 7 14:04:47 2020 From: yu.chengde at 99cloud.net (YuChengDe) Date: Thu, 7 May 2020 22:04:47 +0800 (GMT+08:00) Subject: [openstack/requirements] Horizon image works abnormally because of the conflict between mysqlclient and Django Message-ID: Hello: I have built a Horizon image for the Ussuri version, but it works abnormally due to a package mismatch problem. Django===2.2.12 from the latest upper-constraints.txt is not compatible with mysqlclient. Using the older Django===2.1.5 fixes the problem. Could you help me check this mismatch problem? Many thanks. -- ————————————————————————————— 99CLOUD Inc. Yu Chengde, Product Development Dept. Email: yu.chengde at 99cloud.net Mobile: 13816965096 Addr: Room 206, Bldg 1, No.427 JuMen Road, ShangHai, China Site: http://www.99cloud.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu May 7 14:12:19 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 7 May 2020 15:12:19 +0100 Subject: [keystone] scope enforcement ready for prime time?
In-Reply-To: <4005BE1E-DA1A-4F64-9FAA-8992242C1A50@bu.edu> References: <4005BE1E-DA1A-4F64-9FAA-8992242C1A50@bu.edu> Message-ID: On Thu, 7 May 2020 at 00:19, Nikolla, Kristi wrote: > > Hi Mark, > > If the API in that OpenStack service doesn't have support for system/domain scopes and only checks for the admin role, the user would have cloud admin on that specific OpenStack API. > > Until all the services that you have deployed in your cloud properly support scopes, admin role on anything still gives cloud admin. > > Best, > Kristi Thanks for the response Kristi. I think we can solve this particular use case with a custom policy and role in the meantime. Having spent a little time playing with scopes, I found it quite awkward getting the right environment variables into OSC to get the scope I wanted, e.g. you need OS_DOMAIN_* and no OS_PROJECT_* to get a domain-scoped token. I suppose clouds.yaml could help. > > > On May 6, 2020, at 5:17 PM, Mark Goddard wrote: > > > > Hi, > > > > I have a use case which I think could be fulfilled by scoped tokens: > > > > Allow an operator to delegate to an actor the ability to create users within a domain, without giving them the keys to the cloud. > > > > To do this, I understand I can assign a user the admin role for a domain. It seems that for this to work, I need to set [oslo_policy] enforce_scope = True in keystone.conf. > > > > The Train cycle highlights suggest this is now fully implemented in keystone, but most other projects lack support for scopes. Does this mean that in the above case, the user would have full cloud admin privileges in other services that lack support for scopes? i.e. while I expect it's safe to enable scope enforcement in keystone, is it "safe" to use it?
> > > > Cheers, > > Mark > From m2elsakha at gmail.com Thu May 7 14:32:42 2020 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Thu, 7 May 2020 10:32:42 -0400 Subject: Uniting TC and UC Message-ID: Hi All, As you may know already, there has been an ongoing discussion to merge UC and TC under a single body. Three options were presented, along with their impact on the bylaws. http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012806.html We had several discussions in the UC on the three options as well as the UC items that need to be well-represented under the common committee, and here’s what we propose for the merger: 1- There’s consensus on utilizing option 1, thus requiring no bylaws change and merging AUC and ATC : 1- No bylaws change As bylaws changes take a lot of time and energy, the simplest approach would be to merge the TC and UC without changing the bylaws at all. The single body (called TC) would incorporate the AUC criteria by adding all AUC members as extra-ATC. It would tackle all aspects of our community. To respect the letter of the bylaws, the TC would formally designate 5 of its members to be the 'UC' and those would select a 'UC chair'. But all tasks would be handled together. 2- Two main core UC responsibilities need to continue to be represented under TC - Representing the user community to the broader community (Board of directors, TC...) - Working with user groups worldwide to keep the community vibrant and informed. 3- The current active workgroups under UC’s governance will be moved to be under TC’s governance. 4- AUC and ATC to be merged into a single list, potentially called AC (Active Contributor) 5- We hope to have the merger finalized before the upcoming UC elections in September 2020. In addition to discussions over the mailing list, we also have the opportunity of "face-to-face" discussions at the upcoming PTG. 
Thanks Mohamed --melsakhawy -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Thu May 7 15:45:15 2020 From: mthode at mthode.org (Matthew Thode) Date: Thu, 7 May 2020 10:45:15 -0500 Subject: [requirements][horizon] (ussuri) image works abnormally because of the conflict between mysqlclient and Django In-Reply-To: References: Message-ID: <20200507154515.za6qpjff7ejcraqr@mthode.org> On 20-05-07 22:04:47, YuChengDe wrote: > Hello: > I have built a Horizon image for the Ussuri version, but it works abnormally due to a package mismatch problem. > Django===2.2.12 from the latest upper-constraints.txt is not compatible with mysqlclient. > Using the older Django===2.1.5 fixes the problem. > Could you help me check this mismatch problem? > Many thanks. > They co-install just fine (could not merge otherwise). Do you have a log showing the problem? -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Thu May 7 15:47:13 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 7 May 2020 15:47:13 +0000 Subject: [User-committee] Uniting TC and UC In-Reply-To: References: Message-ID: <20200507154712.4gprgrk23vkuntkz@yuggoth.org> On 2020-05-07 10:32:42 -0400 (-0400), Mohamed Elsakhawy wrote: [...] > 4- AUC and ATC to be merged into a single list, potentially called AC > (Active Contributor) [...] The term Active Technical Contributors (ATC) is referenced officially in OSF Bylaws Appendix 4, so renaming it may not be worth tackling in the immediate future: https://www.openstack.org/legal/technical-committee-member-policy/ > In addition to discussions over the mailing list, we also have the > opportunity of "face-to-face" discussions at the upcoming PTG. Might I recommend an item #6?
Move discussions to the openstack-discuss mailing list and retire the separate user-committee ML. The TC already got rid of its dedicated mailing list a couple years back when we merged other extant lists into the then-new general discussion list. I'm happy to assist with doing the same for user-committee since I handled all the others previously. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thierry at openstack.org Thu May 7 16:00:16 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 7 May 2020 18:00:16 +0200 Subject: Uniting TC and UC In-Reply-To: References: Message-ID: <6f8571bf-0d23-8a3b-1225-8c2bdd95285d@openstack.org> Mohamed Elsakhawy wrote: > As you may know already, there has been an ongoing discussion to merge > UC and TC under a single body. Three options were presented, along with > their impact on the bylaws. > > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012806.html > > We had several discussions in the UC on the three options as well as the > UC items that need to be well-represented under the common committee, > and here’s what we propose for the merger: > [...] Thanks Mohamed for the detailed plan. With Jeremy's caveats, it sounds good to me. I'm happy to assist by filing the necessary changes in the various governance repositories. > [...] > In addition to discussions over the mailing list, we also have the > opportunity of "face-to-face" discussions at the upcoming PTG. I added this topic to the etherpad at: https://etherpad.opendev.org/p/tc-victoria-ptg -- Thierry Carrez (ttx) From ashlee at openstack.org Thu May 7 16:38:44 2020 From: ashlee at openstack.org (Ashlee Ferguson) Date: Thu, 7 May 2020 11:38:44 -0500 Subject: OpenDev Events Update - we need your feedback! 
In-Reply-To: <3F23D422-4DD9-4C1B-892E-BA07D51EF61D@openstack.org> References: <3F23D422-4DD9-4C1B-892E-BA07D51EF61D@openstack.org> Message-ID: Just a reminder to fill out your time availability on these etherpads *by end of day today* if you’re interested in OpenDev: Hardware Automation or OpenDev: Containers in Production: https://etherpad.opendev.org/p/OpenDev_HardwareAutomation https://etherpad.opendev.org/p/OpenDev_ContainersInProduction Thanks! Ashlee Ashlee Ferguson Community & Events Coordinator OpenStack Foundation > On May 5, 2020, at 2:17 PM, Ashlee Ferguson wrote: > > The deadline for providing input on those etherpads is *May 7.* Months are difficult these days! > > Ashlee > >> On May 5, 2020, at 12:30 PM, Ashlee Ferguson wrote: >> >> Hi everyone, >> >> Thanks so much for your feedback that’s continually helping us shape these virtual OpenDev events[1]. Keep reading this email for some important updates on all three events! >> >> 1. OpenDev: Large-scale Usage of Open Infrastructure Software >> - June 29 - July 1 (1300 - 1600 UTC each day) >> - Registration is open[2]! >> - High-level schedule coming soon; Find a brief topic summary and more information here[1] >> >> 2. OpenDev: Hardware Automation >> - July 20 - 22 >> - Help us pick a time - visit this etherpad[3] and +1 the times you’re available and the topics you would like to cover - please provide your input by this Thursday, March 7. >> >> 3. OpenDev: Containers in Production >> - August 10 - 12 >> - Help us pick a time - visit this etherpad[4] and +1 the times you’re available and the topics you would like to cover - please provide your input by this Thursday, March 7. >> >> Find more information and updates on all of these events here[1]. >> >> [1] https://www.openstack.org/events/opendev-2020 >> [2] https://opendev_largescale.eventbrite.com >> [3] https://etherpad.opendev.org/p/OpenDev_HardwareAutomation >> [4] https://etherpad.opendev.org/p/OpenDev_ContainersInProduction >> >> >> Thanks! 
>> Ashlee >> >> Ashlee Ferguson >> Community & Events Coordinator >> OpenStack Foundation >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu May 7 19:02:22 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 7 May 2020 19:02:22 +0000 Subject: [infra][qa][sigs][tc] Forming a Testing and Collaboration Tools (TaCT) SIG In-Reply-To: <20200420205853.aakada6emaxi72xw@yuggoth.org> References: <20200420205853.aakada6emaxi72xw@yuggoth.org> Message-ID: <20200507190221.ins37k7yu4cilzfg@yuggoth.org> Creation of our new TaCT SIG has been approved (thanks Thierry for the governance changes!), and I've proposed a change to add a page documenting what it's for, why it exists, and how to get involved: https://review.opendev.org/726223 I also volunteered to serve as a SIG chair, but welcome as many co-chairs as also want to serve. The responsibilities are minimal, even more so when there are multiple chairs: https://governance.openstack.org/sigs/reference/sig-guideline.html#select-sig-chairs -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Thu May 7 20:16:57 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Thu, 7 May 2020 20:16:57 +0000 Subject: [User-committee] Uniting TC and UC In-Reply-To: <6f8571bf-0d23-8a3b-1225-8c2bdd95285d@openstack.org> References: <6f8571bf-0d23-8a3b-1225-8c2bdd95285d@openstack.org> Message-ID: <8deb5f4fc05e4a0994845e0a0a6580ac@AUSX13MPS308.AMER.DELL.COM> Is that a recommendation we want to share with other projects under the OpenStack Foundation?
-----Original Message----- From: Thierry Carrez Sent: Thursday, May 7, 2020 11:00 AM To: Mohamed Elsakhawy; OpenStack Discuss; user-committee Subject: Re: [User-committee] Uniting TC and UC [EXTERNAL EMAIL] Mohamed Elsakhawy wrote: > As you may know already, there has been an ongoing discussion to merge > UC and TC under a single body. Three options were presented, along > with their impact on the bylaws. > > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/0 > 12806.html > > We had several discussions in the UC on the three options as well as > the UC items that need to be well-represented under the common > committee, and here’s what we propose for the merger: > [...] Thanks Mohamed for the detailed plan. With Jeremy's caveats, it sounds good to me. I'm happy to assist by filing the necessary changes in the various governance repositories. > [...] > In addition to discussions over the mailing list, we also have the > opportunity of "face-to-face" discussions at the upcoming PTG. I added this topic to the etherpad at: https://etherpad.opendev.org/p/tc-victoria-ptg -- Thierry Carrez (ttx) _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee From feilong at catalyst.net.nz Thu May 7 20:21:27 2020 From: feilong at catalyst.net.nz (feilong) Date: Fri, 8 May 2020 08:21:27 +1200 Subject: OpenStack Ussuri Community Meetings In-Reply-To: <017C2735-2FB7-4AD3-B5AF-30730EDBFC43@openstack.org> References: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org> <422BE6AE-ACCC-427D-8C97-5142F5FACA70@cern.ch> <017C2735-2FB7-4AD3-B5AF-30730EDBFC43@openstack.org> Message-ID: Hi Allison, During the meeting, should we share a PPT/demo or just an oral introduction about the highlights? If a PPT is preferred, is there a template we should use? Thanks. 
On 6/05/20 4:03 AM, Allison Price wrote: > Hi Tim, > > Yes, both the slides and recordings will be shared on the mailing list > after the meetings.  > > Thanks, > Allison > > >> On May 5, 2020, at 2:02 AM, Tim Bell > > wrote: >> >> Thanks for organising this. >> >> Will recordings / slides be made available ? >> >> Tim >> >>> On 4 May 2020, at 20:07, Allison Price >> > wrote: >>> >>> Hi everyone,  >>> >>> The OpenStack Ussuri release is only a week and a half away! Members >>> of the TC and project teams are holding two community meetings: one >>> on Wednesday, May 13 and Thursday, May 14. Here, they will share >>> some of the release highlights and project features. >>> >>> Join us:  >>> >>> * *Wednesday, May 13 at 1400 UTC* >>> o Moderator >>> + Mohammed Naser, TC  >>> o Presenters / Open for Questions >>> + Slawek Kaplonski, Neutron >>> + Michael Johnson, Octavia >>> + Goutham Pacha Ravi, Manila >>> + Mark Goddard, Kolla >>> + Balazs Gibizer, Nova >>> + Brian Rosmaita, Cinder >>> >>> >>> * *Thursday, May 14 at 0200 UTC* >>> o Moderator: Rico Lin, TC >>> o Presenters / Open for Questions >>> + Michael Johnson, Octavia >>> + Goutham Pacha Ravi, Manila >>> + Rico Lin, Heat  >>> + Feilong Wang, Magnum >>> + Brian Rosmaita, Cinder >>> >>> >>> >>> See you there!  >>> Allison >>> >>> >>> >>> >>> >>> Allison Price >>> OpenStack Foundation >>> allison at openstack.org >>> >>> >>> >>> >> > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Thu May 7 20:27:19 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 7 May 2020 15:27:19 -0500 Subject: [Syntribos] Does anyone still use this? 
In-Reply-To: References: Message-ID: It's been ~10 months now with no response. With there still being no real involvement with the syntribos project since then, and the last merged commit dating from Dec 2018, we will be moving forward with retiring the project from OpenStack. Step 1: https://review.opendev.org/#/c/726237/ On Thu, Jul 11, 2019 at 6:53 PM Gage Hugo wrote: > The Security SIG has recently been looking into updating several > sites/documentation and one part we discussed the last couple meetings was > the security tools listings[0]. One of the projects listed there, > Syntribos, doesn't appear to have much activity[1] outside of > zuul/python/docs maintenance and the idea of retiring the project was > mentioned. > > However, there does seem to be an occasional bug-fix submitted, so we are > sending this email out to see if anyone is still utilizing Syntribos. If > so, please either respond here, reach out to us in the #openstack-security > irc channel, or fill out the section in the Security SIG Agenda etherpad[2]. > > Thanks! > > [0] https://security.openstack.org/#security-tool-development > [1] https://review.opendev.org/#/q/project:openstack/syntribos > [2] https://etherpad.openstack.org/p/security-agenda > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Thu May 7 20:23:56 2020 From: allison at openstack.org (Allison Price) Date: Thu, 7 May 2020 15:23:56 -0500 Subject: OpenStack Ussuri Community Meetings In-Reply-To: References: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org> <422BE6AE-ACCC-427D-8C97-5142F5FACA70@cern.ch> <017C2735-2FB7-4AD3-B5AF-30730EDBFC43@openstack.org> Message-ID: <425EF78A-2155-4A4C-BEB2-93266DAA6FA7@openstack.org> Hi Feilong, If you can share a few slides with me ahead of the meeting (ideally by EOD Monday), I am going to compile them all into one presentation that we will screenshare during the meeting. This way, the meeting recording will show the slides.
Attached is a powerpoint template that you can use. I recommend keeping it to 2-3 slides if possible. Thanks! Allison > On May 7, 2020, at 3:21 PM, feilong wrote: > > Hi Allison, > > During the meeting, should we share a PPT/demo or just an oral introduction about the highlights? If a PPT is preferred, is there a template we should use? Thanks. > > > > On 6/05/20 4:03 AM, Allison Price wrote: >> Hi Tim, >> >> Yes, both the slides and recordings will be shared on the mailing list after the meetings. >> >> Thanks, >> Allison >> >> >>> On May 5, 2020, at 2:02 AM, Tim Bell > wrote: >>> >>> Thanks for organising this. >>> >>> Will recordings / slides be made available ? >>> >>> Tim >>> >>>> On 4 May 2020, at 20:07, Allison Price > wrote: >>>> >>>> Hi everyone, >>>> >>>> The OpenStack Ussuri release is only a week and a half away! Members of the TC and project teams are holding two community meetings: one on Wednesday, May 13 and Thursday, May 14. Here, they will share some of the release highlights and project features. >>>> >>>> Join us: >>>> >>>> Wednesday, May 13 at 1400 UTC >>>> Moderator >>>> Mohammed Naser, TC >>>> Presenters / Open for Questions >>>> Slawek Kaplonski, Neutron >>>> Michael Johnson, Octavia >>>> Goutham Pacha Ravi, Manila >>>> Mark Goddard, Kolla >>>> Balazs Gibizer, Nova >>>> Brian Rosmaita, Cinder >>>> >>>> Thursday, May 14 at 0200 UTC >>>> Moderator: Rico Lin, TC >>>> Presenters / Open for Questions >>>> Michael Johnson, Octavia >>>> Goutham Pacha Ravi, Manila >>>> Rico Lin, Heat >>>> Feilong Wang, Magnum >>>> Brian Rosmaita, Cinder >>>> >>>> >>>> See you there! 
>>>> Allison >>>> >>>> >>>> >>>> >>>> >>>> Allison Price >>>> OpenStack Foundation >>>> allison at openstack.org >>>> >>>> >>>> >>>> >>> >> > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: os-template-2017.potx Type: application/vnd.openxmlformats-officedocument.presentationml.template Size: 1840354 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Thu May 7 20:49:03 2020 From: amy at demarco.com (Amy) Date: Thu, 7 May 2020 15:49:03 -0500 Subject: [User-committee] Uniting TC and UC In-Reply-To: <8deb5f4fc05e4a0994845e0a0a6580ac@AUSX13MPS308.AMER.DELL.COM> References: <8deb5f4fc05e4a0994845e0a0a6580ac@AUSX13MPS308.AMER.DELL.COM> Message-ID: <90760AA2-76B4-414A-98B2-4AFC678D0EC1@demarco.com> OpenStack currently is the only OSF project which has a TC and UC structure. Thanks, Amy (spotz) > On May 7, 2020, at 3:17 PM, Arkady.Kanevsky at dell.com wrote: > > Is that a recommendation we want to share with other project under OpenStack foundation? > > -----Original Message----- > From: Thierry Carrez > Sent: Thursday, May 7, 2020 11:00 AM > To: Mohamed Elsakhawy; OpenStack Discuss; user-committee > Subject: Re: [User-committee] Uniting TC and UC > > > [EXTERNAL EMAIL] > > Mohamed Elsakhawy wrote: >> As you may know already, there has been an ongoing discussion to merge >> UC and TC under a single body. Three options were presented, along >> with their impact on the bylaws. 
>> >> http://lists.openstack.org/pipermail/openstack-discuss/2020-February/0 >> 12806.html >> >> We had several discussions in the UC on the three options as well as >> the UC items that need to be well-represented under the common >> committee, and here’s what we propose for the merger: >> [...] > > Thanks Mohamed for the detailed plan. With Jeremy's caveats, it sounds good to me. I'm happy to assist by filing the necessary changes in the various governance repositories. > >> [...] >> In addition to discussions over the mailing list, we also have the >> opportunity of "face-to-face" discussions at the upcoming PTG. > > I added this topic to the etherpad at: > https://etherpad.opendev.org/p/tc-victoria-ptg > > -- > Thierry Carrez (ttx) > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee From fungi at yuggoth.org Thu May 7 20:49:15 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 7 May 2020 20:49:15 +0000 Subject: [User-committee] Uniting TC and UC In-Reply-To: <8deb5f4fc05e4a0994845e0a0a6580ac@AUSX13MPS308.AMER.DELL.COM> References: <6f8571bf-0d23-8a3b-1225-8c2bdd95285d@openstack.org> <8deb5f4fc05e4a0994845e0a0a6580ac@AUSX13MPS308.AMER.DELL.COM> Message-ID: <20200507204915.aacgx6ch4mjvyv7r@yuggoth.org> On 2020-05-07 20:16:57 +0000 (+0000), Arkady.Kanevsky at dell.com wrote: > Is that a recommendation we want to share with other project under > OpenStack foundation? [...] As far as I'm aware, no other Open Infrastructure Project maintains a separate User Committee, and they already expect their existing governance bodies to take up user-related concerns as a matter of course. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu May 7 20:52:15 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 7 May 2020 20:52:15 +0000 Subject: [Syntribos] Does anyone still use this? In-Reply-To: References: Message-ID: <20200507205215.uxttezutzm3mtpwn@yuggoth.org> On 2020-05-07 15:27:19 -0500 (-0500), Gage Hugo wrote: > It's been ~10 months now with no response, with there still being > no real involvement with the syntribos project since then and the > last merged commit was Dec 2018, we will be moving forward with > retiring the project from OpenStack. [...] I agree, its usefulness seems to have come to an end and nobody has really been maintaining the software as far as I can tell. Thanks for closing it down gracefully. If someone comes along later and wants to reconstitute and continue the project, all they need is a few git revert commands. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gagehugo at gmail.com Thu May 7 20:59:17 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 7 May 2020 15:59:17 -0500 Subject: [OSSA-2020-003] Keystone: Keystone does not check signature TTL of the EC2 credential auth method (CVE PENDING) In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 ====================================================================================== OSSA-2020-003: Keystone does not check signature TTL of the EC2 credential auth method ====================================================================================== :Date: May 06, 2020 :CVE: CVE-2020-12692 Affects ~~~~~~~ - - Keystone: <15.0.1, ==16.0.0 Description ~~~~~~~~~~~ kay reported a vulnerability with keystone's EC2 API. 
Keystone doesn't have a signature TTL check for AWS signature V4 and an attacker can sniff the auth header, then use it to reissue an openstack token an unlimited number of times. Errata ~~~~~~ CVE-2020-12692 was assigned after the original publication date. Patches ~~~~~~~ - - https://review.opendev.org/725385 (Rocky) - - https://review.opendev.org/725069 (Stein) - - https://review.opendev.org/724954 (Train) - - https://review.opendev.org/724746 (Ussuri) - - https://review.opendev.org/724124 (Victoria) Credits ~~~~~~~ - - kay (CVE-2020-12692) References ~~~~~~~~~~ - - https://launchpad.net/bugs/1872737 - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12692 Notes ~~~~~ - - The stable/rocky branch is under extended maintenance and will receive no new point releases, but a patch for it is provided as a courtesy. OSSA History ~~~~~~~~~~~~ - - 2020-05-07 - Errata 1 - - 2020-05-06 - Original Version -----BEGIN PGP SIGNATURE----- iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl60dXoACgkQ56j9K3b+ vREOnxAAtrb94nekVD1bjsjmp2bJsJoN4alwIySMJzDAXp9aU2j23jS3pEixLuBN lkK6AA7BwKY5HgNtEeWrau+Ri+GOyYlhRMXZy+z+JC6+9qYxdFwcatL6yLYwkrOF pMREuwbENZMBgl3HgIotJU/RqilZXf+7OLCO9ZaciaYvXkM3e5TswxYme9S+9r57 OQ6veWVEfTTadTK+wp9tZ4RzPcgKAwiCEX2w1uYBCAMrh+GAWFBEiD4J7IEOvs2u TgnI/znFnQSb1f2CIYENGRevBFRvtILfovMI71rgwgNrof15Z6G6U3PW+yLPFaWg rqQd3wEmmUPNF/RQdOIngktTXEkQI1DsUkCg/75EZlDVBayUP1qyP1nlK/uAwRoX w0p6cPS/rREiOuCfCUKJ6tGg8e4/5o55cwbX/Bv/4KQxqCpD5W7XB1y81A0xnwsz btBZkio3KZZltCST+dNrmLIm3ZxdGQoC+wA+BweaAiMZf2HP8sSOxegDOGhWvBPm p23fH1kToH6vnGdGnp5SAIEcFg8Cu8LFVovZFHvfaN84XkRyX3Yqc+n88IauF0re pFf1iegTAArgminNCuTKKswLNgLr5J6SkKH/LTb3/hKgduRabRzKcBreP371fuvP K5/QCmXEyOT8HbQstWaEXmy9FvDh35lvmXtaKWBhB0LR8kWAY8s= =fTyp -----END PGP SIGNATURE----- On Wed, May 6, 2020 at 2:41 PM Gage Hugo wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > ====================================================================================== > OSSA-2020-003: Keystone does not check signature TTL 
of the EC2 credential > auth method > > ====================================================================================== > > :Date: May 06, 2020 > :CVE: Pending > > > Affects > ~~~~~~~ > - - Keystone: <15.0.1, ==16.0.0 > > > Description > ~~~~~~~~~~~ > kay reported a vulnerability with keystone's EC2 API. Keystone doesn't > have a signature TTL check for AWS signature V4 and an attacker can > sniff the auth header, then use it to reissue an openstack token an > unlimited number of times. > > > Patches > ~~~~~~~ > - - https://review.opendev.org/725385 (Rocky) > - - https://review.opendev.org/725069 (Stein) > - - https://review.opendev.org/724954 (Train) > - - https://review.opendev.org/724746 (Ussuri) > - - https://review.opendev.org/724124 (Victoria) > > > Credits > ~~~~~~~ > - - kay (CVE Pending) > > > References > ~~~~~~~~~~ > - - https://launchpad.net/bugs/1872737 > - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=Pending > > > Notes > ~~~~~ > - - The stable/rocky branch is under extended maintenance and will receive > no new > point releases, but a patch for it is provided as a courtesy. 
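The missing check described in this advisory is a freshness window on the signed timestamp. Below is a minimal sketch of such a TTL check, assuming a SigV4-style X-Amz-Date value and an illustrative 300-second window; the header name, window size, and function name are assumptions for illustration, not keystone's actual patch.

```python
from datetime import datetime, timedelta, timezone

# Illustrative TTL window; the value is an assumption, not
# necessarily what the keystone fix uses.
SIGNATURE_TTL = timedelta(seconds=300)

def signature_is_fresh(amz_date, now=None):
    """Reject AWS SigV4 requests whose signed timestamp is too old.

    `amz_date` is the X-Amz-Date value, e.g. '20200506T184500Z'.
    A sniffed auth header replayed after the TTL elapses fails here.
    """
    signed_at = datetime.strptime(amz_date, '%Y%m%dT%H%M%SZ').replace(
        tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return abs(now - signed_at) <= SIGNATURE_TTL
```

Without a check of this kind, a captured header stays valid forever, which is exactly the replay problem the advisory describes.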
> -----BEGIN PGP SIGNATURE----- > > iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl6zEjwACgkQ56j9K3b+ > vRFejhAAvzq3MBwKGXIKsJxQmwVS0RxVFifTAfnKIjBGskG3knWkQHopY0IcmwoZ > 3Kv2AnRgFVBuQpZ0t9Y3S3U7KRI63FT+kzA3gy9sB+h7rdqzquxejXvljRMGJlex > WRCOQwRP4prFpzpUqzBg9/bIAyWpkrjJIvz7iJ9U3z6MbrZIjV+YEZ3JIRQTdMUj > MajgwJ4EDynkh8trm63n7Gyuvq8ukj1FCrG1APWJi96HhwNz6XwiqXIWci4CTaEW > sY9v8luETMCyv+nY2pt9IF8wXOaJKJXPTilf6sisjN2zDq+UWgsxEC0sp3h09tnZ > m6cy3OvUQeDmdJVQ/VNsfUTeRYRvYri2u44FaOUBjsNxeZca1U4MCVkAiN9BBzkg > k1Xb8zgGoXaytT/lzzyr67h6ZghKm6cnSUktWnX56847byOMPi/g9q1cu0edUwwC > 7SDaQ08JbsEstiXtPVBhatTLxbjlNy5eql6NaZmFQatYJAQKZsasvwV4YBv290mu > OsVHUEqjmYk4b4CZNPQC2681CDtAQpiLuasYiLnxC6I+zBTwfP+6tzP0xVHW4woi > 4Jhl/watZMudrtMS3YoOmwZ4iFNJRzQcDWmiAr0CZiC0NGamLjvHWHRslnvmhy92 > kSGWLilaMD5vBODXVY82lQHrbl96dPRbpe8/z29sALsEs6aNFYk= > =qyBV > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Thu May 7 21:00:17 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 7 May 2020 16:00:17 -0500 Subject: [OSSA-2020-004] Keystone: Keystone credential endpoints allow owner modification and are not protected from a scoped context (CVE PENDING) In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 ================================================================================================================= OSSA-2020-004: Keystone credential endpoints allow owner modification and are not protected from a scoped context ================================================================================================================= :Date: May 06, 2020 :CVE: CVE-2020-12689, CVE-2020-12691 Affects ~~~~~~~ - - Keystone: <15.0.1, ==16.0.0 Description ~~~~~~~~~~~ kay reported two vulnerabilities in keystone's EC2 credentials API. 
Any authenticated user could create an EC2 credential for themselves for a project that they have a specified role on, then perform an update to the credential user and project, allowing them to masquerade as another user. (CVE-2020-12691) Any authenticated user within a limited scope (trust/oauth/application credential) can create an EC2 credential with an escalated permission, such as obtaining admin while the user is on a limited viewer role. (CVE-2020-12689) Both of these vulnerabilities potentially allow a malicious user to act as admin on a project that another user has the admin role on, which can effectively grant the malicious user global admin privileges. Errata ~~~~~~ CVE-2020-12689 and CVE-2020-12691 were assigned after the original publication date. Patches ~~~~~~~ - - https://review.opendev.org/725895 (Rocky) - - https://review.opendev.org/725893 (Stein) - - https://review.opendev.org/725891 (Train) - - https://review.opendev.org/725888 (Ussuri) - - https://review.opendev.org/725886 (Victoria) Credits ~~~~~~~ - - kay (CVE-2020-12689, CVE-2020-12691) References ~~~~~~~~~~ - - https://launchpad.net/bugs/1872733 - - https://launchpad.net/bugs/1872735 - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12689 - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12691 Notes ~~~~~ - - The stable/rocky branch is under extended maintenance and will receive no new point releases, but a patch for it is provided as a courtesy. 
OSSA History ~~~~~~~~~~~~ - - 2020-05-07 - Errata 1 - - 2020-05-06 - Original Version -----BEGIN PGP SIGNATURE----- iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl60dYUACgkQ56j9K3b+ vRESOw//YJGlVKCPz7HkUtmyu6RWnpGzSPMoWhzP0HyLLpStMlrFXUKNZsgfXAw3 90vFD6zWSSWn2abJxlyW4JFDtOALKdGEZ0Ml68WSREDdupyOyd+G/ucT01Y95wB2 6nHkoHVvKbhPAI1OeV2haNGp02UUROSLGBT/FtvFnnCAcfAiUfI7+kBbLQgeG50q /MNQlfaWi0uBxCt/HZg0YqZ3QXIE/LuS2MgFkaQ2+Yr4r9V1M58Wi2pYA1Dkhz6e J7q/2hDJ1Nn7P4LHUuZEXupR3Ztjrnh5uIO8yr2jSK/r4DawCmRMqT24r7ebS5ZA /p+JhvV0+StujicmhfPSyY3A24kNHRQCSCOlFn0xF8aN+/VEFT82SOIf+NVuutZb 04wzrp4D3KIrSoulIbXVebAX+lj21qvlaYGwPAkmT8/p7kmj8mGWMlWhqBrCBJIC OiGd9pUe2GQcRSvBPj2Bex4WZCedvehSkPAiWh1MXFmUAUb2T7iNXNP7BlMd7LZA gdM4gW6HeFUEysj0vQfSCF+Mu+cB1PAjKZgqgHX7twgu+sOzlCKDlFkQuuzbma3M abGlfPwVl1v7X/xZ0U7xAwViFCAI+gpqA+Yi1hmMirxzyotUWn/J17AtvhOk3Hms mwUZiGr41oJhGhX3uSB2Jn0TulA+qhapncuMxG5qDk9Y/ijcpmQ= =ddr5 -----END PGP SIGNATURE----- On Wed, May 6, 2020 at 2:48 PM Gage Hugo wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > ================================================================================================================= > OSSA-2020-004: Keystone credential endpoints allow owner modification and > are not protected from a scoped context > > ================================================================================================================= > > :Date: May 06, 2020 > :CVE: Pending > > > Affects > ~~~~~~~ > - - Keystone: <15.0.1, ==16.0.0 > > > Description > ~~~~~~~~~~~ > kay reported two vulnerabilities in keystone's EC2 credentials API. > Any authenticated user could create an EC2 credential for themselves > for a project that they have a specified role on, then perform an > update to the credential user and project, allowing them to masquerade > as another user. 
(CVE #1 PENDING) Any authenticated user within a > limited scope (trust/oauth/application credential) can create an EC2 > credential with an escalated permission, such as obtaining admin while > the user is on a limited viewer role. (CVE #2 PENDING) Both of these > vulnerabilities potentially allow a malicious user to act as admin on > a project that another user has the admin role on, which can > effectively grant the malicious user global admin privileges. > > > Patches > ~~~~~~~ > - - https://review.opendev.org/725895 (Rocky) > - - https://review.opendev.org/725893 (Stein) > - - https://review.opendev.org/725891 (Train) > - - https://review.opendev.org/725888 (Ussuri) > - - https://review.opendev.org/725886 (Victoria) > > > Credits > ~~~~~~~ > - - kay (CVE Pending) > > > References > ~~~~~~~~~~ > - - https://launchpad.net/bugs/1872733 > - - https://launchpad.net/bugs/1872735 > - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=Pending > > > Notes > ~~~~~ > - - The stable/rocky branch is under extended maintenance and will receive > no new > point releases, but a patch for it is provided as a courtesy. 
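The first vulnerability comes down to an update path that trusted client-supplied owner fields. A hedged sketch of the kind of guard the fix implies — refusing any update that changes a credential's user_id or project_id. The dict-based records and names here are illustrative, not keystone's actual code.

```python
# Fields a credential update must never be allowed to change.
IMMUTABLE_FIELDS = ('user_id', 'project_id')

def validate_credential_update(stored, update):
    """Refuse updates that would reassign an EC2 credential's owner.

    `stored` is the existing credential record, `update` the
    client-supplied changes; both are plain dicts in this sketch.
    """
    for field in IMMUTABLE_FIELDS:
        if field in update and update[field] != stored[field]:
            raise ValueError(
                '%s of an EC2 credential cannot be changed' % field)
    return {**stored, **update}
```

With the owner fields pinned, a user can no longer rewrite a credential to masquerade as another user or escalate into another project.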
> -----BEGIN PGP SIGNATURE----- > > iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl6zE70ACgkQ56j9K3b+ > vREQsBAAnHZLyrbjSwu7/CEdDVfb0sQZfDvyuXMttzouXQ6ZwEgLFKzc/aFWMjru > loyst9jAx2pJzvxDfMYO11oU0M5tYFCFxhKsVvu+3ggbcNHeov1s25bPkxE7A2j7 > IYJj9b+bbieYVj1ru3FJjDl3iTae4K73DeHNBCdxTSeahJZdya7hiboA1VJFt4p7 > fNqU3+szsYt/vwspPBi7x+xnZszIMaUw8tVgxzB4KVD6YXbDR9Mp7itH77kGdn8l > e3OpnURvfaIkPbK6fqE6jjwjQEL/6+Ahffaf4KqvsdjbAcdQRpK0UQrBX+n6DIWd > TRwV/W7bEy64HrC16W78fcBlegRmEUUM4xNmdll3lwUS5KqfEeM3vXU4Ksfe9tQ2 > 8fDU1hDALcC55+2CMMrdFfmX/MBSTz0HVmP4snaGuoXBL/iQz22OmekFKC1tmXxb > +vAtOUBsdzphRZn9KWvPIHOFGeuepWb9W0eN594JT2pdHfniLj6EaPrBaN63l7M/ > pu0DTPygN5IdUXv6v/vquQZp50CaN59okmXDNiFkBeHsfaAqhdyjJjRaYvyU62OA > apjVam8/f2HM0RC0vvpIqv0z0kU55NPCo61dlMZPg6U9JiQd2PzBqvEtDF1lyByF > vz5e+r9fmtRcgCJIYr0Z7VlOlSMONpITN03oICaexieDTEXDXHc= > =lSDG > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Thu May 7 21:00:50 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 7 May 2020 16:00:50 -0500 Subject: [OSSA-2020-005] Keystone: OAuth1 request token authorize silently ignores roles parameter (CVE PENDING) In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 ============================================================================== OSSA-2020-005: OAuth1 request token authorize silently ignores roles parameter ============================================================================== :Date: May 06, 2020 :CVE: CVE-2020-12690 Affects ~~~~~~~ - - Keystone: <15.0.1, ==16.0.0 Description ~~~~~~~~~~~ kay reported a vulnerability in Keystone's OAuth1 Token API. The list of roles provided for an OAuth1 access token are ignored, so when an OAuth1 access token is used to request a keystone token, the keystone token will contain every role assignment the creator had for the project instead of the provided subset of roles. 
This results in the provided keystone token having more role assignments than the creator intended, possibly giving unintended escalated access. Errata ~~~~~~ CVE-2020-12690 was assigned after the original publication date. Patches ~~~~~~~ - - https://review.opendev.org/725894 (Rocky) - - https://review.opendev.org/725892 (Stein) - - https://review.opendev.org/725890 (Train) - - https://review.opendev.org/725887 (Ussuri) - - https://review.opendev.org/725885 (Victoria) Credits ~~~~~~~ - - kay (CVE-2020-12690) References ~~~~~~~~~~ - - https://launchpad.net/bugs/1873290 - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12690 Notes ~~~~~ - - The stable/rocky branch is under extended maintenance and will receive no new point releases, but a patch for it is provided as a courtesy. OSSA History ~~~~~~~~~~~~ - - 2020-05-07 - Errata 1 - - 2020-05-06 - Original Version -----BEGIN PGP SIGNATURE----- iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl60dYoACgkQ56j9K3b+ vRG6Tg//ZV/05IJTRghymKImfgWiT4G49Z2gZ5TgxbMqLmJ1+w5YthbaDNSrlmyO zmXBG5xLDuXhG6aD9IeKBjmVMgJhr2oef0bqV73vuwmTaUPW60A7cpx5en7frEbT UBgaG49+9BxtJsTJyI2oDpzAj9Z42u/gZPzfM3wbaCjbvAHJP7t2aqQL51iwCbhM IJSJUYprfrPf/YbeG6k1uWuNIT7iZs1TgqyLQfoYzbNX1sIP3rJie3XC7ZOOt+De FJ+AxLy9cRihG1p3kVS6SUQmSyIyluUyP6FhxBOyL36ZXCwEZABVjHXbK2QK4F2A Tgfz8R8moJ/J4ReWw2z226czaCWKg3ApjGdjEqBhakBrGP/aTualMlDFRSHxkI/9 oAUucNKGS64XgUmGPwQhVm4oCNrs+9YpGdH63S14N9os64BHB/D4hGMzHwrE4Fxk ejuIzrYAHqsnKIgNDhAl2gZJgT6j924MJfR/ImkdLp31S5qh49NrCbA5cmgLY9Ke XzNrnLhKcqSN+z1YwVidUWF8B7HEliPQBHgVwf4bpWl+jKgjr5wfWKYW5f9civtu 1tWjbgdjYqce/gataAjIOw41IIFrSGWyZfHc2wQnkBwR3xhz2NPbxPCniHZg5kAT h/pAiVk6InwpTnTfor8OoHFPiD7MTg34EJmEkGqmCPPOIpm/BSk= =3dVo -----END PGP SIGNATURE----- On Wed, May 6, 2020 at 2:53 PM Gage Hugo wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > ============================================================================== > OSSA-2020-005: OAuth1 request token authorize silently ignores roles > parameter > > 
============================================================================== > > :Date: May 06, 2020 > :CVE: Pending > > > Affects > ~~~~~~~ > - - Keystone: <15.0.1, ==16.0.0 > > > Description > ~~~~~~~~~~~ > kay reported a vulnerability in Keystone's OAuth1 Token API. The list > of roles provided for an OAuth1 access token are ignored, so when an > OAuth1 access token is used to request a keystone token, the keystone > token will contain every role assignment the creator had for the > project instead of the provided subset of roles. This results in the > provided keystone token having more role assignments than the creator > intended, possibly giving unintended escalated access. > > > Patches > ~~~~~~~ > - - https://review.opendev.org/725894 (Rocky) > - - https://review.opendev.org/725892 (Stein) > - - https://review.opendev.org/725890 (Train) > - - https://review.opendev.org/725887 (Ussuri) > - - https://review.opendev.org/725885 (Victoria) > > > Credits > ~~~~~~~ > - - kay (CVE Pending) > > > References > ~~~~~~~~~~ > - - https://launchpad.net/bugs/1873290 > - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=Pending > > > Notes > ~~~~~ > - - The stable/rocky branch is under extended maintenance and will receive > no new > point releases, but a patch for it is provided as a courtesy. 
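The root cause here is that the role list supplied at authorization time was dropped instead of enforced. A minimal sketch of the intended behavior — the delegated roles must be a subset of the creator's assignments, and only that subset should end up in the resulting token. Identifiers are illustrative; this is not keystone's actual implementation.

```python
def delegated_roles(creator_roles, requested_roles):
    """Return the roles an OAuth1 access token should carry.

    The vulnerable behavior was to ignore `requested_roles` and hand
    back every role in `creator_roles`; the intended behavior is to
    honor the requested subset and reject anything the creator does
    not actually hold.
    """
    creator = set(creator_roles)
    requested = set(requested_roles)
    if not requested <= creator:
        raise ValueError("requested roles exceed the creator's assignments")
    return sorted(requested)
```

Enforcing the subset is what keeps a delegated token from silently carrying, say, the admin role when only a viewer role was offered.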
> -----BEGIN PGP SIGNATURE----- > > iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl6zFWsACgkQ56j9K3b+ > vRFDnhAArgXdQUnCyckPQciBvxMxQvqhCEhzGH0aQNAmMLaImYUwFhFVVO0DlcNb > kt/ynLQLdyi3YnCz1x4VhUXaCh4Rhi9pYkU4LKa/tvJj6anrCSLHmuDD52idkZeB > sFslgkh/BGfdM4HcuPLhs4SSaZpI53ASitiOhyjBIN/DmpLUbZgmJ1iz3FfQ3cTB > wtjYI4jGCCMq+4POSozWMzeYdL3JzR264jBCRrCw1ErIPjpF4KSOFaH5vqakBnzw > Ot7KR7s7FmIwU7LhCuvjgLW3rxwE1g5bz+Qd/97rC1bTx/iPHklQjMP5SoGwmjta > Kx1prUaQqFys5Bw93e0cj1Fwn0zNHUjqLs4LZscNbyGRyAZCPREeg2quwBxVUNk9 > D6jxW3J2LYIu+ictVV5fnBQd4/+NtxM8ofLDM03QZouUpkNfCHAmW81BYqd2+Pii > VbJi5Litz+DHLrAyh0O4zD/PBc5+5zxB2EXEDVEJitqaxQWfogJwJzGe89ULom0I > VXMuYOvqaLV9f2JIG6SEBiKrfaUhSgoHTrmznt82KOlsOBMamQUaj5iTqDoDzPD2 > LVB2WLABj1cFZsnTFAec1qKwEPXuT0p3Dsb7eyvwsq5aJYS5I2bjK6Q1WcCcqzJF > 1b+v0iqW0Qu+Hk4fwvcrqqQMDZ7Q982tT+B7sU8xV4jYBtFLseQ= > =iEFE > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu May 7 21:09:00 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 07 May 2020 16:09:00 -0500 Subject: Uniting TC and UC In-Reply-To: References: Message-ID: <171f0f85af1.1298868ff97980.1675370170083386145@ghanshyammann.com> ---- On Thu, 07 May 2020 09:32:42 -0500 Mohamed Elsakhawy wrote ---- > Hi All, > > As you may know already, there has been an ongoing discussion to merge UC and TC under a single body. Three options were presented, along with their impact on the bylaws. 
> > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012806.html > > We had several discussions in the UC on the three options as well as the UC items that need to be well-represented under the common committee, and here’s what we propose for the merger: > > 1- There’s consensus on utilizing option 1, thus requiring no bylaws change and merging AUC and ATC : > > 1- No bylaws change > As bylaws changes take a lot of time and energy, the simplest approach would be to merge the TC and UC without changing the bylaws at all. The single body (called TC) would incorporate the AUC criteria by adding all AUC members as extra-ATC. It would tackle all aspects of our community. To respect the letter of the bylaws, the TC would formally designate 5 of its members to be the 'UC' and those would select a 'UC chair'. But all tasks would be handled together. > > 2- Two main core UC responsibilities need to continue to be represented under TC > - Representing the user community to the broader community (Board of directors, TC...) +1. > - Working with user groups worldwide to keep the community vibrant and informed. This was one of my main concerns and worry since starting. This is really important which is more of managerial activities and event/group coordination along with OpenStack Ambassadors and IMO does not really fit under TC activities, we may need some other place/team to take care of these tasks. -gmann > > 3- The current active workgroups under UC’s governance will be moved to be under TC’s governance. > > 4- AUC and ATC to be merged into a single list, potentially called AC (Active Contributor) > > 5- We hope to have the merger finalized before the upcoming UC elections in September 2020. > > > In addition to discussions over the mailing list, we also have the opportunity of "face-to-face" discussions at the upcoming PTG. 
> > > Thanks > > > Mohamed > --melsakhawy > > > From rodrigodsousa at gmail.com Thu May 7 22:10:46 2020 From: rodrigodsousa at gmail.com (Rodrigo Duarte) Date: Thu, 7 May 2020 15:10:46 -0700 Subject: [keystone] Core group changes In-Reply-To: References: <8fc9f441-0b29-4b60-afbc-234dc520933e@www.fastmail.com> <62BBC867-58BE-4C7A-A555-CE2CE6571780@bu.edu> Message-ID: Well deserved, Vishakha! On Sat, Apr 25, 2020 at 7:39 PM Xiyuan Wang wrote: > Nice job, Vishakha > > Nikolla, Kristi 于2020年4月25日周六 上午12:20写道: > >> Congrats Vishakha! >> >> Thank you Dave! >> >> > On Apr 24, 2020, at 11:55 AM, Colleen Murphy >> wrote: >> > >> > Hi team, >> > >> > I'm very pleased to say that Vishakha Agarwal has joined the >> python-keystoneclient core team! Vishakha has gained a lot of knowledge of >> the clients in the last few months and this change will help empower her to >> continue to drive our goals of ensuring all of the clients fully support >> keystone's APIs. Thanks for taking this on, Vishakha! >> > >> > I also must announce that I've removed Dave Chen from the core list. >> Dave's responsibilities at his company have not been related to keystone >> for some time, and although we've been very grateful for the occasional >> review he gives from time to time, he's let me know he believes it's time >> to be formally removed. As with every core removal, if life brings him back >> to keystone we'll gladly welcome him back to the core team. Thanks, Dave, >> for all your hard work on keystone! >> > >> > Colleen >> > >> >> >> -- Rodrigo http://rodrigods.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu May 7 22:18:33 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 7 May 2020 18:18:33 -0400 Subject: rabbitmq api query Message-ID: Folks, I am just playing with API and getting strange result, may be i am doing something wrong please correct me. 
[root at aio1-rabbit-mq-container-5233d3c3 root]# rabbitmqctl list_vhosts Listing vhosts ... name /neutron /designate /keystone /aodh / /nova /glance /ceilometer and i have configured monitoring user to query data from API Following works and i am getting data #curl -s http://172.29.239.29:15672/api/vhosts --user monitoring:de4a28d10da980077cea | jq now try to query specific vhosts like nova getting error # curl -s http://172.29.239.29:15672/api/vhosts/nova --user monitoring:de4a28d10da980077cea | jq { "error": "not_authorised", "reason": "Not administrator user" } Even i set full permission, getting same error no_authorised, what i am missing? # rabbitmqctl set_permissions monitoring -p /nova "" "" "" From m2elsakha at gmail.com Thu May 7 22:26:18 2020 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Thu, 7 May 2020 18:26:18 -0400 Subject: Uniting TC and UC In-Reply-To: <171f0f85af1.1298868ff97980.1675370170083386145@ghanshyammann.com> References: <171f0f85af1.1298868ff97980.1675370170083386145@ghanshyammann.com> Message-ID: Thanks All, @Jeremy , I don't see a reason for the separate ML to continue to exist, having all discussions in openstack-discuss makes more sense. I'll bring it to the next UC meeting and keep you posted. @Ghanshyam : Part of the proposal is to have "5" designated members under TC to perform the UC duties including UG involvement. I guess that mandates some expansion of current TC activities, but that's a valid point, we may want to discuss further. On Thu, May 7, 2020 at 5:09 PM Ghanshyam Mann wrote: > ---- On Thu, 07 May 2020 09:32:42 -0500 Mohamed Elsakhawy < > m2elsakha at gmail.com> wrote ---- > > Hi All, > > > > As you may know already, there has been an ongoing discussion to merge > UC and TC under a single body. Three options were presented, along with > their impact on the bylaws. 
> > > > > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012806.html > > > > We had several discussions in the UC on the three options as well as > the UC items that need to be well-represented under the common committee, > and here’s what we propose for the merger: > > > > 1- There’s consensus on utilizing option 1, thus requiring no bylaws > change and merging AUC and ATC : > > > > 1- No bylaws change > > As bylaws changes take a lot of time and energy, the simplest approach > would be to merge the TC and UC without changing the bylaws at all. The > single body (called TC) would incorporate the AUC criteria by adding all > AUC members as extra-ATC. It would tackle all aspects of our community. To > respect the letter of the bylaws, the TC would formally designate 5 of its > members to be the 'UC' and those would select a 'UC chair'. But all tasks > would be handled together. > > > > 2- Two main core UC responsibilities need to continue to be represented > under TC > > - Representing the user community to the broader community (Board of > directors, TC...) > > +1. > > > - Working with user groups worldwide to keep the community vibrant and > informed. > > This was one of my main concerns and worry since starting. This is really > important which is more of managerial activities > and event/group coordination along with OpenStack Ambassadors and IMO does > not really fit under TC activities, we may > need some other place/team to take care of these tasks. > > -gmann > > > > > 3- The current active workgroups under UC’s governance will be moved to > be under TC’s governance. > > > > 4- AUC and ATC to be merged into a single list, potentially called AC > (Active Contributor) > > > > 5- We hope to have the merger finalized before the upcoming UC > elections in September 2020. > > > > > > In addition to discussions over the mailing list, we also have the > opportunity of "face-to-face" discussions at the upcoming PTG. 
> > > > > > Thanks > > > > > > Mohamed > > --melsakhawy > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri May 8 00:24:43 2020 From: zigo at debian.org (Thomas Goirand) Date: Fri, 8 May 2020 02:24:43 +0200 Subject: rabbitmq api query In-Reply-To: References: Message-ID: <3d61db92-9527-7f3b-1857-16eb1fdab49e@debian.org> On 5/8/20 12:18 AM, Satish Patel wrote: > Folks, > > I am just playing with API and getting strange result, may be i am > doing something wrong please correct me. > > [root at aio1-rabbit-mq-container-5233d3c3 root]# rabbitmqctl list_vhosts > Listing vhosts ... > name > /neutron > /designate > /keystone > /aodh > / > /nova > /glance > /ceilometer > > and i have configured monitoring user to query data from API > > Following works and i am getting data > #curl -s http://172.29.239.29:15672/api/vhosts --user > monitoring:de4a28d10da980077cea | jq > > now try to query specific vhosts like nova getting error > > # curl -s http://172.29.239.29:15672/api/vhosts/nova --user > monitoring:de4a28d10da980077cea | jq > { > "error": "not_authorised", > "reason": "Not administrator user" > } > > Even i set full permission, getting same error no_authorised, what i am missing? > > # rabbitmqctl set_permissions monitoring -p /nova "" "" "" > As per: https://www.rabbitmq.com/access-control.html The syntax is: rabbitmqctl set_permissions 'username' -p '/nova' '.*' '.*' '.*' Cheers, Thomas Goirand (zigo) From satish.txt at gmail.com Fri May 8 00:45:20 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 7 May 2020 20:45:20 -0400 Subject: rabbitmq api query In-Reply-To: <3d61db92-9527-7f3b-1857-16eb1fdab49e@debian.org> References: <3d61db92-9527-7f3b-1857-16eb1fdab49e@debian.org> Message-ID: I have already tried that and same error, most of people saying use "" "" "" will give you full access. 
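A note on the "" "" "" permissions: per the RabbitMQ access-control documentation, permission patterns are regular expressions matched against the entire resource name (anchored at both ends), so an empty pattern matches no resource name at all and effectively denies everything — it is '.*' that grants full access. A small sketch modeling that matching behavior:

```python
import re

def rabbitmq_permits(pattern, resource_name):
    """Model RabbitMQ permission matching: a pattern must match the
    entire resource name, so '' matches nothing and '.*' matches
    every resource in the vhost."""
    return re.fullmatch(pattern, resource_name) is not None
```

So `set_permissions monitoring -p /nova "" "" ""` locks the user out of every queue and exchange in that vhost rather than opening them all up.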
On Thu, May 7, 2020 at 8:28 PM Thomas Goirand wrote: > > On 5/8/20 12:18 AM, Satish Patel wrote: > > Folks, > > > > I am just playing with API and getting strange result, may be i am > > doing something wrong please correct me. > > > > [root at aio1-rabbit-mq-container-5233d3c3 root]# rabbitmqctl list_vhosts > > Listing vhosts ... > > name > > /neutron > > /designate > > /keystone > > /aodh > > / > > /nova > > /glance > > /ceilometer > > > > and i have configured monitoring user to query data from API > > > > Following works and i am getting data > > #curl -s http://172.29.239.29:15672/api/vhosts --user > > monitoring:de4a28d10da980077cea | jq > > > > now try to query specific vhosts like nova getting error > > > > # curl -s http://172.29.239.29:15672/api/vhosts/nova --user > > monitoring:de4a28d10da980077cea | jq > > { > > "error": "not_authorised", > > "reason": "Not administrator user" > > } > > > > Even i set full permission, getting same error no_authorised, what i am missing? > > > > # rabbitmqctl set_permissions monitoring -p /nova "" "" "" > > > > As per: > https://www.rabbitmq.com/access-control.html > > The syntax is: > rabbitmqctl set_permissions 'username' -p '/nova' '.*' '.*' '.*' > > Cheers, > > Thomas Goirand (zigo) > From mgagne at calavera.ca Fri May 8 00:52:23 2020 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Thu, 7 May 2020 20:52:23 -0400 Subject: rabbitmq api query In-Reply-To: References: Message-ID: Hi, On Thu, May 7, 2020 at 6:18 PM Satish Patel wrote: > > # curl -s http://172.29.239.29:15672/api/vhosts/nova --user > monitoring:de4a28d10da980077cea | jq > { > "error": "not_authorised", > "reason": "Not administrator user" > } > > Even i set full permission, getting same error no_authorised, what i am missing? 
> > # rabbitmqctl set_permissions monitoring -p /nova "" "" "" > Make sure your user as one of the following tag to be able to access the management interface: management, policymaker, monitoring or administrator tag https://www.rabbitmq.com/management.html#permissions -- Mathieu From satish.txt at gmail.com Fri May 8 01:39:08 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 7 May 2020 21:39:08 -0400 Subject: rabbitmq api query In-Reply-To: References: Message-ID: I did following # rabbitmqctl add_user test test # rabbitmqctl set_user_tags test administrator # rabbitmqctl set_permissions -p / test ".*" ".*" ".*" Now i am getting following error, look like something is missing here, my vhost name is /nova so do i need to specify "/" in api call? # curl -s http://172.29.239.29:15672/api/vhosts/nova --user test:test | jq { "error": "Object Not Found", "reason": "Not Found" } On Thu, May 7, 2020 at 8:52 PM Mathieu Gagné wrote: > > Hi, > > On Thu, May 7, 2020 at 6:18 PM Satish Patel wrote: > > > > # curl -s http://172.29.239.29:15672/api/vhosts/nova --user > > monitoring:de4a28d10da980077cea | jq > > { > > "error": "not_authorised", > > "reason": "Not administrator user" > > } > > > > Even i set full permission, getting same error no_authorised, what i am missing? 
> > > > # rabbitmqctl set_permissions monitoring -p /nova "" "" "" > > > > Make sure your user as one of the following tag to be able to access > the management interface: management, policymaker, monitoring or > administrator tag > > https://www.rabbitmq.com/management.html#permissions > > -- > Mathieu From mgagne at calavera.ca Fri May 8 02:19:12 2020 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Thu, 7 May 2020 22:19:12 -0400 Subject: rabbitmq api query In-Reply-To: References: Message-ID: I think you need to encode your / as %2F So the URL becomes: http://172.29.239.29:15672/api/vhosts/%2Fnova -- Mathieu On Thu, May 7, 2020 at 9:39 PM Satish Patel wrote: > > I did following > > # rabbitmqctl add_user test test > # rabbitmqctl set_user_tags test administrator > # rabbitmqctl set_permissions -p / test ".*" ".*" ".*" > > Now i am getting following error, look like something is missing > here, my vhost name is /nova so do i need to specify "/" in api > call? > > # curl -s http://172.29.239.29:15672/api/vhosts/nova --user test:test | jq > { > "error": "Object Not Found", > "reason": "Not Found" > } > > On Thu, May 7, 2020 at 8:52 PM Mathieu Gagné wrote: > > > > Hi, > > > > On Thu, May 7, 2020 at 6:18 PM Satish Patel wrote: > > > > > > # curl -s http://172.29.239.29:15672/api/vhosts/nova --user > > > monitoring:de4a28d10da980077cea | jq > > > { > > > "error": "not_authorised", > > > "reason": "Not administrator user" > > > } > > > > > > Even i set full permission, getting same error no_authorised, what i am missing? 
> > > > > > # rabbitmqctl set_permissions monitoring -p /nova "" "" "" > > > > > > > Make sure your user as one of the following tag to be able to access > > the management interface: management, policymaker, monitoring or > > administrator tag > > > > https://www.rabbitmq.com/management.html#permissions > > > > -- > > Mathieu From satish.txt at gmail.com Fri May 8 02:42:43 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 7 May 2020 22:42:43 -0400 Subject: rabbitmq api query In-Reply-To: References: Message-ID: Damn it, you are goddamn right!!! thank you.. On Thu, May 7, 2020 at 10:19 PM Mathieu Gagné wrote: > > I think you need to encode your / as %2F > > So the URL becomes: > http://172.29.239.29:15672/api/vhosts/%2Fnova > > -- > Mathieu > > On Thu, May 7, 2020 at 9:39 PM Satish Patel wrote: > > > > I did following > > > > # rabbitmqctl add_user test test > > # rabbitmqctl set_user_tags test administrator > > # rabbitmqctl set_permissions -p / test ".*" ".*" ".*" > > > > Now i am getting following error, look like something is missing > > here, my vhost name is /nova so do i need to specify "/" in api > > call? > > > > # curl -s http://172.29.239.29:15672/api/vhosts/nova --user test:test | jq > > { > > "error": "Object Not Found", > > "reason": "Not Found" > > } > > > > On Thu, May 7, 2020 at 8:52 PM Mathieu Gagné wrote: > > > > > > Hi, > > > > > > On Thu, May 7, 2020 at 6:18 PM Satish Patel wrote: > > > > > > > > # curl -s http://172.29.239.29:15672/api/vhosts/nova --user > > > > monitoring:de4a28d10da980077cea | jq > > > > { > > > > "error": "not_authorised", > > > > "reason": "Not administrator user" > > > > } > > > > > > > > Even i set full permission, getting same error no_authorised, what i am missing? 
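For anyone scripting the same query later, the key detail is percent-encoding the vhost name, including its leading "/", before putting it into the management API URL. A minimal Python sketch (the host, port, and vhost are just the placeholders from this thread, and `vhost_url` is a hypothetical helper, not part of any RabbitMQ client library):

```python
# Sketch: build a RabbitMQ management API URL for a vhost whose name
# starts with "/", e.g. "/nova". The name must be percent-encoded,
# otherwise the API answers 404 ("Object Not Found").
from urllib.parse import quote

def vhost_url(base, vhost):
    # safe="" forces "/" to be encoded as %2F instead of being kept as-is
    return "%s/api/vhosts/%s" % (base, quote(vhost, safe=""))

url = vhost_url("http://172.29.239.29:15672", "/nova")
print(url)  # http://172.29.239.29:15672/api/vhosts/%2Fnova
```

The resulting URL can then be fetched with curl or any HTTP client, as a user carrying at least the monitoring tag.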
> > > > > > > > # rabbitmqctl set_permissions monitoring -p /nova "" "" "" > > > > > > > > > > Make sure your user as one of the following tag to be able to access > > > the management interface: management, policymaker, monitoring or > > > administrator tag > > > > > > https://www.rabbitmq.com/management.html#permissions > > > > > > -- > > > Mathieu From yu.chengde at 99cloud.net Fri May 8 08:35:56 2020 From: yu.chengde at 99cloud.net (YuChengDe) Date: Fri, 8 May 2020 16:35:56 +0800 (GMT+08:00) Subject: =?UTF-8?B?UmU6W29wZW5zdGFjay9yZXF1aXJlbWVudHNdIEhvcml6b24gaW1hZ2Ugd29ya3MgYWJub3JtYWwgYmVjYXVzZSBvZiB0aGUgY29uZmxpY3QgYmV3dGVlbiBteXNxbGNsaWVudCBhbmQgRGphbmdvIA==?= In-Reply-To: Message-ID: Hi Matthew: log listed below is came from applying horizon image. Execute "apps.populate(settings.INSTALLED_APPS)" failed Please help to check. 2020-04-24T04:53:02.10284535Z stderr F + ln -s /etc/openstack-dashboard/local_settings /var/lib/openstack/lib/python3.6/site-packages/openstack_dashboard/local/local_settings.py 2020-04-24T04:53:02.103275286Z stderr F + exec /tmp/manage.py migrate --noinput 2020-04-24T04:53:05.382939734Z stderr F /var/lib/openstack/lib/python3.6/site-packages/scss/compiler.py:1430: DeprecationWarning: invalid escape sequence \: 2020-04-24T04:53:05.382950387Z stderr F result = tb * (i + nesting) + "@media -sass-debug-info{filename{font-family:file\:\/\/%s}line{font-family:\\00003%s}}" % (filename, lineno) + nl 2020-04-24T04:53:05.382954148Z stderr F /var/lib/openstack/lib/python3.6/site-packages/scss/cssdefs.py:516: DeprecationWarning: invalid escape sequence \s 2020-04-24T04:53:05.382956261Z stderr F ''', re.VERBOSE) 2020-04-24T04:53:05.382959967Z stderr F /var/lib/openstack/lib/python3.6/site-packages/scss/namespace.py:172: DeprecationWarning: inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec() 2020-04-24T04:53:05.382963516Z stderr F argspec = inspect.getargspec(function) 
2020-04-24T04:53:05.385459332Z stderr F Traceback (most recent call last): 2020-04-24T04:53:05.385473767Z stderr F File "/tmp/manage.py", line 19, in 2020-04-24T04:53:05.385477281Z stderr F execute_from_command_line(sys.argv) 2020-04-24T04:53:05.385479958Z stderr F File "/var/lib/openstack/lib/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line 2020-04-24T04:53:05.385482395Z stderr F utility.execute() 2020-04-24T04:53:05.38548575Z stderr F File "/var/lib/openstack/lib/python3.6/site-packages/django/core/management/__init__.py", line 357, in execute 2020-04-24T04:53:05.385487888Z stderr F django.setup() 2020-04-24T04:53:05.385489803Z stderr F File "/var/lib/openstack/lib/python3.6/site-packages/django/__init__.py", line 24, in setup 2020-04-24T04:53:05.385492082Z stderr F apps.populate(settings.INSTALLED_APPS) 2020-04-24T04: -- ————————————————————————————— 九州云信息科技有限公司 99CLOUD Inc. 于成德 产品开发部 邮箱(Email): yu.chengde at 99cloud.net 手机(Mobile): 13816965096 地址(Addr): 上海市局门路427号1号楼206 Room 206, Bldg 1, No.427 JuMen Road, ShangHai, China 网址(Site): http://www.99cloud.net 发件人:YuChengDe 发送日期:2020-05-07 22:04:47 收件人:openstack-discuss at lists.openstack.org 主题:[openstack/requirements] Horizon image works abnormal because of the conflict bewteen mysqlclient and Django Hello: I have built a horizon image in ussuri version, but it works abnormal due to package mismatch problem. According to Django===2.2.12 from latest upper-constraint.txt, it is not compatible with mysqlclient. Using old version Django===2.1.5 can fix the problem. Could you help me to check the mismatch problem. Many thanks. -- ————————————————————————————— 九州云信息科技有限公司 99CLOUD Inc. 于成德 产品开发部 邮箱(Email): yu.chengde at 99cloud.net 手机(Mobile): 13816965096 地址(Addr): 上海市局门路427号1号楼206 Room 206, Bldg 1, No.427 JuMen Road, ShangHai, China 网址(Site): http://www.99cloud.net -------------- next part -------------- An HTML attachment was scrubbed... 
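The version conflict described above is most likely Django's driver floor: Django 2.2 raised the minimum supported mysqlclient from 1.3.7 to 1.3.13 and refuses to start with anything older, which is why pinning back to Django 2.1.5 hides the problem; upgrading mysqlclient is the forward-looking fix. A small illustrative sketch of that kind of version-floor check (the helper below is hypothetical, not Django's actual code, and the 1.3.13 floor is from the Django 2.2 release notes):

```python
# Illustrative check mirroring a version floor like Django 2.2's
# requirement of mysqlclient >= 1.3.13 (Django 2.1 still accepted 1.3.7+).
def as_tuple(version):
    return tuple(int(part) for part in version.split("."))

MINIMUM = as_tuple("1.3.13")

def compatible(installed):
    return as_tuple(installed) >= MINIMUM

print(compatible("1.4.6"))   # True
print(compatible("1.3.12"))  # False -> Django 2.2 refuses to start
```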
URL: 

From tobias.urdin at binero.com  Fri May  8 08:56:03 2020
From: tobias.urdin at binero.com (Tobias Urdin)
Date: Fri, 8 May 2020 08:56:03 +0000
Subject: [puppet] Puppet 5 is officially unsupported in Victoria release
Message-ID: <1588928163362.88879@binero.com>

Hello,

Just a heads up that we've had a warning in the Ussuri release about it
being the last release where we are going to support Puppet 5.

This patch [1] was now submitted to remove this warning and officially
state that Puppet 5 is unsupported from the Victoria release.

[1] https://review.opendev.org/726310

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hjensas at redhat.com  Fri May  8 09:58:48 2020
From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=)
Date: Fri, 08 May 2020 11:58:48 +0200
Subject: [TripleO][Manila] 'DEFAULT/network_plugin_ipv6_enabled' parameter - why is it not simply enabled by default?
Message-ID: 

Hi,

I'm working on a change to remove some parameters related to IPv6 in
TripleO. This is to simplify/automate the process for operators, and
reduce failures due to user error.
See: https://review.opendev.org/723898

I'm not sure what to do about the 'ManilaIPv6' parameter in TripleO.
It controls the *DEFAULT/network_plugin_ipv6_enabled* parameter as
well as *${share_backend_name}/network_plugin_ipv6_enabled* for some
backends. Docstrings [2] and [3] in the puppet module for these
backends say the same as the Manila doc[1].

A note in the Manila doc[1] reads:
"""
The ip version of the share network is defined by the flags of
network_plugin_ipv4_enabled and network_plugin_ipv6_enabled in the
manila.conf configuration since Pike. The network_plugin_ipv4_enabled
default value is set to True. The network_plugin_ipv6_enabled
default value is set to False. If network_plugin_ipv6_enabled
option is True, the value of network_plugin_ipv4_enabled will be
ignored, it means to support both IPv4 and IPv6 share network.
""" Is there a downside in simply enabling IPv6 by default in Manila? Why is this option present at all? [1] https://docs.openstack.org/manila/latest/admin/shared-file-systems-network-plugins.html [2] https://opendev.org/openstack/puppet-manila/src/branch/master/manifests/backend/dellemc_vnx.pp#L48-L52 [3] https://opendev.org/openstack/puppet-manila/src/branch/master/manifests/backend/dellemc_unity.pp#L49-L53 -- Harald Jensås From tpb at dyncloud.net Fri May 8 11:10:26 2020 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 8 May 2020 07:10:26 -0400 Subject: [TripleO][Manila] 'DEFAULT/network_plugin_ipv6_enabled' parameter - why is it not simply enabled by default? In-Reply-To: References: Message-ID: <20200508111026.ky5vra7qhd5r42an@barron.net> On 08/05/20 11:58 +0200, Harald Jensås wrote: >Hi, > >I'm working on a change to remove some parameters related to IPv6 in >TripleO. This is to simplify/automate the process for operators, and >reduce failures due to user error. >See: https://review.opendev.org/723898 > >I'm not sure what to do about the 'ManilaIPv6' parameter in TripleO. >It control's the *DEFAULT/network_plugin_ipv6_enabled* parameter as >well as *${share_backend_name}/network_plugin_ipv6_enabled* for some >backends. Dockstrings [2] and [3] in the puppet module for these >backends says the same as the Manila doc[1]. > >A note in Manilla doc[1] reading: >""" >The ip version of the share network is defined by the flags of >network_plugin_ipv4_enabled and network_plugin_ipv6_enabled in the >manila.conf configuration since Pike. The network_plugin_ipv4_enabled >default value is set to True. The network_plugin_ipv6_enabled >default value is set to False. If network_plugin_ipv6_enabled >option is True, the value of network_plugin_ipv4_enabled will be >ignored, it means to support both IPv4 and IPv6 share network. > >""" > > >Is there a downside in simply enabling IPv6 by default in Manila? >Why is this option present at all? 
Why present: historically, some Manila backends only supported
exporting and mounting file shares over IPv4 share networks, but then
added IPv6 capabilities and needed configuration for their Manila
drivers. Those drivers went down different code paths depending on
whether IPv6 capability was enabled or not.

Why default False: backwards compatibility. If the backing storage
itself, outside of OpenStack proper, hadn't been upgraded or
configured to support IPv6, leaving the configuration toggle False
ensured that the Manila driver wouldn't treat the backend device as
having capabilities that it lacked.

Can we remove this option now? Maybe. As you observe, nowadays it
really seems to be used only by Dell-EMC drivers. We can check current
requirements with the folks who use these drivers and see if the
option is still needed. Even if it is, we might be able to toggle it
by means other than this THT/puppet variable. That said, Manila
follows a deprecate-in-one-release, remove-or-change-in-the-following
policy with regard to configuration option changes (removal of options
or change in their default values in manila.conf proper).

Goutham Pacha Ravi (Manila PTL) and I talked about this and were going
to ask whether it would make sense to leave the parameter as is in the
review in question but file an LP bug calling for its removal. We can
then deprecate and remove the manila configuration variable it
controls, or control it by other means, while following our normal
configuration change procedures.

We have, by the way, independent motivation to remove the THT/puppet
parameter anyway, namely that it has global scope for Manila but is
being used for backend-specific configuration. That didn't matter for
TripleO when it only supported configuring a single Manila back end,
but that single-back-end limitation is getting removed.
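For reference, the toggles under discussion live in manila.conf; a sketch of what enabling IPv6 there looks like, with "backend1" standing in for a real backend section name:

```ini
[DEFAULT]
# False by default for backwards compatibility; when True, IPv4
# support is implied as well, per the Manila docs quoted above
network_plugin_ipv6_enabled = True

[backend1]
# some backends (e.g. the Dell-EMC drivers) also read a
# per-backend copy of the same flag
network_plugin_ipv6_enabled = True
```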
-- Tom Barron > > > >[1] >https://docs.openstack.org/manila/latest/admin/shared-file-systems-network-plugins.html >[2] >https://opendev.org/openstack/puppet-manila/src/branch/master/manifests/backend/dellemc_vnx.pp#L48-L52 >[3] >https://opendev.org/openstack/puppet-manila/src/branch/master/manifests/backend/dellemc_unity.pp#L49-L53 > >-- >Harald Jensås > > From rosmaita.fossdev at gmail.com Fri May 8 11:10:30 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 8 May 2020 07:10:30 -0400 Subject: [cinder] RC-3 released Message-ID: <0487df94-19c1-9ad6-8d7d-8eb13faa7875@gmail.com> Just for awareness: we cut a RC-3 yesterday in order to include a backport of Corey Bryant's eventlet fix [0]. So the official Ussuri release of cinder will be based on RC-3. (I'm only mentioning this because at the weekly meeting, it looked like the release would be based on RC-2.) [0] https://review.opendev.org/#/c/725797/ From hjensas at redhat.com Fri May 8 14:59:19 2020 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Fri, 08 May 2020 16:59:19 +0200 Subject: [TripleO][Manila] 'DEFAULT/network_plugin_ipv6_enabled' parameter - why is it not simply enabled by default? In-Reply-To: <20200508111026.ky5vra7qhd5r42an@barron.net> References: <20200508111026.ky5vra7qhd5r42an@barron.net> Message-ID: <6c67a57c7775e27f41a4c03f16ba5094873d636f.camel@redhat.com> On Fri, 2020-05-08 at 07:10 -0400, Tom Barron wrote: > On 08/05/20 11:58 +0200, Harald Jensås wrote: > > Hi, > > > > I'm working on a change to remove some parameters related to IPv6 > > in > > TripleO. This is to simplify/automate the process for operators, > > and > > reduce failures due to user error. > > See: https://review.opendev.org/723898 > > > > I'm not sure what to do about the 'ManilaIPv6' parameter in > > TripleO. > > It control's the *DEFAULT/network_plugin_ipv6_enabled* parameter as > > well as *${share_backend_name}/network_plugin_ipv6_enabled* for > > some > > backends. 
Dockstrings [2] and [3] in the puppet module for these > > backends says the same as the Manila doc[1]. > > > > A note in Manilla doc[1] reading: > > """ > > The ip version of the share network is defined by the flags of > > network_plugin_ipv4_enabled and network_plugin_ipv6_enabled in the > > manila.conf configuration since Pike. The > > network_plugin_ipv4_enabled > > default value is set to True. The network_plugin_ipv6_enabled > > default value is set to False. If network_plugin_ipv6_enabled > > option is True, the value of network_plugin_ipv4_enabled will be > > ignored, it means to support both IPv4 and IPv6 share network. > > > > """ > > > > > > Is there a downside in simply enabling IPv6 by default in Manila? > > Why is this option present at all? > > Why present: historically, some Manila backends only supported > exporting and mounting file shares over IPv4 share networks, but > then > added IPv6 capabilities and needed configuration for their Manila > drivers. Those drivers went down different code paths depending on > whether IPv6 capability was enabled or not. > > Why default False: backwards compatibility. If the back storage > itself, outside of OpenStack proper, hadn't been upgraded or > configured to support IPv6, leaving the configuration toggle False > ensured that the Manila driver wouldn't treat the backend device as > having capabilities that it lacked. > > Can we remove this option now? Maybe. As you observe, it seems > really to only be used nowadays by Dell-EMC drivers. We can check > current requirements with the folks who use these drivers and see if > the option is still needed. Even if it is, we might be able to > toggle > it different means than this THT/puppet variable. That said, Manila > follows a deprecate in one release, remove/change in the following > policy with regard to configuration option changes (removal of > options > or change in their default values in manila.conf proper). 
> > Goutham Pacha Ravi (Manila PTL) and I talked about this and were > going > to ask if it would make sense to leave the parameter as is in the > review in question but make a LP Bug calling for its removal. We > can > then deprecate and remove the manila configuration variable it > controls, or control it by other means, but follow our normal > configuration change procedures. > > We have by the way independent motivation to remove the THT/puppet > parameter anyways, namely that it has global scope for Manila but is > being used for back end specific configuration. That didn't matter > for TripleO when it only supported configuring a single Manila back > end but that single back end limitation is getting removed. > > -- Tom Barron > Thanks Tom! It makes sense to not change this in TripleO for now then. -- Harald > > > > > > [1] > > https://docs.openstack.org/manila/latest/admin/shared-file-systems-network-plugins.html > > [2] > > https://opendev.org/openstack/puppet-manila/src/branch/master/manifests/backend/dellemc_vnx.pp#L48-L52 > > [3] > > https://opendev.org/openstack/puppet-manila/src/branch/master/manifests/backend/dellemc_unity.pp#L49-L53 > > > > -- > > Harald Jensås > > > > From allison at openstack.org Fri May 8 15:08:40 2020 From: allison at openstack.org (Allison Price) Date: Fri, 8 May 2020 10:08:40 -0500 Subject: Uniting TC and UC In-Reply-To: <171f0f85af1.1298868ff97980.1675370170083386145@ghanshyammann.com> References: <171f0f85af1.1298868ff97980.1675370170083386145@ghanshyammann.com> Message-ID: <140A4316-F9E1-4916-A67F-0ACD15891421@openstack.org> Allison Price OpenStack Foundation allison at openstack.org > On May 7, 2020, at 4:09 PM, Ghanshyam Mann wrote: > > ---- On Thu, 07 May 2020 09:32:42 -0500 Mohamed Elsakhawy > wrote ---- >> Hi All, >> >> As you may know already, there has been an ongoing discussion to merge UC and TC under a single body. Three options were presented, along with their impact on the bylaws. 
>> >> http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012806.html >> >> We had several discussions in the UC on the three options as well as the UC items that need to be well-represented under the common committee, and here’s what we propose for the merger: >> >> 1- There’s consensus on utilizing option 1, thus requiring no bylaws change and merging AUC and ATC : >> >> 1- No bylaws change >> As bylaws changes take a lot of time and energy, the simplest approach would be to merge the TC and UC without changing the bylaws at all. The single body (called TC) would incorporate the AUC criteria by adding all AUC members as extra-ATC. It would tackle all aspects of our community. To respect the letter of the bylaws, the TC would formally designate 5 of its members to be the 'UC' and those would select a 'UC chair'. But all tasks would be handled together. >> >> 2- Two main core UC responsibilities need to continue to be represented under TC >> - Representing the user community to the broader community (Board of directors, TC...) > > +1. > >> - Working with user groups worldwide to keep the community vibrant and informed. > > This was one of my main concerns and worry since starting. This is really important which is more of managerial activities > and event/group coordination along with OpenStack Ambassadors and IMO does not really fit under TC activities, we may > need some other place/team to take care of these tasks. Helping user groups and ambassadors is actually something that the OpenStack Foundation staff has dedicated resources for, so I don’t think that it’s something that the TC needs to worry about incorporating into their activities or setting up another team. > > -gmann > >> >> 3- The current active workgroups under UC’s governance will be moved to be under TC’s governance. 
>> >> 4- AUC and ATC to be merged into a single list, potentially called AC (Active Contributor) >> >> 5- We hope to have the merger finalized before the upcoming UC elections in September 2020. >> >> >> In addition to discussions over the mailing list, we also have the opportunity of "face-to-face" discussions at the upcoming PTG. >> >> >> Thanks >> >> >> Mohamed >> --melsakhawy -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri May 8 15:50:41 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 08 May 2020 10:50:41 -0500 Subject: Uniting TC and UC In-Reply-To: <140A4316-F9E1-4916-A67F-0ACD15891421@openstack.org> References: <171f0f85af1.1298868ff97980.1675370170083386145@ghanshyammann.com> <140A4316-F9E1-4916-A67F-0ACD15891421@openstack.org> Message-ID: <171f4fb4b55.1206cc32221707.6969322475172664736@ghanshyammann.com> ---- On Fri, 08 May 2020 10:08:40 -0500 Allison Price wrote ---- > > Allison PriceOpenStack Foundationallison at openstack.org > > > > On May 7, 2020, at 4:09 PM, Ghanshyam Mann wrote: > ---- On Thu, 07 May 2020 09:32:42 -0500 Mohamed Elsakhawy wrote ---- > Hi All, > > As you may know already, there has been an ongoing discussion to merge UC and TC under a single body. Three options were presented, along with their impact on the bylaws. > > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012806.html > > We had several discussions in the UC on the three options as well as the UC items that need to be well-represented under the common committee, and here’s what we propose for the merger: > > 1- There’s consensus on utilizing option 1, thus requiring no bylaws change and merging AUC and ATC : > > 1- No bylaws change > As bylaws changes take a lot of time and energy, the simplest approach would be to merge the TC and UC without changing the bylaws at all. 
The single body (called TC) would incorporate the AUC criteria by adding all AUC members as extra-ATC. It would tackle all aspects of our community. To respect the letter of the bylaws, the TC would formally designate 5 of its members to be the 'UC' and those would select a 'UC chair'. But all tasks would be handled together. > > 2- Two main core UC responsibilities need to continue to be represented under TC > - Representing the user community to the broader community (Board of directors, TC...) > > +1. > > - Working with user groups worldwide to keep the community vibrant and informed. > > This was one of my main concerns and worry since starting. This is really important which is more of managerial activities > and event/group coordination along with OpenStack Ambassadors and IMO does not really fit under TC activities, we may > need some other place/team to take care of these tasks. > > Helping user groups and ambassadors is actually something that the OpenStack Foundation staff has dedicated resources for, so I don’t think that it’s something that the TC needs to worry about incorporating into their activities or setting up another team. Thanks, Allison for updates. That is much helpful for this merge. -gmann > > -gmann > > > 3- The current active workgroups under UC’s governance will be moved to be under TC’s governance. > > 4- AUC and ATC to be merged into a single list, potentially called AC (Active Contributor) > > 5- We hope to have the merger finalized before the upcoming UC elections in September 2020. > > > In addition to discussions over the mailing list, we also have the opportunity of "face-to-face" discussions at the upcoming PTG. 
> > > Thanks > > > Mohamed > --melsakhawy > From rfolco at redhat.com Fri May 8 16:23:43 2020 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 8 May 2020 13:23:43 -0300 Subject: [tripleo] TripleO CI Summary: Unified Sprint 26 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 26** (Apr 17 thru May 06). The following is a summary of completed work during this sprint cycle [1]: - Ussuri branching prep work: container/image builds, job definitions and shifted periodic pipelines. - Contributed for TripleO-Operator-Ansible. - Initial design for the internal component pipeline prep work: resolved dependencies and requirements to build the pipeline. Successfully built RHEL8 OSP containers. - Started migrating promoter jobs to CentOS8. Converted molecule delegated jobs to run on tox environment (WIP - code needs a few fixes and consolidation). - Resumed work on an upstream standalone TLS job. - Resumed work to validate ansible 2.9 and the latest ceph-ansible. - Ruck/Rover recorded notes [2]. The planned work for the next sprint [3] extends the work started in the previous sprint and focuses on the following: - Stable branching work (ussuri). - TripleO-Operator-Ansible: contribute w/ code, docs, tests and zuul jobs. - Internal component pipeline: create basic standalone and baremetal jobs. - Continue vexxhost migration and zuul reproducer (on-going) work. - Fix promoter tests on CentOS8 and build an internal promoter server. - TripleO IPA multinode job merge. - Build CentOS8 train full promotion pipeline and a job to upgrade from train to ussuri. The Ruck and Rover for this sprint are Rafael Folco (rfolco) and Bhagyashri Shewale (bhagyashris). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in etherpad [4]. 
Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-26 [2] https://hackmd.io/1pY-KQB_QwOe-a-5oEXTRg [3] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-27 [4] https://hackmd.io/2MdkNAUuT7aBcM0Yck4xnw -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri May 8 16:54:29 2020 From: zigo at debian.org (Thomas Goirand) Date: Fri, 8 May 2020 18:54:29 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <1588928163362.88879@binero.com> References: <1588928163362.88879@binero.com> Message-ID: On 5/8/20 10:56 AM, Tobias Urdin wrote: > Hello, > > > Just a heads up that we've had a warning in the Ussuri release about it > being the > > last release where we are going to support Puppet 5. > > > This patch [1] was now submitted to remove this warning and officially > state that > > Puppet 5 is unsupported from the Victoria release. > > > [1] https://review.opendev.org/726310​ > Tobias, I don't agree with this decision. Debian Buster is still with puppet 5, and puppet 6 hasn't even been uploaded to Unstable yet. How can I make you change your mind? Thomas Goirand (zigo) From Arkady.Kanevsky at dell.com Fri May 8 17:17:07 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Fri, 8 May 2020 17:17:07 +0000 Subject: [Tempest][Restack]Call for help on 2020:06 Certificaion In-Reply-To: <371984283.1451826.1584718659038@mail.yahoo.com> References: <371984283.1451826.1584718659038.ref@mail.yahoo.com> <371984283.1451826.1584718659038@mail.yahoo.com> Message-ID: Nobody on the interop call today. I dropped From: prakash RAMCHANDRAN Sent: Friday, March 20, 2020 10:38 AM To: openstack-discuss at lists.openstack.org Subject: [Tempest][Restack]Call for help on 2020:06 Certificaion [EXTERNAL EMAIL] Hi all, Appreciate all your inputs in mail and one-on-one conversations I had with few of you. 
Like to set proper expectations before we deal with extending our goals. First we must deliver on continuity of Refstack, while we evaluate the value it adds to end users through survey. I. Every Release - There is a new criterion for Certification - Do we have one for 2020.06? - Who has volunteered or would like to volunteer for this effort? II. Can we have previous WG Chair or anyone can brief us (by call or by email ) as what the process was and how it was managed. |||. Who were last release OSF Reviewers? Can they volunteer for 2020:06 release definition for Refstack - https://opendev.org/openstack/refstack/ VI. Given the circumstances (CoViD-19) will it be possible to deliver Refstack in June /July 2020? (Need some estimates from TC or others who managed this in past years) V. Is Zun - ?, Magnum -? Are both OCI and/or CNI compliant? - Can they be tested using Tempest and add those two or any other container projects to refstack core? For Survey (suggest any specifics Questions to ask- may be few questions to - Customer / Vendors/ new feedback Q1-Value of Certification , Q2. How Often customers ask for it in RFP or sales cycle?, Q3. What additional Tests, process, frequency etc will add value for them? Q4. Do you think we should continue the efforts of Refstack?) Let me know through email and have added this to TC meeting for Thursday April 2. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting Thanks Prakash Sent from Yahoo Mail on Android -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri May 8 17:58:19 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 May 2020 17:58:19 +0000 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: References: <1588928163362.88879@binero.com> Message-ID: <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> On 2020-05-08 18:54:29 +0200 (+0200), Thomas Goirand wrote: [...] > I don't agree with this decision. 
Debian Buster is still with
> puppet 5, and puppet 6 hasn't even been uploaded to Unstable yet.
>
> How can I make you change your mind?

My attempt at reading the tea leaves of what Puppet calls their
version support matrix indicates that Puppet 5.x enters "extended
support" this month and reaches EOL in November. OpenStack Victoria is
scheduled to release in October, so I guess it's a question of whether
the Puppet OpenStack team wants the burden of spending a cycle
targeting support for a Puppet version which will be EOL the month
after the release (especially given deployment projects usually
release as much as a month after the coordinated release already).

Debian Bullseye will probably have to release with Puppet >= 6, so
hopefully it'll be in buster-backports soon after it enters testing,
but it would be good to find out from Debian's Puppet package
maintainers what their plans are. It looks like Puppet 6.x has been
around for over a year now, so hopefully it's at least on their radar.

--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pramchan at yahoo.com Fri May 8 19:34:03 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 8 May 2020 19:34:03 +0000 (UTC) Subject: [all][InteropWG] Request to review tests your Distro for 2020.06 draft guidelines References: <1708160800.349541.1588966443299.ref@mail.yahoo.com> Message-ID: <1708160800.349541.1588966443299@mail.yahoo.com> Hi all, Please review your tests for Draft 2020.06 guidelines to be proposed to Board.You can do that on and should start appearing in next 24-48 hours depending on Zuulhttps://refstack.openstack.org/#/community_results Plus please register for InteropWG  PTG meet on  June 1 st 6AM-*AM PDT slot See etherpadsPTG eventhttps://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020Specific invite Tempest, TC members, QA/API SIG teams, Edge SIG WG , Baremetal SIG (for Ironic), K8s SIG (for OoK / KoO) & Scientific SIG teams. Other discussions  https://etherpad.opendev.org/p/interop2020 Welcome to join weekly  Friday Meetings (Refer NA/EU/APJ in etherpad below)  https://etherpad.opendev.org/p/interop Appreciate all support from committee and special appreciation to Mark T Voelker for providing the bridge.He has been immensely valuable in bringing as Vice Chair the wealth of history to enable this Interop WG. ThanksPrakash  -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri May 8 20:06:14 2020 From: zigo at debian.org (Thomas Goirand) Date: Fri, 8 May 2020 22:06:14 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> Message-ID: <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> On 5/8/20 7:58 PM, Jeremy Stanley wrote: > On 2020-05-08 18:54:29 +0200 (+0200), Thomas Goirand wrote: > [...] 
>> I don't agree with this decision. Debian Buster is still with >> puppet 5, and puppet 6 hasn't even been uploaded to Unstable yet. >> >> How can I make you change your mind? > > My attempt at reading the tea leaves Puppet calls their version > support matrix indicates that Puppet 5.x enters "extended support" > this month and reaches EOL in November. OpenStack Victoria is > scheduled to release in October, so I guess it's a question of > whether the Puppet OpenStack team wants the burden of spending a > cycle targeting support for a Puppet version which will be EOL the > month after the release (especially given deployment projects > usually release as much as a month after the coordinated release > already). I don't see how this can be a burden. It's not as if the language changed that much, or as if there were major incompatibilities. Also, even if Puppet upstream is moving, Puppet 5 will stay on Debian Buster for the life of stable, so there it will still be supported. > Debian Bullseye will probably have to release with Puppet >= 6, so > hopefully it'll be in buster-backports soon after it enters testing, > but it would be good to find out from Debian's Puppet package > maintainers what their plans are. It looks like Puppet 6.x has been > around for over a year now, hopefully it's at least on their radar. The puppet packaging "team" is understaffed. All of the work is done by Apollon Oikonomopoulos without much help. If he needs help, I'll see how I can help him. Though indeed, we need to know what the plans are. Packaging puppet 6 is far from easy: it involves a lot of new stuff. Until we know, I would very much prefer if we put this decision on hold.
Cheers, Thomas Goirand (zigo) From mnaser at vexxhost.com Fri May 8 20:23:25 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 8 May 2020 16:23:25 -0400 Subject: [tc] ptg etherpad In-Reply-To: References: Message-ID: Hi everyone, We've set up times right now, but, we'd like TC members to please mark their names next to the topics that they're interested in to avoid any conflicts and making sure that we schedule them in the correct timeslot. Thanks, Mohammed On Tue, May 5, 2020 at 12:18 PM Mohammed Naser wrote: > > Hi everyone, > > Please have a look at the following Etherpad and try to fill it out > with any suggested topics as well as attendance in order to gauge how > much time we will need at the PTG: > > https://etherpad.opendev.org/p/tc-victoria-ptg > > Thank you, > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. -- Mohammed Naser VEXXHOST, Inc. From fungi at yuggoth.org Fri May 8 20:24:49 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 May 2020 20:24:49 +0000 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> Message-ID: <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> On 2020-05-08 22:06:14 +0200 (+0200), Thomas Goirand wrote: > On 5/8/20 7:58 PM, Jeremy Stanley wrote: [...] > > whether the Puppet OpenStack team wants the burden of spending a > > cycle targeting support for a Puppet version which will be EOL the > > month after the release (especially given deployment projects > > usually release as much as a month after the coordinated release > > already). > > I don't see how this can be a burden. It's not as if the language > changed that much and if there was major incompatibilities. [...] 
Well, it does mean keeping those modules working with two major versions of Puppet which, speaking from past experience, is not easy (and a big part of why we decided to replace all our orchestration and configuration management in OpenDev with something other than Puppet after the Puppet 3->4->5 transition). How much of a strain that is on the Puppet OpenStack team, I can't say. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Fri May 8 21:11:29 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 08 May 2020 16:11:29 -0500 Subject: [qa][tempest-plugins] Tempest plugins guide for release, stable branch support, testing etc Message-ID: <171f620fd2a.edcd09b928901.4838679907640170994@ghanshyammann.com> Hello Everyone, This is one of the pending items the Tempest team wanted to do. Tempest has more than 70 plugins, and 57 are active and in a running state. Tempest and its plugins have similar use cases for testing upstream CI/CD as well as the production cloud. There are always questions about how we can coordinate and make release and stable branch testing more consistent among all the plugins. To do that, I have documented the policy all plugins can follow along with Tempest. - https://docs.openstack.org/tempest/latest/plugins/index.html The above document includes plugin creation, releases for starting support for a new release or ending support for Extended Maintenance branches, testing the supported stable branches, etc. I will be working on a few of those policy implementations on plugins. One thing I have already started is adding the supported stable branch job on the plugins' master gate. A few plugins like neutron, ironic, heat, and Octavia are already doing this, but not all.
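The service-client registration mentioned at the end of gmann's email can be sketched roughly like this (the plugin and client names are hypothetical, and a real plugin subclasses tempest.test_discover.plugins.TempestPlugin rather than a plain class; it is shown standalone here only so the registration structure is visible without Tempest installed):

```python
# Sketch of a Tempest plugin registering its service clients so that other
# plugins can look them up through Tempest's client manager. "myservice",
# the module path, and MyServiceClient are illustrative assumptions.

class MyServicePlugin:
    def get_service_clients(self):
        # Each dict registers one module of clients with Tempest.
        return [{
            'name': 'myservice',                # registry name other plugins use
            'service_version': 'myservice.v1',  # identifies service + API version
            'module_path': 'myservice_tempest_plugin.services.v1',
            'client_names': ['MyServiceClient'],
        }]

registration = MyServicePlugin().get_service_clients()[0]
print(sorted(registration))
```

With this kind of registration in place, cross-plugin integrated testing becomes more consistent because consumers look clients up by registry name instead of importing another plugin's internals directly.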
- https://review.opendev.org/#/q/topic:stable-testing-tempest-plugins+(status:open+OR+status:merged) This document will be improved with more items and more best practices we can use for writing tests among all the plugins. One of the pending items I can think of is to make the plugins' service clients a stable interface for other plugins so that integrated testing among plugins can be done in a more consistent and reliable way. All the plugins can register their service clients in Tempest, and other plugins can detect them from Tempest itself. Tempest already provides the central framework to do that. -gmann From sean.mcginnis at gmx.com Fri May 8 21:28:23 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 8 May 2020 16:28:23 -0500 Subject: [RelMgmt] Victoria PTG Schedule Message-ID: <87d3b653-d489-3fc1-c8bc-f5ec126e5c7e@gmx.com> Hello release team, In our weekly meeting we had discussed a couple of time slots for the virtual PTG that could work for the team. We were waiting on the TC sessions to be set before picking one to avoid conflicts though. Those TC sessions are now set, so it looks like we will need to take the earlier timeslot. I have put us down for the Folsom room for 13UTC-14UTC. I have started an etherpad here: https://etherpad.opendev.org/p/relmgmt-victoria-ptg We can still capture things in our tracking etherpad, but if we end up needing to take any other notes, please add them to the PTG etherpad. Also, feel free to add anything in there ahead of the PTG to make sure we cover anything that we want to discuss. Let me know if there are any questions or concerns. Sean From sean.mcginnis at gmx.com Fri May 8 21:34:29 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 8 May 2020 16:34:29 -0500 Subject: [release] Release countdown for week R-0, May 11 - 15 Message-ID: <20200508213429.GA902838@sm-workstation> Development Focus ----------------- We will be releasing the coordinated OpenStack Ussuri release next week, on May 13.
Thanks to everyone involved in the Ussuri cycle! We are now in pre-release freeze, so no new deliverable will be created until final release, unless a release-critical regression is spotted. Otherwise, teams participating in the virtual PTG June 1-5 should start to plan what they will be discussing there, by creating and filling team etherpads. You can access the list of PTG etherpads at: http://ptg.openstack.org/etherpads.html General Information ------------------- On release day, the release team will produce final versions of deliverables following the cycle-with-rc release model, by re-tagging the commit used for the last RC. A patch doing just that will be proposed. PTLs and release liaisons should watch for that final release patch from the release team. While not required, we would appreciate having an ack from each team before we approve it on the 13th, so that their approval is included in the metadata that goes onto the signed tag. Upcoming Deadlines & Dates -------------------------- Final Ussuri release: May 13 Virtual Victoria PTG: June 1-5 From kennelson11 at gmail.com Sat May 9 00:13:55 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 8 May 2020 17:13:55 -0700 Subject: [all] Virtual Ussuri Celebration Message-ID: Hello Everyone! I wanted to invite you all to a virtual celebration of the release! Next Friday, May 15th at 20:00 UTC, I invite you to join me in celebrating :) The purpose of this meeting will be twofold: 1. To celebrate all that our community has accomplished over the last release since we can't get together in person in June. I was thinking trivia and a baking contest (I was going to attempt a cake in the shape of the OpenStack logo or maybe the actual Ussuri logo) :) There was also a request for piano karaoke, which is also still on the table. 2. To test out the OpenDev team's meetpad instance and work out kinks so that it can be vetted for PTG use.
Here is the room we will be testing: https://meetpad.opendev.org/virtual-ussuri-celebration Worst case scenario, I'll share my zoom. Can't wait to see you all there (and what you've baked)! -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Sat May 9 02:55:14 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 8 May 2020 19:55:14 -0700 Subject: [all][PTL] U Community Goal: Project PTL & Contrib Docs Update #9 Message-ID: Hello Everyone! Even more excellent progress[1] has been made as we are inching closer to completion. Again, thank you for all the hard work! Projects listed below are missing progress/merged status according to the story[4]. If you are on this list, please get started IMMEDIATELY. All told, if you are doing the bare minimum of only filling out the template[2] it shouldn't take more than 5-10 minutes per repo. - Barbican - Cloudkitty - Congress - Cyborg - Ec2-API - Freezer - Heat - Horizon - Karbor - Kuryr - LOCI - Masakari - Mistral - Monasca - Murano - OpenStack Charms - openstack-chef - OpenStackAnsible - OpenStackClient - Packaging-RPM - Placement - Puppet OpenStack - Rally - Release Management - Senlin - Solum - Storlets - Swift - Tacker - Telemetry - Tricircle - TripleO - Zaqar PLEASE add me if you need help with reviews and let me know if there is any other help I can provide; we are nearly out of time and there is a lot of work to be done still. If you have questions about the goal itself, here is the link to that[3]. And as you push patches, please be sure to update the StoryBoard story[4]. Thanks! 
-Kendall (diablo_rojo) [1] Progress: https://review.opendev.org/#/q/topic:project-ptl-and-contrib-docs+(status:open+OR+status:merged) [2] Cookiecutter Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst [3] Accepted Goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html [4] Task Tracking: https://storyboard.openstack.org/#!/story/2007236 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Sat May 9 09:28:26 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Sat, 9 May 2020 09:28:26 +0000 Subject: [all][PTL] U Community Goal: Project PTL & Contrib Docs Update #9 In-Reply-To: References: Message-ID: <1589016506821.17738@binero.com> Hello Kendall, Unfortunately I've completely missed this but tried to catch up for Puppet OpenStack starting with [1] and [2] (need to do [2] for all our 40+ modules). We've already cut stable/ussuri branches, may not be worth backporting and wasting CI resources to get the CONTRIBUTING.rst file in, if really needed we can get that done as well. [1] https://review.opendev.org/#/c/726509/ [2] https://review.opendev.org/#/c/726510/? Best regards ________________________________ From: Kendall Nelson Sent: Saturday, May 9, 2020 4:55 AM To: OpenStack Discuss Subject: [all][PTL] U Community Goal: Project PTL & Contrib Docs Update #9 Hello Everyone! Even more excellent progress[1] has been made as we are inching closer to completion. Again, thank you for all the hard work! Projects listed below are missing progress/merged status according to the story[4]. If you are on this list, please get started IMMEDIATELY. All told, if you are doing the bare minimum of only filling out the template[2] it shouldn't take more than 5-10 minutes per repo. 
* Barbican * Cloudkitty * Congress * Cyborg * Ec2-API * Freezer * Heat * Horizon * Karbor * Kuryr * LOCI * Masakari * Mistral * Monasca * Murano * OpenStack Charms * openstack-chef * OpenStackAnsible * OpenStackClient * Packaging-RPM * Placement * Puppet OpenStack * Rally * Release Management * Senlin * Solum * Storlets * Swift * Tacker * Telemetry * Tricircle * TripleO * Zaqar PLEASE add me if you need help with reviews and let me know if there is any other help I can provide; we are nearly out of time and there is a lot of work to be done still. If you have questions about the goal itself, here is the link to that[3]. And as you push patches, please be sure to update the StoryBoard story[4]. Thanks! -Kendall (diablo_rojo) [1] Progress: https://review.opendev.org/#/q/topic:project-ptl-and-contrib-docs+(status:open+OR+status:merged) [2] Cookiecutter Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst [3] Accepted Goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html [4] Task Tracking: https://storyboard.openstack.org/#!/story/2007236 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat May 9 15:41:30 2020 From: zigo at debian.org (Thomas Goirand) Date: Sat, 9 May 2020 17:41:30 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> Message-ID: <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> On 5/8/20 10:24 PM, Jeremy Stanley wrote: > On 2020-05-08 22:06:14 +0200 (+0200), Thomas Goirand wrote: >> On 5/8/20 7:58 PM, Jeremy Stanley wrote: > [...] 
>>> whether the Puppet OpenStack team wants the burden of spending a >>> cycle targeting support for a Puppet version which will be EOL the >>> month after the release (especially given deployment projects >>> usually release as much as a month after the coordinated release >>> already). >> >> I don't see how this can be a burden. It's not as if the language >> changed that much and if there was major incompatibilities. > [...] > > Well, it does mean keeping those modules working with two major > versions of Puppet which, speaking from past experience, is not > easy (and a big part of why we decided to replace all our > orchestration and configuration management in OpenDev with something > other than Puppet after the Puppet 3->4->5 transition). My understanding is that adding compatibility to a new version isn't easy, but keeping compat backward isn't that hard. > How much of a strain that is on the Puppet OpenStack team, I can't > say. It means at least keeping one CI job running with puppet 5. This could be the Debian one if I succeed in restoring Debian as a voting set of jobs. Cheers, Thomas Goirand (zigo) From tobias.urdin at binero.com Sat May 9 18:52:17 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Sat, 9 May 2020 18:52:17 +0000 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org>, <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> Message-ID: <1589050337848.3939@binero.com> Hello, I don't agree, we should continue on the chosen path of not supporting Puppet 5 in the Victoria release. 
in the Victoria release. We've had Puppet 6 support since I introduced the testing for it in 2018; back then we ran Puppet 5 and Puppet 6 on every commit, until we deemed it pretty redundant and moved the Puppet 6 jobs to experimental while keeping the Puppet 6 syntax and unit jobs. We've never claimed that Puppet OpenStack is going to support downstream OS repackaging of Puppet; even though RDO/TripleO does the same, we've always tested with upstream Puppet versions. Only Debian has skipped that and tests with downstream packages. I don't think keeping Puppet 5 jobs for Debian would be a good idea because it would block the whole idea of moving to Puppet 6 in the first place. Puppet 5 will be EOL one month after the Victoria release. While I highly doubt that we will introduce changes that will break Puppet 5 in the Victoria cycle, we would like to start looking forward instead of being stuck with all the legacy stuff (now that we've moved to CentOS 8 as well). After being active in the project a longer time, we've gone from semi-broken maintenance-mode-only to more active, keeping everything up to date and following the changes of the OpenStack community. Thanks to a number of contributors, thank you everyone! There are a lot of things that we could do in Puppet OpenStack (but hey, resources to perform them are scarce); just to give an example, the new Resource API [1] (whether it be with or without the usage of the OpenStack CLI). If anybody ever wants something to do, I have a nice big list of things that we could do; I've posted an old version of it in an email to this mailing list during PTL nominations.
Best regards [1] https://review.opendev.org/#/q/topic:new-providers+(status:open+OR+status:merged) ________________________________________ From: Thomas Goirand Sent: Saturday, May 9, 2020 5:41 PM To: openstack-discuss at lists.openstack.org Subject: Re: [puppet] Puppet 5 is officially unsupported in Victoria release On 5/8/20 10:24 PM, Jeremy Stanley wrote: > On 2020-05-08 22:06:14 +0200 (+0200), Thomas Goirand wrote: >> On 5/8/20 7:58 PM, Jeremy Stanley wrote: > [...] >>> whether the Puppet OpenStack team wants the burden of spending a >>> cycle targeting support for a Puppet version which will be EOL the >>> month after the release (especially given deployment projects >>> usually release as much as a month after the coordinated release >>> already). >> >> I don't see how this can be a burden. It's not as if the language >> changed that much and if there was major incompatibilities. > [...] > > Well, it does mean keeping those modules working with two major > versions of Puppet which, speaking from past experience, is not > easy (and a big part of why we decided to replace all our > orchestration and configuration management in OpenDev with something > other than Puppet after the Puppet 3->4->5 transition). My understanding is that adding compatibility to a new version isn't easy, but keeping compat backward isn't that hard. > How much of a strain that is on the Puppet OpenStack team, I can't > say. It means at least keeping one CI job running with puppet 5. This could be the Debian one if I succeed in restoring Debian as a voting set of jobs. 
Cheers, Thomas Goirand (zigo) From zigo at debian.org Sat May 9 22:56:37 2020 From: zigo at debian.org (Thomas Goirand) Date: Sun, 10 May 2020 00:56:37 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <1589050337848.3939@binero.com> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> Message-ID: <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> On 5/9/20 8:52 PM, Tobias Urdin wrote: > Hello, > > I don't agree, we should continue on the chosen path of not supporting Puppet 5 > in the Victoria release. > > We've had Puppet 6 support since I introduced the testing for it in 2018 back then we ran > Puppet 5 and Puppet 6 on every commit until we deemed it pretty redundant and moved > Puppet 6 jobs to experimental while keeping the Puppet 6 syntax and unit jobs. > > We've never claimed that Puppet OpenStack is going to support downstream OS repackaging of > Puppet, even though RDO/TripleO does the same we've always tested Puppet with upstream > versions for our testing, only Debian has skipped that and testing with downstream packages. I don't understand why you insist that we shouldn't use downstream distribution packages. I haven't heard that the project claimed that we are "support[ing] downstream OS repackaging of Puppet", but I haven't heard that we aren't either, or even any preference in this regard. Did I miss this information somewhere? Did someone even write this somewhere? Or is this only your own view? One thing is that the Debian packages for Puppet are of better quality than the upstream ones in many ways. There's also the problem that adding an external artifact is *not* what my project is about (see below).
> I don't think keeping Puppet 5 jobs for Debian would be a good idea because it would block the > whole idea of moving to Puppet 6 in the first place. What would we gain from Puppet 6? Are there some improvements in the language you would like to benefit from? We aren't even using half of what Puppet 5 offers (like typed variables, for example). Maybe we could start with that. > Puppet 5 will be EOL one month after Victoria release, while I highly doubt that we will introduce > changes that will break Puppet 5 in the Victoria cycle we would like to start looking forwarding instead > of being stuck with all the legacy stuff Not having things that are breaking me is exactly what I am requesting here. So yeah, please don't break me !!! :) My installer is fully contained in Debian and contains absolutely ZERO external resources. I want to keep things this way for many reasons, including being able to ship the whole product on a redistributable CD made of things only from Debian. I very much agree that we should move forward with Puppet 6 at some point, but in Debian, we're far from being there yet, unfortunately. Packaging Puppet 6 requires a lot of things to happen, like for example (if I'm not mistaken) having Puppet upstream support Ruby 2.7, which is currently in Sid, plus packaging lots of Clojure stuff and so on. All I'm asking is that you delay this enough so the work can be done, and given the amount of work and my time constraints, I'm really not sure what this means, unfortunately. > If anybody ever want something to do I have nice big list of things that we could do, I've posted an old version > of it in an email to this mailing list during PTL nominations. Can you point at it so I can have a look? (just the subject so I can search, or any other pointer...). I have some ideas too of things I'd like to get done.
For example, I have started a provider for barbican secret, in order to get swift on-disk encryption set up automatically (this is currently a half-automated thing for me right now), and many other improvements of this kind. Have we booked some sessions for the virtual PTG? Cheers, Thomas Goirand (zigo) From zigo at debian.org Sat May 9 23:12:46 2020 From: zigo at debian.org (Thomas Goirand) Date: Sun, 10 May 2020 01:12:46 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> Message-ID: On 5/10/20 12:56 AM, Thomas Goirand wrote: > On 5/9/20 8:52 PM, Tobias Urdin wrote: >> Hello, >> >> I don't agree, we should continue on the chosen path of not supporting Puppet 5 >> in the Victoria release. >> >> We've had Puppet 6 support since I introduced the testing for it in 2018 back then we ran >> Puppet 5 and Puppet 6 on every commit until we deemed it pretty redundant and moved >> Puppet 6 jobs to experimental while keeping the Puppet 6 syntax and unit jobs. >> >> We've never claimed that Puppet OpenStack is going to support downstream OS repackaging of >> Puppet, even though RDO/TripleO does the same we've always tested Puppet with upstream >> versions for our testing, only Debian has skipped that and testing with downstream packages. > > I don't understand why you insist that we shouldn't use downstream > distribution packages. I haven't heard that the project claimed that we > are "support[ing] downstream OS repackaging of Puppet", but I haven't > heard that we aren't either, or even any preference in this regard.
Did someone even write this > somewhere? Or is this only your own view? > > One thing is that the Debian packages for Puppet are of better quality > than the upstream ones in many ways. There's also the problem that > adding an external artifact is *not* what my project is about (see below). One more thing: puppetlabs is only providing packages for the current stable distribution of Debian (whatever that is), never for testing or sid, and that's a perfectly valid environment where I sometimes test deployments. So if we become incompatible with Puppet 5, this also breaks this use case, currently. Cheers, Thomas Goirand (zigo) From hongbin034 at gmail.com Sun May 10 01:16:54 2020 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sat, 9 May 2020 21:16:54 -0400 Subject: OpenStack Ussuri Community Meetings In-Reply-To: <425EF78A-2155-4A4C-BEB2-93266DAA6FA7@openstack.org> References: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org> <422BE6AE-ACCC-427D-8C97-5142F5FACA70@cern.ch> <017C2735-2FB7-4AD3-B5AF-30730EDBFC43@openstack.org> <425EF78A-2155-4A4C-BEB2-93266DAA6FA7@openstack.org> Message-ID: Hi Allison, If it is not too late, I can attend the meeting on Thursday, May 14 at 0200 UTC and give a 5-minute update on the Zun project. However, if it is too hard to adjust the schedule at this time, that is fine. Sorry for the late request. Best regards, Hongbin On Thu, May 7, 2020 at 4:44 PM Allison Price wrote: > Hi Feilong, > > If you can share a few slides with me ahead of the meeting (ideally by EOD Monday, I am going to compile them all into one presentation that we will screenshare during the meeting. This way, the meeting recording will show the slides. Attached is a powerpoint template that you can use. I recommend > keeping it to 2-3 slides if possible. > > Thanks! > Allison > > > > > On May 7, 2020, at 3:21 PM, feilong wrote: > > Hi Allison, > > During the meeting, should we share a PPT/demo or just an oral > introduction about the highlights?
If a PPT is preferred, is there a > template we should use? Thanks. > > > On 6/05/20 4:03 AM, Allison Price wrote: > > Hi Tim, > > Yes, both the slides and recordings will be shared on the mailing list > after the meetings. > > Thanks, > Allison > > > On May 5, 2020, at 2:02 AM, Tim Bell wrote: > > Thanks for organising this. > > Will recordings / slides be made available ? > > Tim > > On 4 May 2020, at 20:07, Allison Price wrote: > > Hi everyone, > > The OpenStack Ussuri release is only a week and a half away! Members of > the TC and project teams are holding two community meetings: one on > Wednesday, May 13 and Thursday, May 14. Here, they will share some of the > release highlights and project features. > > Join us: > > > - *Wednesday, May 13 at 1400 UTC* > - Moderator > - Mohammed Naser, TC > - Presenters / Open for Questions > - Slawek Kaplonski, Neutron > - Michael Johnson, Octavia > - Goutham Pacha Ravi, Manila > - Mark Goddard, Kolla > - Balazs Gibizer, Nova > - Brian Rosmaita, Cinder > > > > - *Thursday, May 14 at 0200 UTC* > - Moderator: Rico Lin, TC > - Presenters / Open for Questions > - Michael Johnson, Octavia > - Goutham Pacha Ravi, Manila > - Rico Lin, Heat > - Feilong Wang, Magnum > - Brian Rosmaita, Cinder > > > > See you there! > Allison > > > > > > Allison Price > OpenStack Foundation > allison at openstack.org > > > > > > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From allison at openstack.org Sun May 10 16:18:07 2020 From: allison at openstack.org (Allison Price) Date: Sun, 10 May 2020 11:18:07 -0500 Subject: OpenStack Ussuri Community Meetings In-Reply-To: References: <9D938D3B-2525-45CC-98C1-E346706C9714@openstack.org> <422BE6AE-ACCC-427D-8C97-5142F5FACA70@cern.ch> <017C2735-2FB7-4AD3-B5AF-30730EDBFC43@openstack.org> <425EF78A-2155-4A4C-BEB2-93266DAA6FA7@openstack.org> Message-ID: <6AB24DED-3835-4C3D-8DF7-F27A77E7D097@openstack.org> Hi Hongbin, It is not too late! Would you be able to create and share 1-3 slides by end of day Monday? I can then add your slides to the deck and distribute to all of the presenters ahead of Thursday’s meeting. Thanks for volunteering! Allison > On May 9, 2020, at 8:16 PM, Hongbin Lu wrote: > > Hi Allison, > > If it is not too late, I can attend the meeting at Thursday, May 14 at 0200 UTC and give a 5 minute updates on the Zun project. However, if it is too hard to adjust the schedule at this time, that is fine. Sorry for the late request. > > Best regards, > Hongbin > > On Thu, May 7, 2020 at 4:44 PM Allison Price > wrote: > Hi Feilong, > > If you can share a few slides with me ahead of the meeting (ideally by EOD Monday, I am going to compile them all into one presentation that we will screenshare during the meeting. This way, the meeting recording will show the slides. Attached is a powerpoint template that you can use. I recommend keeping it to 2-3 slides if possible. > > Thanks! > Allison > > > > >> On May 7, 2020, at 3:21 PM, feilong > wrote: >> >> Hi Allison, >> >> During the meeting, should we share a PPT/demo or just an oral introduction about the highlights? If a PPT is preferred, is there a template we should use? Thanks. >> >> >> >> On 6/05/20 4:03 AM, Allison Price wrote: >>> Hi Tim, >>> >>> Yes, both the slides and recordings will be shared on the mailing list after the meetings. 
>>> >>> Thanks, >>> Allison >>> >>> >>>> On May 5, 2020, at 2:02 AM, Tim Bell > wrote: >>>> >>>> Thanks for organising this. >>>> >>>> Will recordings / slides be made available ? >>>> >>>> Tim >>>> >>>>> On 4 May 2020, at 20:07, Allison Price > wrote: >>>>> >>>>> Hi everyone, >>>>> >>>>> The OpenStack Ussuri release is only a week and a half away! Members of the TC and project teams are holding two community meetings: one on Wednesday, May 13 and Thursday, May 14. Here, they will share some of the release highlights and project features. >>>>> >>>>> Join us: >>>>> >>>>> Wednesday, May 13 at 1400 UTC >>>>> Moderator >>>>> Mohammed Naser, TC >>>>> Presenters / Open for Questions >>>>> Slawek Kaplonski, Neutron >>>>> Michael Johnson, Octavia >>>>> Goutham Pacha Ravi, Manila >>>>> Mark Goddard, Kolla >>>>> Balazs Gibizer, Nova >>>>> Brian Rosmaita, Cinder >>>>> >>>>> Thursday, May 14 at 0200 UTC >>>>> Moderator: Rico Lin, TC >>>>> Presenters / Open for Questions >>>>> Michael Johnson, Octavia >>>>> Goutham Pacha Ravi, Manila >>>>> Rico Lin, Heat >>>>> Feilong Wang, Magnum >>>>> Brian Rosmaita, Cinder >>>>> >>>>> >>>>> See you there! >>>>> Allison >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Allison Price >>>>> OpenStack Foundation >>>>> allison at openstack.org >>>>> >>>>> >>>>> >>>>> >>>> >>> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> ------------------------------------------------------ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Sun May 10 16:39:46 2020 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 10 May 2020 12:39:46 -0400 Subject: [requirements][qa] new pip resolver & our constraints needs Message-ID: <0D6BF508-39F0-4C6E-A5D2-E42327302485@doughellmann.com> The PyPA team is working on a new resolver for pip. As part of that work, they have had some questions about the way OpenStack uses the constraints feature. They’ve been great about taking input, and prometheanfire has been doing some testing to ensure the new work doesn’t break compatibility [1] (thanks Matthew!). There is a new question from the pip maintainers about whether constraints need to support “nameless” entries (by referring to a URL in a constraints file instead of using package names) [2]. I don’t see anything in the upper-constraints.txt that looks like a URL, but I don’t know how teams might be configuring their lower-constraints.txt or whether we do anything in devstack to munge the constraints list to point to local packages as part of LIBS_FROM_GIT handling. Is anyone aware of any uses of URLs in constraints files anywhere within OpenStack? Doug [1] https://review.opendev.org/#/c/726186/ [2] https://github.com/pypa/pip/issues/8210 From fungi at yuggoth.org Sun May 10 17:14:45 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 10 May 2020 17:14:45 +0000 Subject: [requirements][qa] new pip resolver & our constraints needs In-Reply-To: <0D6BF508-39F0-4C6E-A5D2-E42327302485@doughellmann.com> References: <0D6BF508-39F0-4C6E-A5D2-E42327302485@doughellmann.com> Message-ID: <20200510171445.lotwj2ngye5ti466@yuggoth.org> On 2020-05-10 12:39:46 -0400 (-0400), Doug Hellmann wrote: [...] > Is anyone aware of any uses of URLs in constraints files anywhere > within OpenStack? [...] 
Granted it's only indexing master branches, but this quick search shows no occurrences: http://codesearch.openstack.org/?q=https%3F%3A&i=nope&files=lower-constraints.txt -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jungleboyj at gmail.com Sun May 10 17:52:40 2020 From: jungleboyj at gmail.com (Jay S. Bryant) Date: Sun, 10 May 2020 12:52:40 -0500 Subject: Uniting TC and UC In-Reply-To: <6f8571bf-0d23-8a3b-1225-8c2bdd95285d@openstack.org> References: <6f8571bf-0d23-8a3b-1225-8c2bdd95285d@openstack.org> Message-ID: On 5/7/2020 11:00 AM, Thierry Carrez wrote: > Mohamed Elsakhawy wrote: >> As you may know already, there has been an ongoing discussion to >> merge UC and TC under a single body. Three options were presented, >> along with their impact on the bylaws. >> >> http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012806.html >> >> >> We had several discussions in the UC on the three options as well as >> the UC items that need to be well-represented under the common >> committee, and here’s what we propose for the merger: >> [...] > > Thanks Mohamed for the detailed plan. With Jeremy's caveats, it sounds > good to me. I'm happy to assist by filing the necessary changes in the > various governance repositories. > >> [...] >> In addition to discussions over the mailing list, we also have the >> opportunity of "face-to-face" discussions at the upcoming PTG. > > I added this topic to the etherpad at: > https://etherpad.opendev.org/p/tc-victoria-ptg > I agree.  Thanks Mohamed for putting this together.  It is consistent with the plan we discussed earlier in the year. Thanks for adding it to the Etherpad ttx. 
Jay From reza.b2008 at gmail.com Sun May 10 18:04:01 2020 From: reza.b2008 at gmail.com (Reza Bakhshayeshi) Date: Sun, 10 May 2020 22:34:01 +0430 Subject: [TripleO] External network on compute node In-Reply-To: <98236409f2ea0a5f0a7bffd3eb1054ef5399ba69.camel@redhat.com> References: <98236409f2ea0a5f0a7bffd3eb1054ef5399ba69.camel@redhat.com> Message-ID: Hi Harald, Thanks for your explanation. ' /usr/share/openstack-tripleo-heat-templates/ ' was just happened during copy-pasting here :) I exported the overcloud plan, and it was exactly same as what I sent before. I redeployed everything and it didn't help either. What are the problems of my network-isolation.yaml and network-environment.yaml files in your opinion? I have to add that in this environment I don't know why the external network doesn't provide Internet for VMs. But everything else works fine. I don't have any Vlan configured in my environment and I'm planning to only have flat and geneve networks, and having external network for every compute node so I can ignore provisioning network bottleneck and spof. Regards, Reza On Wed, 6 May 2020 at 17:10, Harald Jensås wrote: > On Wed, 2020-05-06 at 14:00 +0430, Reza Bakhshayeshi wrote: > > here is my deploy command: > > > > openstack overcloud deploy \ > > --control-flavor control \ > > --compute-flavor compute \ > > --templates ~/openstack-tripleo-heat-templates \ > > -r /home/stack/roles_data.yaml \ > > -e /home/stack/containers-prepare-parameter.yaml \ > > -e environment.yaml \ > > -e /usr/share/openstack-tripleo-heat- > > templates/environments/services/octavia.yaml \ > > This is not related, but: > Why use '/usr/share/openstack-tripleo-heat-templates/' and not > '~/openstack-tripleo-heat-templates/' here? 
> > > -e ~/openstack-tripleo-heat-templates/environments/services/neutron- > > ovn-dvr-ha.yaml \ > > -e ~/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ > > -e ~/openstack-tripleo-heat-templates/environments/network- > > isolation.yaml \ > > -e ~/openstack-tripleo-heat-templates/environments/network- > > environment.yaml \ > > Hm, I'm not sure network-isolation.yaml and network-environment.yaml > contains what you expect. Can you do a plan export? > > openstack overcloud plan export --output-file oc-plan.tar.gz overcloud > > Then have a look at `environments/network-isolation.yaml` and > `environments/network-environment.yaml` in the plan? > > I think you may want to copy these two files out of the templates tree > and use the out of tree copies instead. > > > --timeout 360 \ > > --ntp-server time.google.com -vvv > > > > network-environment.yaml: > > http://paste.openstack.org/show/793179/ > > > > network-isolation.yaml: > > http://paste.openstack.org/show/793181/ > > > > compute-dvr.yaml > > http://paste.openstack.org/show/793183/ > > > > I didn't modify network_data.yaml > > > > > > > -- > Harald > > > On Wed, 6 May 2020 at 05:27, Harald Jensås > > wrote: > > > On Tue, 2020-05-05 at 23:25 +0430, Reza Bakhshayeshi wrote: > > > > Hi all. > > > > The default way of compute node for accessing Internet if through > > > > undercloud. > > > > I'm going to assign an IP from External network to each compute > > > node > > > > with default route. > > > > But the deployment can't assign an IP to br-ex and fails with: > > > > > > > > " raise AddrFormatError('invalid IPNetwork %s' > > > % > > > > addr)", > > > > "netaddr.core.AddrFormatError: invalid IPNetwork > > > ", > > > > > > > > Actually 'ip_netmask': '' is empty during deployment for compute > > > > nodes. 
> > > > I've added external network to compute node role: > > > > External: > > > > subnet: external_subnet > > > > > > > > and for network interface: > > > > - type: ovs_bridge > > > > name: bridge_name > > > > mtu: > > > > get_param: ExternalMtu > > > > dns_servers: > > > > get_param: DnsServers > > > > use_dhcp: false > > > > addresses: > > > > - ip_netmask: > > > > get_param: ExternalIpSubnet > > > > routes: > > > > list_concat_unique: > > > > - get_param: ExternalInterfaceRoutes > > > > - - default: true > > > > next_hop: > > > > get_param: > > > ExternalInterfaceDefaultRoute > > > > members: > > > > - type: interface > > > > name: nic3 > > > > mtu: > > > > get_param: ExternalMtu > > > > use_dhcp: false > > > > primary: true > > > > > > > > Any suggestion would be grateful. > > > > Regards, > > > > Reza > > > > > > > > > > I think we need more information to see what the issue is. > > > - your deploy command? > > > - content of network_data.yaml used (unless the default) > > > - environment files related to network-isolation, network- > > > environment, > > > network-isolation? > > > > > > > > > -- > > > Harald > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Sun May 10 21:55:19 2020 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 10 May 2020 16:55:19 -0500 Subject: [neutron] Bug deputy report week of May 4 - 10 Message-ID: Medium: - https://bugs.launchpad.net/neutron/+bug/1876752 neutron-ovn-db-sync-util fails with KeyError: 'ovn'. Proposed fix https://review.opendev.org/#/c/725309 - https://bugs.launchpad.net/neutron/+bug/1877146 neutron-dynamic-plugin tempest jobs fail on stable/train branch. 
Proposed fix: https://review.opendev.org/#/c/725920/ - https://bugs.launchpad.net/neutron/+bug/1877296 Tunnel ports are not cleaned up with several ovs agents restart - https://bugs.launchpad.net/neutron/+bug/1877377 [OVN] neutron-ovn-tempest-ovs-master-fedora periodic job is failing - https://bugs.launchpad.net/neutron/+bug/1877404 Add "qos_policy_id" field to "FloatingIP" OVO. Proposed fix: https://review.opendev.org/#/c/726208/ - https://bugs.launchpad.net/neutron/+bug/1877408 Implement FIP QoS in OVN backend - https://bugs.launchpad.net/neutron/+bug/1877447 Add TCP/UDP port forwarding extension to OVN - https://bugs.launchpad.net/neutron/+bug/1877560 Optimize "QosPolicy" OVO bound objects retrieve methods. Proposed fix https://review.opendev.org/#/c/726358/ - https://bugs.launchpad.net/neutron/+bug/1877254 neutron agent. list API lacks sort and page feature Low: - https://bugs.launchpad.net/neutron/+bug/1876898 L3 agent should reports enabled extensions to the server. Proposed fix: https://review.opendev.org/#/c/725532 - https://bugs.launchpad.net/neutron/+bug/1877195 [ovn] neutron devstack needs to support openflow15. Proposed fix: https://review.opendev.org/#/c/726010/ Opinion: - https://bugs.launchpad.net/neutron/+bug/1877248 Rabbitmq cluster split brain causes l2 population related flows to be cleared when ovs-neutron-agent is restarted RFEs: - https://bugs.launchpad.net/neutron/+bug/1877301 [RFE] L3 Router support ndp proxy -------------- next part -------------- An HTML attachment was scrubbed... 
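The constraints mechanism Doug and Jeremy discuss in this thread can be illustrated with a toy model. This is only an editor's sketch of the semantics of `pip install -r requirements.txt -c upper-constraints.txt`, not pip's real resolver, and the package names and versions below are illustrative rather than taken from the real upper-constraints.txt: a constraint pins the version of a package if something requests that package, but never causes an install by itself.

```python
def apply_constraints(requested, constraint_lines):
    """Toy model of pip's '-c' behaviour: pin requested packages only.

    Editor's sketch, not pip's actual resolver. Constraint entries use
    the 'name===version' form found in OpenStack's upper-constraints.txt.
    """
    pins = {}
    for line in constraint_lines:
        name, _, version = line.partition("===")
        pins[name.lower()] = version
    # A constraint only restricts packages that are actually requested;
    # constrained-but-unrequested packages are never installed because
    # of the constraint alone.
    return {name: pins.get(name.lower()) for name in requested}

# Illustrative entries only, not real upper-constraints.txt content.
constraints = ["oslo.config===8.0.2", "PyYAML===5.3.1", "requests===2.23.0"]
print(apply_constraints(["oslo.config", "pyyaml"], constraints))
# -> {'oslo.config': '8.0.2', 'pyyaml': '5.3.1'}
```

Note that `requests` is constrained but not requested, so it does not appear in the result; this is why a URL-only ("nameless") entry in a constraints file is an odd fit, which is exactly the question the pip maintainers raised.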
URL: From mthode at mthode.org Sun May 10 21:57:46 2020 From: mthode at mthode.org (Matthew Thode) Date: Sun, 10 May 2020 16:57:46 -0500 Subject: [requirements][qa] new pip resolver & our constraints needs In-Reply-To: <20200510171445.lotwj2ngye5ti466@yuggoth.org> References: <0D6BF508-39F0-4C6E-A5D2-E42327302485@doughellmann.com> <20200510171445.lotwj2ngye5ti466@yuggoth.org> Message-ID: <20200510215746.curjfniv3o4i6erl@mthode.org> On 20-05-10 17:14:45, Jeremy Stanley wrote: > On 2020-05-10 12:39:46 -0400 (-0400), Doug Hellmann wrote: > [...] > > Is anyone aware of any uses of URLs in constraints files anywhere > > within OpenStack? > [...] > > Granted it's only indexing master branches, but this quick search > shows no occurrences: > > http://codesearch.openstack.org/?q=https%3F%3A&i=nope&files=lower-constraints.txt > I think we disallow it in the requirements-check job. Yep, parse_urls is set to false by default. openstack_requirements/requirement.py def parse_line(req_line, permit_urls=False): """Parse a single line of a requirements file. requirements files here are a subset of pip requirements files: we don't try to parse URL entries, or pip options like -f and -e. Those are not permitted in global-requirements.txt. If encountered in a synchronised file such as requirements.txt or test-requirements.txt, they are illegal but currently preserved as-is. They may of course be used by local test configurations, just not committed into the OpenStack reference branches. :param permit_urls: If True, urls are parsed into Requirement tuples. By default they are not, because they cannot be reflected into setuptools kwargs, and thus the default is conservative. When urls are permitted, -e *may* be supplied at the start of the line. """ -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
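The parse_line() docstring Matthew quotes above distinguishes plain name entries from URL entries, which global-requirements.txt forbids by default. The following is a hypothetical, much-simplified classifier written for illustration only, not the actual openstack_requirements code; the "nameless" entries from the pip resolver discussion are exactly the URL lines that carry no #egg= name fragment:

```python
import re

def classify_requirement_line(line):
    """Classify one line of a requirements/constraints file.

    Hypothetical simplification of openstack_requirements' parse_line():
    the real parser also handles environment markers, extras, comments
    and pip options with far more care.
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return "ignored"
    # URL entries, optionally editable (-e). Without an #egg= fragment
    # there is no package name, i.e. the entry is "nameless".
    if line.startswith("-e ") or re.match(r"(git\+)?https?://", line):
        return "url-named" if "#egg=" in line else "url-nameless"
    return "named"

for entry in [
    "oslo.config===8.0.2",
    "-e git+https://github.com/openstack/neutron.git@master#egg=neutron",
    "https://example.org/pkgs/foo.tar.gz",
]:
    print(classify_requirement_line(entry))
# -> named, url-named, url-nameless
```

This matches the codesearch result earlier in the thread: upper-constraints.txt contains only "named" lines, while the URL forms show up in requirements and test-requirements files.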
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From amotoki at gmail.com Mon May 11 04:21:08 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 11 May 2020 13:21:08 +0900 Subject: [horizon] [infra] horizon stable gate failure: yarn is required? Message-ID:

Hi,

nodejs4 jobs in stein and older stable branches in horizon are now failing [1]. The symptom is that "yarn --version" fails during ensure-yarn [2]. It looks like yarn now requires nodejs 10 or later and does not support nodejs4. The failure happens in the jobs on ubuntu-xenial nodes. When I tried nodejs10 in stable/stein, the nodejs4 jobs (lint/run) succeeded [3].

My question is whether we need to install yarn in the pre-run. horizon does not require yarn. All nodejs jobs inherit the nodejs-npm job defined in zuul/zuul-jobs, and the pre-run playbook [4] installs yarn. I am not sure when we started to require yarn and who requires yarn.

Generally speaking, I don't think it is a good idea to switch the nodejs version in these older stable branches. However, horizon and horizon plugins use nodejs only for testing, so if switching to nodejs10 would solve the failure completely, it might be an option too.

Thoughts?

-amotoki

[1] https://review.opendev.org/#/q/topic:health-check+status:open+project:openstack/horizon+branch:%255Estable/.*
[2] https://zuul.opendev.org/t/openstack/build/72317c21cf7b4a1a885b9a26c5ae4c2d/console
[3] https://review.opendev.org/#/c/726702/
[4] https://opendev.org/zuul/zuul-jobs/src/branch/master/playbooks/javascript/pre.yaml

From amotoki at gmail.com Mon May 11 04:41:51 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 11 May 2020 13:41:51 +0900 Subject: [horizon][qa][infra] horizon master gate failure Message-ID:

Hi,

horizon master branch is now failing due to a permission error during installation of tempest-horizon into the tempest venv [1]. This has been happening since May 9.
The source tempest-horizon repo is located in /home/zuul/src/opendev.org/openstack/tempest-horizon/. I dug into the details for a while but haven't figured out the root cause yet.

I have the following questions on this:

(1) Did we have any changes on the job configuration recently? stable/ussuri and master branches are similar, but it only happens in the master branch.

(2) When I changed the location of the tempest-horizon directory specified as tempest-plugins, the failure went away [2]. Is there any recommendation from the tempest team?

Thanks,
-amotoki

[1] https://zuul.opendev.org/t/openstack/build/9389be57c39d4971867abfa831bd3793
[2] https://review.opendev.org/#/c/726698/1/.zuul.yaml

From radoslaw.piliszek at gmail.com Mon May 11 07:42:07 2020 From: radoslaw.piliszek at gmail.com (Radosław Piliszek) Date: Mon, 11 May 2020 09:42:07 +0200 Subject: [horizon] [infra] horizon stable gate failure: yarn is required? In-Reply-To: References: Message-ID: <78187bf0-7ea9-fe84-2885-759e68fa8128@gmail.com>

Hi Akihiro,

from my PoV yarn should be installed by projects that require it, especially since it's not baked into the instance image but tried to be installed each time (and hence causes these late failures).

Since the jobs are now broken anyway, I think removing yarn from the pre play is the right move. Obviously, ensure-yarn has to be fixed to be useful for those parties who require yarn and older nodejs. And "those parties" in case of OpenStack seem to be only RefStack, which means there is only one project to adapt then.

-yoctozepto

On 2020-05-11 06:21, Akihiro Motoki wrote:
> Hi,
>
> nodejs4 jobs in stein and older stable branches in horizon are now failing [1].
> The symptom is that "yarn --version" fails during ensure-yarn [2].
>
> It looks like that yarn now requires nodejs 10 or later and does not
> support nodejs4.
> The failure happens in the jobs on ubuntu-xenial nodes.
> When I try nodejs10 in stable/stein, the nodejs4 jobs (lint/run) succeeded [3]. > > My question is whether we need to install yarn in the pre-run? > horizon does not require yarn. > All nodejs jobs inherit nodejs-npm job defined in zuul/zuul-jobs and > the pre-run playbook [4] installs yarn. > I am not sure when we started to require yarn and who requires yarn. > > Generally speaking, I don't think it is a good idea to switch the > nodejs version in these older stable branches. > horizon and horizon plugins use nodejs only for testing, so if > switching to nodejs10 would solve the failure completely, it might be > an option too though. > > Thought? > > -amotoki > > [1] https://review.opendev.org/#/q/topic:health-check+status:open+project:openstack/horizon+branch:%255Estable/.* > [2] https://zuul.opendev.org/t/openstack/build/72317c21cf7b4a1a885b9a26c5ae4c2d/console > [3] https://review.opendev.org/#/c/726702/ > [4] https://opendev.org/zuul/zuul-jobs/src/branch/master/playbooks/javascript/pre.yaml > From arne.wiebalck at cern.ch Mon May 11 07:49:34 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 11 May 2020 09:49:34 +0200 Subject: [baremetal-sig][ironic] Baremetal whitepaper session 3 In-Reply-To: <44993048-bb11-3241-ce89-515251f001ab@cern.ch> References: <44993048-bb11-3241-ce89-515251f001ab@cern.ch> Message-ID: <3abf101a-f510-5cc2-cbbc-a03f908cd7f5@cern.ch> Dear all, The next white paper session is now scheduled for tomorrow, May 12th at 2pm UTC. I've set up a meeting here: https://cern.zoom.us/j/94248770580 Everyone is welcome to join. See you tomorrow! Arne On 06.05.20 19:50, Arne Wiebalck wrote: > Dear all, > > The white paper is taking shape ... but still needs a little > more work. 
If you want to help and join the session(s) to > discuss the current state and the next steps, please reply > to the doodle before the end of this week: > > https://doodle.com/poll/zcprxptw6nk4diq7 > > Like last time, I will send out the call details once we have > settled on the time slot(s). > > Thanks! >  Arne > > -- > Arne Wiebalck > CERN IT > From ltoscano at redhat.com Mon May 11 08:47:28 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Mon, 11 May 2020 10:47:28 +0200 Subject: [horizon][qa][infra] horizon master gate failure In-Reply-To: References: Message-ID: <4887479.GXAFRqVoOG@whitebase.usersys.redhat.com> On Monday, 11 May 2020 06:41:51 CEST Akihiro Motoki wrote: > Hi, > > horizon master branch is now failing due to permission defined during > installing tempest-horizon into the tempest venv [1]. > It happens since May 9. > > The source tempest-horizon repo is located in > /home/zuul/src/opendev.org/openstack/tempest-horizon/. > I digged into the detail for a while but haven't figured out the root cause > yet. > > I have the following questions on this: > > (1) Did we have any changes on the job configuration recently? > stable/ussuri and master branches are similar but it only happens in > the master branch. I've seen that in another project (ironic) as well, and it was fixed by using tempest_plugins. > > (2) When I changed the location of the tempest-horizon directory > specified as tempest-plugins, > the failure has gone [2]. Is there any recommendation from the tempest team? As you can see from the logs, tempest_plugins translates that value to: TEMPEST_PLUGINS="/opt/stack/tempest-horizon" which is the location where devstack expects to find the code, and where the deployment copies it, as prepared by the setup-devstack-source-dirs role. I can't pinpoint what happened exactly (something in zuul maybe) but that path currently used isn't the expected path anyway. 
As you are going to change it, tempest_plugins is the recommended way (and it works from pike onwards, in case you need to backport it). Ciao -- Luigi From tkajinam at redhat.com Mon May 11 12:03:00 2020 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 11 May 2020 21:03:00 +0900 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> Message-ID: Hi, IIUC the most important reason behind puppet 5 removal is that puppet 5 is EOLed soon, this month. https://puppet.com/docs/puppet/latest/about_agent.html As you know puppet-openstack has some external dependencies, this can cause the problem with our support for puppet 5. For example if any dependencies remove their compatibility with puppet 5, we should pin all of them to keep puppet 5 tests running. This is the biggest concern I know about keeping puppet 5 support. While it makes sense to use puppet 5 for existing stable branches from a stable management perspective, I don't think it's actually reasonable to extend support for EOLed stuff in master development with possibly adding pins to old modules. IMO we can delay the actual removal a bit until puppet 6 gets ready in Debian, but I'd like to hear some actual plans to have puppet 6 available in Debian so that we can expect short gap about puppet 5 eol timing, between puppet-openstack and puppet itself. Thank you, Takashi On Sun, May 10, 2020 at 8:14 AM Thomas Goirand wrote: > On 5/10/20 12:56 AM, Thomas Goirand wrote: > > On 5/9/20 8:52 PM, Tobias Urdin wrote: > >> Hello, > >> > >> I don't agree, we should continue on the chosen path of not supporting > Puppet 5 > >> in the Victoria release. 
> >> > >> We've had Puppet 6 support since I introduced the testing for it in > 2018 back then we ran > >> Puppet 5 and Puppet 6 on every commit until we deemed it pretty > redundant and moved > >> Puppet 6 jobs to experimental while keeping the Puppet 6 syntax and > unit jobs. > >> > >> We've never claimed that Puppet OpenStack is going to support > downstream OS repackaging of > >> Puppet, even though RDO/TripleO does the same we've always tested > Puppet with upstream > >> versions for our testing, only Debian has skipped that and testing with > downstream packages. > > > > I don't understand why you insist that we shouldn't use downstream > > distribution packages. I haven't heard that the project claimed that we > > are "support[ing] downstream OS repackaging of Puppet", but I haven't > > heard that we aren't either, or even any preference in this regard. This > > I miss this information somewhere? Did someone even write this > > somewhere? Or is this only your own view? > > > > One thing is that the Debian packages for Puppet are of better quality > > than the upstream ones in many ways. There's also the problem that > > adding an external artifact is *not* what my project is about (see > below). > > One more thing: puppetlabs is only providing packages for the current > stable distribution of Debian (whatever that is), never for testing or > sid, and that's a perfectly valid environment where I sometimes test > deployments. So if we get incompatible with Puppet 5, this also break > this use case, currently. > > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Mon May 11 12:30:00 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 11 May 2020 13:30:00 +0100 Subject: [requirements][qa] new pip resolver & our constraints needs In-Reply-To: <0D6BF508-39F0-4C6E-A5D2-E42327302485@doughellmann.com> References: <0D6BF508-39F0-4C6E-A5D2-E42327302485@doughellmann.com> Message-ID: On Sun, 2020-05-10 at 12:39 -0400, Doug Hellmann wrote: > The PyPA team is working on a new resolver for pip. As part of that work, they have had some questions about the way > OpenStack uses the constraints feature. They’ve been great about taking input, and prometheanfire has been doing some > testing to ensure the new work doesn’t break compatibility [1] (thanks Matthew!). > > There is a new question from the pip maintainers about whether constraints need to support “nameless” entries (by > referring to a URL in a constraints file instead of using package names) [2]. I don’t see anything in the upper- > constraints.txt that looks like a URL, but I don’t know how teams might be configuring their lower-constraints.txt or > whether we do anything in devstack to munge the constraints list to point to local packages as part of LIBS_FROM_GIT > handling. > > Is anyone aware of any uses of URLs in constraints files anywhere within OpenStack? yes we sometimes have urls to git repos. they are not in constraits files but the are in test-requiremente or requirements.txt its not the best example but networking-ovs-dpdk has neutron for several reasons https://opendev.org/x/networking-ovs-dpdk/src/branch/master/test-requirements.txt#L14 -e git+https://github.com/openstack/neutron.git at master#egg=neutron in that url i have @master which can be a brach,tag or commit so you could use the same to tack an unreleased version fo an external depncy in a constraits file. 
im not sure if we require this for anythign i know in white box we had to also use the url syntax cor crundini and iniparse because the python 3 version were not released on pypi yet https://opendev.org/x/whitebox-tempest-plugin/commit/3ef1dded7d18eeec48e75c715df73da09237c965 if it was not packaged on pypi ever and we supported lower constratins... then i can see us using git+https://github.com/pixelb/crudini.git at 0.9.3#egg=crudini to be our lower constraint. right now we are installing master as they still have not pushed 0.9.3 to pypi https://github.com/pixelb/crudini/issues/58 this is the only why i think it would be useful to have support for the urls in constraits. > Doug > > [1] https://review.opendev.org/#/c/726186/ > [2] https://github.com/pypa/pip/issues/8210 > From doug at doughellmann.com Mon May 11 12:48:37 2020 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 11 May 2020 08:48:37 -0400 Subject: [requirements][qa] new pip resolver & our constraints needs In-Reply-To: References: <0D6BF508-39F0-4C6E-A5D2-E42327302485@doughellmann.com> Message-ID: > On May 11, 2020, at 8:30 AM, Sean Mooney wrote: > > On Sun, 2020-05-10 at 12:39 -0400, Doug Hellmann wrote: >> The PyPA team is working on a new resolver for pip. As part of that work, they have had some questions about the way >> OpenStack uses the constraints feature. They’ve been great about taking input, and prometheanfire has been doing some >> testing to ensure the new work doesn’t break compatibility [1] (thanks Matthew!). >> >> There is a new question from the pip maintainers about whether constraints need to support “nameless” entries (by >> referring to a URL in a constraints file instead of using package names) [2]. 
I don’t see anything in the upper- >> constraints.txt that looks like a URL, but I don’t know how teams might be configuring their lower-constraints.txt or >> whether we do anything in devstack to munge the constraints list to point to local packages as part of LIBS_FROM_GIT >> handling. >> >> Is anyone aware of any uses of URLs in constraints files anywhere within OpenStack? > yes we sometimes have urls to git repos. > they are not in constraits files but the are in test-requiremente or requirements.txt OK, the question is specifically about whether unnamed dependencies listed in *constraints* are somehow expected to work. Having URLs in requirements lists isn’t an issue. jrosser pointed out on IRC that openstack-ansible uses URLs with egg= so I’ve suggested describing that upstream in case it’s relevant. > > its not the best example but networking-ovs-dpdk has neutron for several reasons > https://opendev.org/x/networking-ovs-dpdk/src/branch/master/test-requirements.txt#L14 > > -e git+https://github.com/openstack/neutron.git at master#egg=neutron > > in that url i have @master which can be a brach,tag or commit > so you could use the same to tack an unreleased version fo an external depncy in a constraits file. > > im not sure if we require this for anythign > > i know in white box we had to also use the url syntax cor crundini and iniparse > because the python 3 version were not released on pypi yet > > https://opendev.org/x/whitebox-tempest-plugin/commit/3ef1dded7d18eeec48e75c715df73da09237c965 > > if it was not packaged on pypi ever and we supported lower constratins... then i can see us using > git+https://github.com/pixelb/crudini.git at 0.9.3#egg=crudini to be our lower constraint. > > right now we are installing master as they still have not pushed 0.9.3 to pypi > https://github.com/pixelb/crudini/issues/58 > > this is the only why i think it would be useful to have support for the urls in constraits. 
It sounds like this might be another case to describe on that pypa GitHub issue. > >> Doug >> >> [1] https://review.opendev.org/#/c/726186/ >> [2] https://github.com/pypa/pip/issues/8210 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon May 11 13:33:37 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 May 2020 13:33:37 +0000 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> Message-ID: <20200511133336.7thn67uxfp63m4vi@yuggoth.org> On 2020-05-11 21:03:00 +0900 (+0900), Takashi Kajinami wrote: > IIUC the most important reason behind puppet 5 removal is that puppet 5 > is EOLed soon, this month. > https://puppet.com/docs/puppet/latest/about_agent.html [...] That information seems to conflict with https://puppet.com/docs/puppet-enterprise/product-support-lifecycle/ which indicates that PE 2018.1 enters "extended support" this month, and reaches "end of life" in November. But either way, it's not got very long, you're right. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From abhardwaj at definitionnetworks.com Mon May 11 13:48:14 2020 From: abhardwaj at definitionnetworks.com (Amit Bhardwaj) Date: Mon, 11 May 2020 19:18:14 +0530 Subject: [SENLIN] Senlin Installation Help Message-ID: Hello group! I am stuck at installation and verification of Senlin for Stein release on Ubuntu 18.04.. I had raised a request on ask.openstack.org too. Kindly have a look and suggest further. 
https://ask.openstack.org/en/question/127499/senlin-installation-problems/ Regards, Amit Bhardwaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon May 11 15:37:20 2020 From: zigo at debian.org (Thomas Goirand) Date: Mon, 11 May 2020 17:37:20 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> Message-ID: <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> On 5/11/20 2:03 PM, Takashi Kajinami wrote: > Hi, > > > IIUC the most important reason behind puppet 5 removal is that puppet 5 > is EOLed soon, this month. >  https://puppet.com/docs/puppet/latest/about_agent.html > > As you know puppet-openstack has some external dependencies, this can > cause the problem with our support for puppet 5. > For example if any dependencies remove their compatibility with puppet 5, > we should pin all of them to keep puppet 5 tests running. > This is the biggest concern I know about keeping puppet 5 support. > > While it makes sense to use puppet 5 for existing stable branches from a > stable > management perspective, I don't think it's actually reasonable to extend > support > for EOLed stuff in master development with possibly adding pins to old > modules. > IMO we can delay the actual removal a bit until puppet 6 gets ready in > Debian, > but I'd like to hear some actual plans to have puppet 6 available in Debian > so that we can expect short gap about puppet 5 eol timing, between > puppet-openstack > and puppet itself. > > Thank you, > Takashi Thank you, a bit more time, is the only thing I was asking for! 
About the plan for packaging Puppet 6 in Debian: I don't know yet, as one will have to do the work, and that's probably going to be me, since nobody is volunteering... :( Now, about dependencies: if supporting Puppet 5 gets in the way of using a newer dependency, then I suppose we can try to manage this when it happens. Worst case: forget about Puppet 5 if we get into such a bad situation. Until we're there, let's hope it doesn't happen too soon. I can tell you when I know more about the amount of work that there is to do. At the moment, it's still a bit blurry to me. Cheers, Thomas Goirand (zigo) From mordred at inaugust.com Mon May 11 15:54:38 2020 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 11 May 2020 10:54:38 -0500 Subject: [horizon] [infra] horizon stable gate failure: yarn is required? In-Reply-To: <78187bf0-7ea9-fe84-2885-759e68fa8128@gmail.com> References: <78187bf0-7ea9-fe84-2885-759e68fa8128@gmail.com> Message-ID: > On May 11, 2020, at 2:42 AM, Radosław Piliszek wrote: > > Hi Akihiro, > > from my PoV yarn should be installed by projects that require it, especially since it's not baked into the instance image but tried to be installed each time (and hence causes these late failures). > > Since the jobs are now broken anyway, I think removing yarn from pre play is the right move. > Obviously, ensure-yarn has to be fixed to be useful for those parties who require yarn and older nodejs. > And "those parties" in case of OpenStack seem to be only RefStack, which means there is only one project to adapt then. What good timing - I was actually working on updated javascript jobs over the weekend: https://review.opendev.org/726547 I think we can update that to also only install yarn when it’s going to use yarn. I’ll make an update - and then I think if we update horizon to the new jobs this should be no problem.
> -yoctozepto > > On 2020-05-11 06:21, Akihiro Motoki wrote: >> Hi, >> nodejs4 jobs in stein and older stable branches in horizon are now failing [1]. >> The symptom is that "yarn --version" fails during ensure-yarn [2]. >> It looks like that yarn now requires nodejs 10 or later and does not >> support nodejs4. >> The failure happens in the jobs on ubuntu-xenial nodes. >> When I try nodejs10 in stable/stein, the nodejs4 jobs (lint/run) succeeded [3]. >> My question is whether we need to install yarn in the pre-run? >> horizon does not require yarn. >> All nodejs jobs inherit nodejs-npm job defined in zuul/zuul-jobs and >> the pre-run playbook [4] installs yarn. >> I am not sure when we started to require yarn and who requires yarn. >> Generally speaking, I don't think it is a good idea to switch the >> nodejs version in these older stable branches. >> horizon and horizon plugins use nodejs only for testing, so if >> switching to nodejs10 would solve the failure completely, it might be >> an option too though. >> Thought? >> -amotoki >> [1] https://review.opendev.org/#/q/topic:health-check+status:open+project:openstack/horizon+branch:%255Estable/.* >> [2] https://zuul.opendev.org/t/openstack/build/72317c21cf7b4a1a885b9a26c5ae4c2d/console >> [3] https://review.opendev.org/#/c/726702/ >> [4] https://opendev.org/zuul/zuul-jobs/src/branch/master/playbooks/javascript/pre.yaml > > From mordred at inaugust.com Mon May 11 16:19:45 2020 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 11 May 2020 11:19:45 -0500 Subject: [horizon] [infra] horizon stable gate failure: yarn is required? 
In-Reply-To: References: <78187bf0-7ea9-fe84-2885-759e68fa8128@gmail.com> Message-ID: > On May 11, 2020, at 10:54 AM, Monty Taylor wrote: > > > >> On May 11, 2020, at 2:42 AM, Radosław Piliszek wrote: >> >> Hi Akihiro, >> >> from my PoV yarn should be installed by projects that require it, especially since it's not baked into the instance image but tried to be installed each time (and hence causes these late failures). >> >> Since the jobs are now broken anyway, I think removing yarn from pre play is the right move. >> Obviously, ensure-yarn has to be fixed to be useful for those parties who require yarn and older nodejs. >> And "those parties" in case of OpenStack seem to be only RefStack, which means there is only one project to adapt then. > > What good timing - I was actually working on updated javascript jobs over the weekend: > > https://review.opendev.org/726547 > > I think we can update that to also only install yarn when it’s going to use yarn, I’ll make an update - and then I think if we update horizon to the new jobs this should be no problem. Updated - maybe a patch to horizon on the v4 stable branches with a depends-on that patch updating to the new jobs would validate that the new jobs fix the issue? > >> -yoctozepto >> >> On 2020-05-11 06:21, Akihiro Motoki wrote: >>> Hi, >>> nodejs4 jobs in stein and older stable branches in horizon are now failing [1]. >>> The symptom is that "yarn --version" fails during ensure-yarn [2]. >>> It looks like that yarn now requires nodejs 10 or later and does not >>> support nodejs4. >>> The failure happens in the jobs on ubuntu-xenial nodes. >>> When I try nodejs10 in stable/stein, the nodejs4 jobs (lint/run) succeeded [3]. >>> My question is whether we need to install yarn in the pre-run? >>> horizon does not require yarn. >>> All nodejs jobs inherit nodejs-npm job defined in zuul/zuul-jobs and >>> the pre-run playbook [4] installs yarn. >>> I am not sure when we started to require yarn and who requires yarn. 
>>> Generally speaking, I don't think it is a good idea to switch the >>> nodejs version in these older stable branches. >>> horizon and horizon plugins use nodejs only for testing, so if >>> switching to nodejs10 would solve the failure completely, it might be >>> an option too though. >>> Thought? >>> -amotoki >>> [1] https://review.opendev.org/#/q/topic:health-check+status:open+project:openstack/horizon+branch:%255Estable/.* >>> [2] https://zuul.opendev.org/t/openstack/build/72317c21cf7b4a1a885b9a26c5ae4c2d/console >>> [3] https://review.opendev.org/#/c/726702/ >>> [4] https://opendev.org/zuul/zuul-jobs/src/branch/master/playbooks/javascript/pre.yaml >> >> > > > From cboylan at sapwetik.org Mon May 11 17:34:52 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 11 May 2020 10:34:52 -0700 Subject: Tox basepython and Python3 Message-ID: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> Hello everyone, This has come up a few times on IRC so we are probably well overdue for an email about it. Long story short, if your tox.ini config sets basepython [0] to `python3` and you also use py35, py36, py37, or py38 test targets there is a good chance you are not testing what you intend to be testing. You also need to set ignore_basepython_conflict to true [1]. The reason for this is basepython acts as an override for the python executable to be used when creating tox virtualenvs. `python3` on most platforms indicates a specific python3 version: Ubuntu Xenial 3.5, Ubuntu Bionic and CentOS 8 3.6, and so on. This means that even though you are asking for python 3.7 via py37 you'll get whatever version `python3` is on the running platform. To address this we can set ignore_basepython_conflict and tox will use the version specified as part of the target and not the basepython override. You might wonder why using basepython is useful at all given this situation.
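To make that combination concrete, a minimal tox.ini using both settings might look like the following; the env list, deps, and commands here are illustrative placeholders, not taken from any particular project:

```ini
[tox]
# Illustrative version list; keep whatever targets your project tests.
envlist = py36,py37,py38,pep8
# Let the pyXY in the env name win over the basepython override below,
# so `tox -e py37` really builds its virtualenv with python3.7.
ignore_basepython_conflict = true

[testenv]
# Default everything to python3 so tox never falls back to the python2
# interpreter it may itself be running under.
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
```

With ignore_basepython_conflict left unset, every env in this sketch would silently use whatever `python3` resolves to on the host, which is exactly the mismatch described above.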
The reason for it is the default python used by tox for virtualenvs is the version of python tox was installed under. This means that if tox is running under python2 it will use python2 for virtualenvs when no other version is set. Since many software projects are now trying to drop python2 support they want to explicitly force python3 in the default case. basepython gets us halfway there, ignore_basepython_conflict the rest of the way. [0] https://tox.readthedocs.io/en/latest/config.html#conf-basepython [1] https://tox.readthedocs.io/en/latest/config.html#conf-ignore_basepython_conflict Hopefully this helps explain some of tox's odd behavior in a beneficial way. Now go and check your tox.ini files :) Clark From fungi at yuggoth.org Mon May 11 18:20:25 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 May 2020 18:20:25 +0000 Subject: Tox and missing interpreters (was: Tox basepython and Python3) In-Reply-To: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> References: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> Message-ID: <20200511182024.euqmssxm2gtwg5ch@yuggoth.org> On a related note, the proliferation of tested Python versions has led many projects to enable the skip_missing_interpreters option in their tox.ini files. Please don't, this is just plain DANGEROUS. I know it's nice that when you've got a bunch of Python versions in your default tox envlist some of which a typical developer may not have installed, they can still run `tox` and not get errors about those. However, it also means that if you run `tox -e py38` and don't have any Python 3.8 interpreter, tox will happily say it did nothing successfully. Yes it's fairly obvious when you see it happen locally. It's far less obvious when you add a Python 3.8 job in the gate but don't make sure that interpreter is actually installed then and get back a +1 from Zuul when tox ran no tests at all. 
An alternative solution, which some projects like Zuul have switched to, is not listing a bunch of specific pyXY versions in the tox envlist, but just putting "py3" instead. This will cause folks who are running `tox` to get unit tests with whatever their default python3 interpreter is, but also they'll get a clear error if they don't have any python3 interpreter at all. If someone has Python 3.8 installed and it isn't their default python3 but they still want to test with it, they can of course do `tox -e py38` and that will work as expected. This also means you no longer have to update the envlist in tox.ini every time you add or remove support for a specific interpreter version. Besides, tox.ini is not a good place to list what versions of the interpreter your project supports, that's what trove classifiers in the setup.cfg file are for. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aj at suse.com Mon May 11 19:46:00 2020 From: aj at suse.com (Andreas Jaeger) Date: Mon, 11 May 2020 21:46:00 +0200 Subject: [horizon] [infra] horizon stable gate failure: yarn is required? In-Reply-To: References: <78187bf0-7ea9-fe84-2885-759e68fa8128@gmail.com> Message-ID: <916921e5-8100-189e-c03f-9efd05189910@suse.com> On 11/05/2020 18.19, Monty Taylor wrote: > > >> On May 11, 2020, at 10:54 AM, Monty Taylor wrote: >> >> >> >>> On May 11, 2020, at 2:42 AM, Radosław Piliszek wrote: >>> >>> Hi Akihiro, >>> >>> from my PoV yarn should be installed by projects that require it, especially since it's not baked into the instance image but tried to be installed each time (and hence causes these late failures). >>> >>> Since the jobs are now broken anyway, I think removing yarn from pre play is the right move. >>> Obviously, ensure-yarn has to be fixed to be useful for those parties who require yarn and older nodejs. 
>>> And "those parties" in case of OpenStack seem to be only RefStack, which means there is only one project to adapt then. >> >> What good timing - I was actually working on updated javascript jobs over the weekend: >> >> https://review.opendev.org/726547 >> >> I think we can update that to also only install yarn when it’s going to use yarn, I’ll make an update - and then I think if we update horizon to the new jobs this should be no problem. > > Updated - maybe a patch to horizon on the v4 stable branches with a depends-on that patch updating to the new jobs would validate that the new jobs fix the issue? Like this? https://review.opendev.org/726940 Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From smooney at redhat.com Mon May 11 20:02:47 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 11 May 2020 21:02:47 +0100 Subject: Tox basepython and Python3 In-Reply-To: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> References: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> Message-ID: <4318854b4524f1cbe8b52c2e41888b52ad3169ef.camel@redhat.com> On Mon, 2020-05-11 at 10:34 -0700, Clark Boylan wrote: > Hello everyone, > > This has come up a few times on IRC so we are probably well overdue for a email about it. Long story short, if your > tox.ini config sets basepython [0] to `python3` and you also use py35, py36, py37, or py38 test targets there is a > good chance you are not testing what you intend to be testing. You also need to set ignore_basepython_conflict to true > [1]. > > The reason for this is basepython acts as an override for the python executable to be used when creating tox > virtualenvs. `python3` on most platforms indicates a specific python3 version: Ubuntu Xenial 3.5, Ubuntu Bionic and > CentOS 8 3.6, and so on. 
This means that even though you are asking for python 3.7 via py37 you'll get whatever > version `python3` is on the running platform. To address this we can set ignore_basepython_conflict and tox will use > the version specified as part of the target and not the basepython override. > > You might wonder why using basepython is useful at all given this situation. The reason for it is the default python > used by tox for virtualenvs is the version of python tox was installed under. This means that if tox is running under > python2 it will use python2 for virtualenvs when no other version is set. Since many software projects are now trying > to drop python2 support they want to explicitly force python3 in the default case. basepython gets us halfway there, > ignore_basepython_conflict the rest of the way. > > [0] https://tox.readthedocs.io/en/latest/config.html#conf-basepython > [1] https://tox.readthedocs.io/en/latest/config.html#conf-ignore_basepython_conflict > > Hopefully this helps explain some of tox's odd behavior in a beneficial way. Now go and check your tox.ini files :) yep, and to reinforce that point we also have this big warning comment in nova so that people know why we do this. https://github.com/openstack/nova/blob/master/tox.ini#L4-L7 in later versions of tox ignore_basepython_conflict = True will be the default, as that is generally less surprising behavior, but with our current min version both are needed. > Clark > From renat.akhmerov at gmail.com Tue May 12 05:20:38 2020 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 12 May 2020 12:20:38 +0700 Subject: [all][PTL] U Community Goal: Project PTL & Contrib Docs Update #9 In-Reply-To: References: Message-ID: Hi Kendall, We’ve already done that for Mistral: https://docs.openstack.org/mistral/latest/developer/contributor/contributing.html Let us know if anything else is needed at this point. Thanks Renat Akhmerov @Nokia On 9 May 2020, 09:56 +0700, Kendall Nelson , wrote: > Hello Everyone!
> > Even more excellent progress[1] has been made as we are inching closer to completion. Again, thank you for all the hard work! > > Projects listed below are missing progress/merged status according to the story[4]. If you are on this list, please get started IMMEDIATELY. All told, if you are doing the bare minimum of only filling out the template[2] it shouldn't take more than 5-10 minutes per repo. > • Barbican > • Cloudkitty > • Congress > • Cyborg > • Ec2-API > • Freezer > • Heat > • Horizon > • Karbor > • Kuryr > • LOCI > • Masakari > • Mistral > • Monasca > • Murano > • OpenStack Charms > • openstack-chef > • OpenStackAnsible > • OpenStackClient > • Packaging-RPM > • Placement > • Puppet OpenStack > • Rally > • Release Management > • Senlin > • Solum > • Storlets > • Swift > • Tacker > • Telemetry > • Tricircle > • TripleO > • Zaqar > > PLEASE add me if you need help with reviews and let me know if there is any other help I can provide; we are nearly out of time and there is a lot of work to be done still. > > If you have questions about the goal itself, here is the link to that[3]. > > And as you push patches, please be sure to update the StoryBoard story[4]. > > Thanks! > > -Kendall (diablo_rojo) > > > [1] Progress: https://review.opendev.org/#/q/topic:project-ptl-and-contrib-docs+(status:open+OR+status:merged) > [2] Cookiecutter Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst > [3] Accepted Goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html > [4] Task Tracking: https://storyboard.openstack.org/#!/story/2007236 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hjensas at redhat.com Tue May 12 07:53:56 2020 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Tue, 12 May 2020 09:53:56 +0200 Subject: [TripleO] External network on compute node In-Reply-To: References: <98236409f2ea0a5f0a7bffd3eb1054ef5399ba69.camel@redhat.com> Message-ID: <639d0b4c7bd27fa125809f9f384db680cae22c4a.camel@redhat.com> On Sun, 2020-05-10 at 22:34 +0430, Reza Bakhshayeshi wrote: > Hi Harald, Thanks for your explanation. > > ' /usr/share/openstack-tripleo-heat-templates/ ' was just happened > during copy-pasting here :) > > I exported the overcloud plan, and it was exactly same as what I sent > before. > Ok, I was worried the jinja2 rendering would produce different results in the plan, in case you had manually edited these files: ~/openstack-tripleo-heat-templates/environments/network-isolation.yaml ~/openstack-tripleo-heat-templates/environments/network-environment.yaml > I redeployed everything and it didn't help either. > > What are the problems of my network-isolation.yaml and network- > environment.yaml files in your opinion? I'm not quite sure what is wrong, but the fact that you got 'ip_netmask': '' is an indication that some resource might be a Noop resource or a custom resource without validations? While it should actually be a port/network resource. The output of 'openstack stack environment show overcloud' might help. (NOTE: sanitize the output of that removing sensitive data before posting it to a public place ...) > I have to add that in this environment I don't know why the external > network doesn't provide Internet for VMs. > But everything else works fine. > > I don't have any Vlan configured in my environment and I'm planning > to only have flat and geneve networks, and having external network > for every compute node so I can ignore provisioning network > bottleneck and spof. 
> > Regards, > Reza > > On Wed, 6 May 2020 at 17:10, Harald Jensås > wrote: > > On Wed, 2020-05-06 at 14:00 +0430, Reza Bakhshayeshi wrote: > > > here is my deploy command: > > > > > > openstack overcloud deploy \ > > > --control-flavor control \ > > > --compute-flavor compute \ > > > --templates ~/openstack-tripleo-heat-templates \ > > > -r /home/stack/roles_data.yaml \ > > > -e /home/stack/containers-prepare-parameter.yaml \ > > > -e environment.yaml \ > > > -e /usr/share/openstack-tripleo-heat- > > > templates/environments/services/octavia.yaml \ > > > > This is not related, but: > > Why use '/usr/share/openstack-tripleo-heat-templates/' and not > > '~/openstack-tripleo-heat-templates/' here? > > > > > -e ~/openstack-tripleo-heat- > > templates/environments/services/neutron- > > > ovn-dvr-ha.yaml \ > > > -e ~/openstack-tripleo-heat-templates/environments/docker-ha.yaml > > \ > > > -e ~/openstack-tripleo-heat-templates/environments/network- > > > isolation.yaml \ > > > -e ~/openstack-tripleo-heat-templates/environments/network- > > > environment.yaml \ > > > > Hm, I'm not sure network-isolation.yaml and network- > > environment.yaml > > contains what you expect. Can you do a plan export? > > > > openstack overcloud plan export --output-file oc-plan.tar.gz > > overcloud > > > > Then have a look at `environments/network-isolation.yaml` and > > `environments/network-environment.yaml` in the plan? > > > > I think you may want to copy these two files out of the templates > > tree > > and use the out of tree copies instead. 
> > > > > --timeout 360 \ > > > --ntp-server time.google.com -vvv > > > > > > network-environment.yaml: > > > http://paste.openstack.org/show/793179/ > > > > > > network-isolation.yaml: > > > http://paste.openstack.org/show/793181/ > > > > > > compute-dvr.yaml > > > http://paste.openstack.org/show/793183/ > > > > > > I didn't modify network_data.yaml > > > > > > > > > > > > -- > > Harald > > > > > On Wed, 6 May 2020 at 05:27, Harald Jensås > > > wrote: > > > > On Tue, 2020-05-05 at 23:25 +0430, Reza Bakhshayeshi wrote: > > > > > Hi all. > > > > > The default way of compute node for accessing Internet if > > through > > > > > undercloud. > > > > > I'm going to assign an IP from External network to each > > compute > > > > node > > > > > with default route. > > > > > But the deployment can't assign an IP to br-ex and fails > > with: > > > > > > > > > > " raise AddrFormatError('invalid IPNetwork > > %s' > > > > % > > > > > addr)", > > > > > "netaddr.core.AddrFormatError: invalid > > IPNetwork > > > > ", > > > > > > > > > > Actually 'ip_netmask': '' is empty during deployment for > > compute > > > > > nodes. > > > > > I've added external network to compute node role: > > > > > External: > > > > > subnet: external_subnet > > > > > > > > > > and for network interface: > > > > > - type: ovs_bridge > > > > > name: bridge_name > > > > > mtu: > > > > > get_param: ExternalMtu > > > > > dns_servers: > > > > > get_param: DnsServers > > > > > use_dhcp: false > > > > > addresses: > > > > > - ip_netmask: > > > > > get_param: ExternalIpSubnet > > > > > routes: > > > > > list_concat_unique: > > > > > - get_param: ExternalInterfaceRoutes > > > > > - - default: true > > > > > next_hop: > > > > > get_param: > > > > ExternalInterfaceDefaultRoute > > > > > members: > > > > > - type: interface > > > > > name: nic3 > > > > > mtu: > > > > > get_param: ExternalMtu > > > > > use_dhcp: false > > > > > primary: true > > > > > > > > > > Any suggestion would be grateful. 
> > > > > Regards, > > > > > Reza > > > > > > > > > > > > > I think we need more information to see what the issue is. > > > > - your deploy command? > > > > - content of network_data.yaml used (unless the default) > > > > - environment files related to network-isolation, network- > > > > environment, > > > > network-isolation? > > > > > > > > > > > > -- > > > > Harald > > > > > > > > > > From mdulko at redhat.com Tue May 12 09:05:10 2020 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Tue, 12 May 2020 11:05:10 +0200 Subject: [all][PTL] U Community Goal: Project PTL & Contrib Docs Update #9 In-Reply-To: References: Message-ID: <38eded590c9e8024e77a392de36f731c21d3c73f.camel@redhat.com> On Fri, 2020-05-08 at 19:55 -0700, Kendall Nelson wrote: > Hello Everyone! > > Even more excellent progress[1] has been made as we are inching closer to completion. Again, thank you for all the hard work! > > Projects listed below are missing progress/merged status according to the story[4]. If you are on this list, please get started IMMEDIATELY. All told, if you are doing the bare minimum of only filling out the template[2] it shouldn't take more than 5-10 minutes per repo. > Barbican > Cloudkitty > Congress > Cyborg > Ec2-API > Freezer > Heat > Horizon > Karbor > Kuryr I updated status of Kuryr task on storyboard to reflect that this is done. > LOCI > Masakari > Mistral > Monasca > Murano > OpenStack Charms > openstack-chef > OpenStackAnsible > OpenStackClient > Packaging-RPM > Placement > Puppet OpenStack > Rally > Release Management > Senlin > Solum > Storlets > Swift > Tacker > Telemetry > Tricircle > TripleO > Zaqar > > PLEASE add me if you need help with reviews and let me know if there is any other help I can provide; we are nearly out of time and there is a lot of work to be done still. > > If you have questions about the goal itself, here is the link to that[3]. > > And as you push patches, please be sure to update the StoryBoard story[4]. > > Thanks! 
> > -Kendall (diablo_rojo) > > > [1] Progress: https://review.opendev.org/#/q/topic:project-ptl-and-contrib-docs+(status:open+OR+status:merged) > [2] Cookiecutter Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst > [3] Accepted Goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html > [4] Task Tracking: https://storyboard.openstack.org/#!/story/2007236 From stephenfin at redhat.com Tue May 12 09:48:53 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 12 May 2020 10:48:53 +0100 Subject: Tox and missing interpreters (was: Tox basepython and Python3) In-Reply-To: <20200511182024.euqmssxm2gtwg5ch@yuggoth.org> References: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> <20200511182024.euqmssxm2gtwg5ch@yuggoth.org> Message-ID: <8ac0ed30b52e401337445486cc80de1dcfc9bce1.camel@redhat.com> On Mon, 2020-05-11 at 18:20 +0000, Jeremy Stanley wrote: > On a related note, the proliferation of tested Python versions has > led many projects to enable the skip_missing_interpreters option in > their tox.ini files. Please don't, this is just plain DANGEROUS. The 'py3' option listed below is a better one, but for those that *really* want this behavior: alias 'tox=tox --skip-missing-interpreters' Your tests all "pass" locally, but the gate continues to properly test things. Alternatively, use Fedora where all supported Python versions are packaged by default and this isn't an issue :P Stephen > I know it's nice that when you've got a bunch of Python versions in > your default tox envlist some of which a typical developer may not > have installed, they can still run `tox` and not get errors about > those. However, it also means that if you run `tox -e py38` and > don't have any Python 3.8 interpreter, tox will happily say it did > nothing successfully. Yes it's fairly obvious when you see it happen > locally. 
It's far less obvious when you add a Python 3.8 job in the > gate but don't make sure that interpreter is actually installed then > and get back a +1 from Zuul when tox ran no tests at all. > > An alternative solution, which some projects like Zuul have switched > to, is not listing a bunch of specific pyXY versions in the tox > envlist, but just putting "py3" instead. This will cause folks who > are running `tox` to get unit tests with whatever their default > python3 interpreter is, but also they'll get a clear error if they > don't have any python3 interpreter at all. If someone has Python 3.8 > installed and it isn't their default python3 but they still want to > test with it, they can of course do `tox -e py38` and that will work > as expected. This also means you no longer have to update the > envlist in tox.ini every time you add or remove support for a > specific interpreter version. Besides, tox.ini is not a good place > to list what versions of the interpreter your project supports, > that's what trove classifiers in the setup.cfg file are for. From ruslanas at lpic.lt Tue May 12 10:42:38 2020 From: ruslanas at lpic.lt (Ruslanas Gžibovskis) Date: Tue, 12 May 2020 12:42:38 +0200 Subject: [TripleO][Rdo][Train] Deploy OpenStack using only provisioning network In-Reply-To: References: Message-ID: Thank you Luke and Harald. I was following your recommendations and links, and I have managed to do these modifications to the setup shared by Harald, to adjust to my needs. https://github.com/qw3r3wq/homelab I have done a clone, and have updated my changes. I have 3 main issues now: 1) when deploying overcloud, it does not add the ssh key to the authorized hosts, and gets a timeout, but I can work with that. 1.solution) while running installation I ssh into it, before ansible tries... shitty workaround, but should be ok for POC, need to fix it also.
*2) As you see from the config files, I use the local undercloud as the repo for container images, but it is not able to fetch data from there, as it is marked as secure, but the undercloud configures it as insecure. Can I somehow specify to the installer, so it would modify /etc/container(s)/repositories.conf to add the undercloud IP and url to the insecure repo list, cause it helps to fix my issues. but then I cannot proceed as it has part of things up, so I need to do a fresh setup, which is without insecure repos.* *2.solution) no ideas.* 3) Problem: when setting up the undercloud with proxy variables exported, it adds them into containers, but even though I have no_proxy which has the idrac IP specified, or range, ironic-conductor sends requests to redfish using the proxy... 3.solution) I think the solution would be to use the undercloud repo (predownloaded images) and make the undercloud install from it, but when I even add 'insecure' repos value to $local_ip it drops error [1] trying to connect to repo....docker.io Any thoughts? [1] Retrying (Retry(total=7, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /v2 ... raise ConnectionError(e, request=request) ConnectionError: HTTPSConnectionPool(host='registries-1.docker.io', port=443): Max retries exceeded with url: /v2/ (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) On Tue, 5 May 2020 at 20:17, Harald Jensås wrote: > On Mon, 2020-05-04 at 00:20 +0200, Ruslanas Gžibovskis wrote: > > Hi all, > > > > I am doing some testing and will do some deployment on some remote > > hosts. > > > > Remote hosts will use provider network only specific for each > > compute. > > > > I was thinking, do I really need all the External, InternalAPI, > > Storage, StorageManagemnt, Tenant networks provided to all of the > > nodes?
Maybe I could use a Provision network for all of that, and > > make swift/glance copy on all computes to provide local images. > > > > I understand, if I do not have tenant network, all VM's in same > > project but in different sites, will not see each other, but it is ok > > at the moment. > > > > Thank you for your help > > > > I use tripleo to deploy a single node aio with only 1 network interface > as a lab at home. You can see the configuration here: > > https://github.com/hjensas/homelab/tree/master/overcloud > > Basically I use an empty network data file, and removed > the 'networks' section in my custom role data file. > > With no networks defined everything is placed on the 'ctlplane' (i.e. > provisioning network). Same thing you are asking for? > > > I think you can do the same thing. For the provider networks I believe > you will need per-role NeutronBridgeMappings i.e. something like: > > ControllerParameters: > NeutronBridgeMappings: br-ex:provider0 > ComputeSite1: > NeutronBridgeMappings: br-foo:provider1 > ComputeSite2: > NeutronBridgeMappings: br-bar:provider2 > > > -- > Harald > > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johannes.kulik at sap.com Tue May 12 11:08:52 2020 From: johannes.kulik at sap.com (Johannes Kulik) Date: Tue, 12 May 2020 13:08:52 +0200 Subject: [oslo][nova][vmware] Replacement for suds library In-Reply-To: <3616729F-8084-45BD-AA13-3E5487A4937D@vmware.com> References: <3616729F-8084-45BD-AA13-3E5487A4937D@vmware.com> Message-ID: <9c2e1119-f224-d450-07ae-6d496157ae69@sap.com> > > On 2020/3/20 at 10:10 PM, "Stephen Finucane" wrote: >> >> The suds-jurko library used by oslo.vmware is emitting the following >> warnings in nova tests.
>> /nova/.tox/py36/lib/python3.6/site-packages/suds/resolver.py:89: DeprecationWarning: invalid escape sequence \% >> self.splitp = re.compile('({.+})*[^\%s]+' % ps[0]) >> /nova/.tox/py36/lib/python3.6/site-packages/suds/wsdl.py:619: DeprecationWarning: invalid escape sequence \s >> body.parts = re.split('[\s,]', parts) >> >> These warnings are going to be errors in Python 3.10 [1]. We have over >> 18 months before we need to worry about this [2], but I'd like to see >> some movement on this well before then. It seems the suds-jurko fork is >> dead [3] and movement on yet another fork, suds-community, is slow [4]. >> How difficult would it be to switch over to something that does seem >> maintained like zeep [5] and, assuming it's doable, is anyone from >> VMWare/SAP able to do this work? >> >> Stephen > > Stephen, > > Thank you very much for pointing this out. Lichao (xuel at vmware.com) and I from VMware will involve into this issue. > > Do you think zeep is a good alternative of suds ? Or did the replacement already take place on other project ? > > We would like to make assessment to the zeep first and then make an action plan. > > Yingji. Hi Yingji, Stephen, we've been working downstream on switching oslo.vmware away from suds. We're still on queens, but oslo.vmware didn't change too much since then ... We've opted for zeep, because it's 1) currently still maintained, 2) promises python 3 support, and 3) uses lxml at its base, thus giving a performance boost, which we need. In our tests, a script doing some simple queries against vSphere 6.5 finished in ~5s (zeep) instead of ~10s (suds). Looking at nova-compute nodes, the CPU usage decreased drastically, which is what we were aiming for. We haven't run it in production, yet, but are planning to do so gradually. We're willing to put some more work into getting the changes upstream, if someone can assist in the process and if you're fine with that.
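Stepping back to the warnings Stephen quoted at the top of the thread: they come from non-raw string literals containing unrecognized escape sequences like `\%`, and that class of warning can be reproduced and fixed in isolation, whichever SOAP library ends up being used. A small stdlib-only sketch (the variable names are made up; note that recent CPython releases report the problem as a SyntaxWarning rather than a DeprecationWarning):

```python
import warnings

def warnings_from(src):
    """Compile a source snippet and return the warnings it triggers."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        compile(src, "<example>", "exec")
    return caught

# Non-raw literal with an unrecognized escape, like suds' '[^\%s]+' pattern.
bad = warnings_from(r"splitp = '({.+})*[^\%s]+'")
# Raw-string version: the backslash is taken literally, so no warning.
good = warnings_from(r"splitp = r'({.+})*[^\%s]+'")

assert any("invalid escape sequence" in str(w.message) for w in bad)
assert not good
```

Adding the raw-string prefix is the usual fix such patterns need; the same warnings become hard errors once invalid escapes are promoted per the deprecation plan mentioned above.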
To get a glimpse at the changes necessary for oslo.vmware, have a look at
[1]. These are for queens, though.

We've also put some work in to make the transition easier, by moving
suds-specific code from nova into oslo.vmware and providing some
helper-functions for the differences in ManagedObjectReference attribute
access [2], [3], [4], [5].

Obviously, there are changes needed in nova and cinder, if we need to use
helper-functions. For nova, we've got a couple of patches that are not
public yet.

Sorry for coming into this with a "solution" already, but we have a direct
need for switching downstream, as explained above.

Have a nice day otherwise,
Johannes

[1] https://github.com/sapcc/oslo.vmware/pull/4/files
[2] https://github.com/sapcc/oslo.vmware/commit/84d3e3177affa8dbffdbc0ecf0cbc2aea6b3dbde
[3] https://github.com/sapcc/oslo.vmware/commit/385a0352beab3ddb8273138abd31f0788638bb76
[4] https://github.com/sapcc/oslo.vmware/commit/6a531ba84e8db43db1cb9ff433e6d18cdd98a4c6
[5] https://github.com/sapcc/oslo.vmware/commit/993fe5f98a7b8172710af4f27b6c1a3eabb1c7d4

--
Johannes Kulik
IT Architecture Senior Specialist
*SAP SE *| Rosenthaler Str. 30 | 10178 Berlin | Germany

From smooney at redhat.com Tue May 12 12:43:12 2020
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 12 May 2020 13:43:12 +0100
Subject: Tox and missing interpreters (was: Tox basepython and Python3)
In-Reply-To: <8ac0ed30b52e401337445486cc80de1dcfc9bce1.camel@redhat.com>
References: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com>
 <20200511182024.euqmssxm2gtwg5ch@yuggoth.org>
 <8ac0ed30b52e401337445486cc80de1dcfc9bce1.camel@redhat.com>
Message-ID: <018fc56c2cf9d795de647828917eb442bbe0f7bc.camel@redhat.com>

On Tue, 2020-05-12 at 10:48 +0100, Stephen Finucane wrote:
> On Mon, 2020-05-11 at 18:20 +0000, Jeremy Stanley wrote:
> > On a related note, the proliferation of tested Python versions has
> > led many projects to enable the skip_missing_interpreters option in
> > their tox.ini files.
Please don't, this is just plain DANGEROUS.

why is this dangerous? it won't cause the ci jobs to be skipped since we
guarantee the interpreter will be present. i was planning to add this to
cyborg in https://review.opendev.org/#/c/708705/1/tox.ini and had planned
to add it to nova too, so what is the reason for considering it dangerous?
i think we should be adding that to all projects by default, so if there
is a strong reason not to do this i would like to hear why.

> The 'py3' option listed below is a better one, but for those that
> *really* want this behavior:
>
> alias 'tox=tox --skip-missing-interpreters'
>
> Your tests all "pass" locally, but the gate continues to properly test
> things.
>
> Alternatively, use Fedora where all supported Python versions are
> packaged by default and this isn't an issue :P
>
> Stephen
>
> > I know it's nice that when you've got a bunch of Python versions in
> > your default tox envlist some of which a typical developer may not
> > have installed, they can still run `tox` and not get errors about
> > those. However, it also means that if you run `tox -e py38` and
> > don't have any Python 3.8 interpreter, tox will happily say it did
> > nothing successfully. Yes it's fairly obvious when you see it happen
> > locally. It's far less obvious when you add a Python 3.8 job in the
> > gate but don't make sure that interpreter is actually installed then
> > and get back a +1 from Zuul when tox ran no tests at all.
> >
> > An alternative solution, which some projects like Zuul have switched
> > to, is not listing a bunch of specific pyXY versions in the tox
> > envlist, but just putting "py3" instead. This will cause folks who
> > are running `tox` to get unit tests with whatever their default
> > python3 interpreter is, but also they'll get a clear error if they
> > don't have any python3 interpreter at all.
> > If someone has Python 3.8
> > installed and it isn't their default python3 but they still want to
> > test with it, they can of course do `tox -e py38` and that will work
> > as expected. This also means you no longer have to update the
> > envlist in tox.ini every time you add or remove support for a
> > specific interpreter version. Besides, tox.ini is not a good place
> > to list what versions of the interpreter your project supports;
> > that's what trove classifiers in the setup.cfg file are for.

From nate.johnston at redhat.com Tue May 12 14:05:25 2020
From: nate.johnston at redhat.com (Nate Johnston)
Date: Tue, 12 May 2020 10:05:25 -0400
Subject: [neutron] parent ports for trunks being claimed by instances
Message-ID: <20200512140525.khc32evagyb7366r@firewall>

Neutron developers,

I am currently working on an issue with trunk ports that has come up a few
times in my direct experience, and I hope that we can create a long term
solution. I am hoping that developers with experience in trunk ports can
validate my approach here, especially regarding fixing current behavior
without introducing an API regression.

By way of introduction to the specifics of the issue, let me blockquote
from the LP bug I raised for this [1]:

----
When you create a trunk in Neutron you create a parent port for the trunk
and attach the trunk to the parent. Then subports can be created on the
trunk. When instances are created on the trunk, first a port is created
and then an instance is associated with a free port. It looks to me that
this is the oversight in the logic. From the perspective of the code, the
parent port looks like any other port attached to the trunk bridge. It
doesn't have an instance attached to it so it looks like it's not being
used for anything (which is technically correct). So it becomes an
eligible port for an instance to bind to.
That is all fine and dandy until you go to delete the instance and you get
the "Port [port-id] is currently a parent port for trunk [trunk-id]"
exception just as happened here. Anecdotally, it seems rare that an
instance will actually bind to it, but that is what happened for the user
in this case and I have had several pings over the past year about people
in a similar state.

I propose that when a port is made the parent port for a trunk, the trunk
be established as the owner of the port. That way it will be ineligible
for instances seeking to bind to the port.
----

Clearly the above behavior indicates a bug that should be rectified in
master and stable branches. Nobody wants a VM that can't be fully deleted
because the port can't ever be deleted. This is especially egregious when
it causes heat stack deletion failures.

I am mostly concerned that by adding the trunk as an owner of the parent
port, the trunk will need to be deleted before the parent port can be
deleted, otherwise a PortInUse error will occur when the port is deleted
(i.e. on tempest test teardown). That to me seems indicative of an
inadvertent API change. Do you think it's all right to say that if you
delete a port that is a parent port of a trunk, and that trunk has no
other subports, that the trunk deletion is implicit? Is that the lowest
impact to the API that we can incur to resolve this issue?

Your wisdom is appreciated,

Nate

[1] https://bugs.launchpad.net/neutron/+bug/1878031

From whayutin at redhat.com Mon May 11 20:28:13 2020
From: whayutin at redhat.com (Wesley Hayutin)
Date: Mon, 11 May 2020 14:28:13 -0600
Subject: [tripleo] triaged bugs older than 365 days
Message-ID: 

Greetings,

I have moved any bug older than 365 days that are in Triaged status to
"incomplete". If you think it should still be addressed just flip the bug
back from incomplete to triaged and add some updated comments. I'm pretty
sure just commenting on the bug will keep it from expiring as well.
Email me directly if you have any questions or concerns about a particular bug. We're just cleaning up. Thanks 1811713,Triaged,in-stable-queens in-stable-rocky, https://bugs.launchpad.net/tripleo/+bug/1811713,"'rocky undercloud fails to install'" 1812640,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1812640,"'iptables rules are not host-based when using podman'" 1814250,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1814250,"'races condition at os-net-config in overcloud deploy OVB'" 1814880,Triaged,in-stable-queens in-stable-rocky queens-backport-potential rocky-backport-potential, https://bugs.launchpad.net/tripleo/+bug/1814880,"'Containerized HAProxy cannot log in a dedicated log file'" 1815134,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1815134,"'openstack tripleo deploy stack failures truncate useful error messages'" 1815226,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1815226,"'puppet duplicate declaration error'" 1817356,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1817356,"'tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001 check job in RDO is failing on ovb stack cleanup'" 1817552,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1817552,"'periodic fs021 failing on overcloud deploy'" 1820671,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1820671,"'No matching service: barbican_tempest_plugin.tests.scenario.test_image_signing.ImageSigningTest'" 1821790,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1821790,"'selinux ssh denieals on CentOS/RHEL 8'" 1821854,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1821854,"'mistral leaks ssh processes'" 1822540,Triaged,security-hardening, https://bugs.launchpad.net/tripleo/+bug/1822540,"'By default show_image_direct_url MUST be set to False'" 1822609,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1822609,"'neutron ovs cleanup is docker.service specific'" 1823527,Triaged,dhcp-agent neutron unhealthy, https://bugs.launchpad.net/tripleo/+bug/1823527,"'neutron's dockers are unhealthy'" 
1824993,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1824993,"'[network] Ceph monitor doesn't bootstrap in a standalone deployment'" 1825826,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1825826,"'standalone-upgrade failing during deploy steps for CalledProcessError: Command '['systemctl', 'stop', u'tripleo_keystone.service']' returned non-zero exit status'" 1826179,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1826179,"'standalone-upgrade fails during deploy steps (generate config) step1 for 'Evaluation Error:... Could not find class ::tripleo::profile::base::placement::api' '" 1828172,Triaged,zuul-reproducer, https://bugs.launchpad.net/tripleo/+bug/1828172,"'No match for argument: python2-dnf on tripleo reproducer libvirt Fedora 30'" 1828233,Triaged,ci zuul-reproducer, https://bugs.launchpad.net/tripleo/+bug/1828233,"'[reproducer] standalone deploy failed to access mirrors'" 1828276,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1828276,"'reproducer: still getting permission errors on docker commands by user on centos-7'" 1560111,Triaged,python-tripleoclient, https://bugs.launchpad.net/tripleo/+bug/1560111,"'The list of enabled Sahara plugins should be written to the generated deployer-input file'" 1680894,Triaged,containers validations, https://bugs.launchpad.net/tripleo/+bug/1680894,"'RFE: create a validation playbook for a containerized deployment'" 1705994,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1705994,"'openstack overcloud image upload - should support updating specific image '" 1706112,Triaged,composable-roles, https://bugs.launchpad.net/tripleo/+bug/1706112,"'Specifying custom flavor for controllers is inconsistent with other roles'" 1709706,Triaged,tripleoclient, https://bugs.launchpad.net/tripleo/+bug/1709706,"'Need better client-side error reporting'" 1714214,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1714214,"'Lack of documentation about TCP ports used by TripleO Services'" 
1714544,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1714544,"'cant delete stuck with network-isolation'" 1716391,Triaged,ux,https://bugs.launchpad.net/tripleo/+bug/1716391,"'Should be possible to aggregate some Neutron configs across templates'" 1722218,Triaged,workflows,https://bugs.launchpad.net/tripleo/+bug/1722218,"'node cleaning to be a part of stack-delete'" 1722558,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1722558,"'Promoter: identify a way to store privately the secrets used by the promotion server, so they can be used in all the promotion-setup runs'" 1722871,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1722871,"'Promoter: add more disk space to promoter server'" 1723229,Triaged,containers validations, https://bugs.launchpad.net/tripleo/+bug/1723229,"'[RFE] Deploy should fail much earlier if the container images aren't found'" 1726593,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1726593,"'Cinder API v2 support needs to be removed in TripleO'" 1729541,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1729541,"' The VIP of OpenDaylight's Neutron Northbound port is configured to listen on Control Plane network'" 1729835,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1729835,"'Message formatting issue when retries all fail (introspection)'" 1734158,Triaged,quickstart ux, https://bugs.launchpad.net/tripleo/+bug/1734158,"'Adapt config downloads in quickstart for deployed servers'" 1734947,Triaged,quickstart tech-debt, https://bugs.launchpad.net/tripleo/+bug/1734947,"'RFE: support role data generation in quickstart'" 1736254,Triaged,pike stable tripleo tripleo-heat-templates, https://bugs.launchpad.net/tripleo/+bug/1736254,"'Using composable network or custom named network for MysqlNetwork results in deployment failure'" 1736272,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1736272,"'integrate elastic recheck into tempest email results'" 1736499,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1736499,"'quickstart 
fails with http_proxy'" 1737768,Triaged,ci quickstart tech-debt validations, https://bugs.launchpad.net/tripleo/+bug/1737768,"'CI/featureset010: validations aren't fatal'" 1740396,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1740396,"'Validation: missing check for HostnameMap'" 1740940,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1740940,"'Neutron timeout has different settings on ctrl and comp'" 1741288,Triaged,tripleo-common tripleoclient ux workflows, https://bugs.launchpad.net/tripleo/+bug/1741288,"''overcloud node' commands should allow deleting by node name also'" 1741470,Triaged,documentation, https://bugs.launchpad.net/tripleo/+bug/1741470,"'Improve docs for manual install on a virtual env'" 1742237,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1742237,"'Failure when deploying the overcloud on a predeployed server'" 1743239,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1743239,"'deployment failure with external ceph setup'" 1745090,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1745090,"'Heat engine resource check failed when adding a node after undercloud newton->ocata->pike->master upgrade'" 1745223,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1745223,"'default reproduce-script to deploy latest code'" 1745225,Triaged,quickstart ux, https://bugs.launchpad.net/tripleo/+bug/1745225,"'devmode.sh should support a mode to tear down without redeployment'" 1745449,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1745449,"'neutron interface driver names should use shortnames'" 1745500,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1745500,"'pre-provisioned nodes hostnames should not resolve to 127.0.0.1'" 1746285,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1746285,"'RFE: measure undercloud puppet install steps and leave artifact'" 1746291,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1746291,"'RFE: measure the overcloud deployment steps and leave artifact '" 
1747606,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1747606,"'undercloud vm - has an unusable dns from the last network, if network > 1'" 1747999,Triaged,containers,https://bugs.launchpad.net/tripleo/+bug/1747999,"'docker-puppet.py config hash update logging is misleading'" 1749435,Triaged,workflows,https://bugs.launchpad.net/tripleo/+bug/1749435,"'Node delete always overwrites *RemovalPolicies'" 1749446,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1749446,"'Novajoin availability should not impeach deploy'" 1749477,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1749477,"'Quickstart installed undercloud async tasks broken'" 1751929,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1751929,"'Quickstart script kills non-root user processes during initial deploy'" 1752132,Triaged,ui,https://bugs.launchpad.net/tripleo/+bug/1752132,"'horizon returns 500 error due to missing heat-dashboard'" 1752427,Triaged,quickstart tech-debt, https://bugs.launchpad.net/tripleo/+bug/1752427,"'containers-check script is installed via pip'" 1752925,Triaged,containers ux, https://bugs.launchpad.net/tripleo/+bug/1752925,"'Deployments with docker containers in unhealthy state should fail deployment'" 1753817,Triaged,ci quickstart tech-debt, https://bugs.launchpad.net/tripleo/+bug/1753817,"'branch jobs on quickstart/extras/tripleo-ci fail if they depend on a master change in a branched repo'" 1755223,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1755223,"'Remove Newton Support from oooq-extras validate-tempest role'" 1756138,Triaged,controller replace, https://bugs.launchpad.net/tripleo/+bug/1756138,"'replace controller fail in build containers in new controller '" 1757211,Triaged,documentation quickstart ux, https://bugs.launchpad.net/tripleo/+bug/1757211,"'[quickstart] overcloud_as_undercloud needs to be clarified and documented'" 1758324,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1758324,"'5 minute delay when provisioning over IPv6'" 
1760735,Triaged,ocata-backport-potential pike-backport-potential queens-backport-potential validations, https://bugs.launchpad.net/tripleo/+bug/1760735,"'[rfe] validation to check for overcloud node clock sync relative to Undercloud'" 1761504,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1761504,"'Worflows 'list' and 'list_groups' from tripleo.validations.v1 don't produce any output'" 1761716,Triaged,pike-backport-potential queens-backport-potential, https://bugs.launchpad.net/tripleo/+bug/1761716,"'docker-puppet puppet-generated dirs not cleaned during rsync'" 1763433,Triaged,ux,https://bugs.launchpad.net/tripleo/+bug/1763433,"'Logical puzzles in tripleo CLI'" 1764476,Triaged,upgrade,https://bugs.launchpad.net/tripleo/+bug/1764476,"'poor use of variable names in tripleo_upgrade role re: triple_ci variable'" 1765787,Triaged,low-hanging-fruit, https://bugs.launchpad.net/tripleo/+bug/1765787,"'01-package-installs should fail the whole image build if it python tracebacks'" 1766199,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1766199,"'Link timedatectl to ntp'" 1767125,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1767125,"'Compute specific L3 agent no longer required'" 1767132,Triaged,nova,https://bugs.launchpad.net/tripleo/+bug/1767132,"'InstanceNameTemplate parameter is not applied in compute nodes'" 1768092,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1768092,"'improve promoter script error message by including dlrn hash'" 1820576,Triaged,tech-debt,https://bugs.launchpad.net/tripleo/+bug/1820576,"'master: Standalone002 job fails, keystone container didn't start because could not bind to address 192.168.24.1:35357'" 1821419,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1821419,"'[Docs] Document Rendering NIC Templates for Customization'" 1821600,Triaged,in-stable-queens queens-backport-potential, https://bugs.launchpad.net/tripleo/+bug/1821600,"'iscsid is not stoped after running FFU'" 1823003,Triaged,in-stable-queens 
queens-backport-potential, https://bugs.launchpad.net/tripleo/+bug/1823003,"'Queens: epmd process is not always started by systemd when initiating undercloud upgrade'" 1823260,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1823260,"'RFE: Provide the ability to retag already existing images in the undercloud registry'" 1823849,Triaged,in-stable-queens in-stable-rocky queens-backport-potential rocky-backport-potential, https://bugs.launchpad.net/tripleo/+bug/1823849,"'trck the enablement of deep_compare with puppet-pacemaker on queens and rocky'" 1824143,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1824143,"'Cleaning ha-env files for ovn'" 1824186,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1824186,"'ansible timeout issue'" 1827724,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1827724,"'Podman service is not used in certain NFV roles'" 1828001,Triaged,documentation, https://bugs.launchpad.net/tripleo/+bug/1828001,"'heat templates now end in j2.yaml on page Configuring Network Isolation in tripleo-docs'" 1686353,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1686353,"'Proxy Options for TripleO QuickStart'" 1708302,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1708302,"'openstack clients which can use ceph, should not need to depend on /etc/ceph'" 1709826,Triaged,ci documentation quickstart, https://bugs.launchpad.net/tripleo/+bug/1709826,"'Having a README file inside the core roles'" 1721841,Triaged,containers,https://bugs.launchpad.net/tripleo/+bug/1721841,"'ceph docker containers do not log to /var/log/containers'" 1723540,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1723540,"'tripleo-quickstart ASCII banner has wrong indentation in logs.o.o'" 1724991,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1724991,"'tracker - tripleo stack create canceled or timeout in stage 3'" 1739620,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1739620,"'quickstart covers up deficiency in swift logging in standard deployment'" 
1746505,Triaged,tripleo-heat-templates, https://bugs.launchpad.net/tripleo/+bug/1746505,"'Keystone caching not enabled by default?'" 1746812,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1746812,"'rdo ovb jobs do not test the change submitted'" 1755368,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1755368,"'Missing centos-binary-tacker on hub.docker.com'" 1757271,Triaged,containers low-hanging-fruit, https://bugs.launchpad.net/tripleo/+bug/1757271,"'openstack overcloud container image prepare should handle slashes in namespace entries'" 1761093,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1761093,"'Upgrade 'Pike BM' -> 'Pike Containers': httpd issue'" 1763089,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1763089,"'undercloud vm fails to start properly'" 1769520,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1769520,"'RFE (reproducer): 'ci dev mode' for iterating on reproducer script/roles'" 1769526,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1769526,"'RFE (libvirt-reproducer): add param to take an array of DNS ip's vs. 
singular param'" 1769527,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1769527,"'RFE (libvirt-reproducer): check for disk space needed as part of initial libvirt-nodepool playbook/role '" 1769529,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1769529,"'RFE (libvirt-reproducer): handle case where 'stack' user already exists on virthost, script fails'" 1769531,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1769531,"'RFE (libvirt-reproducer): Display summary of domains + IP's created by script'" 1771470,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1771470,"'dlrnapi promoter: make output of container existence check more clear'" 1785799,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1785799,"'Multinode jobs in check pipeline in rdo sf are launched in the wrong nodepool provider'" 1789836,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1789836,"'openstack overlay deploy failed with docker error 'not a shared mount''" 1793774,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1793774,"'tripleo-docs out of date around containers'" 1795646,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1795646,"'When using containerized openshift-ansible, we're losing the profile_tasks output callback'" 1801388,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1801388,"'build output is impossible to read due to nested json dumps'" 1801825,Triaged,low-hanging-fruit, https://bugs.launchpad.net/tripleo/+bug/1801825,"'Don't quote {posargs} in tox.ini'" 1805186,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1805186,"'tripleo jobs failing on ImageNotFoundException: Not found image: docker'" 1815208,Triaged,documentation, https://bugs.launchpad.net/tripleo/+bug/1815208,"'Security Hardening in tripleo-docs Incorrect indentation in YAML example'" 1819461,Triaged,containers,https://bugs.launchpad.net/tripleo/+bug/1819461,"''sudo' calls try to connect to host DBus from within a container'" 1828070,Triaged,low-hanging-fruit, 
https://bugs.launchpad.net/tripleo/+bug/1828070,"'puppet deploy has excessive warnings about unresolved dependencies'" 1397477,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1397477,"'rabbitmq-server configure Memory and Disk limit '" 1594389,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1594389,"'[RFE] Add a squid proxy'" 1604525,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1604525,"'enable selinux enforcing as an option'" 1628518,Triaged,ci,https://bugs.launchpad.net/tripleo/+bug/1628518,"'[RFE] add snapshotting functionality'" 1634646,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1634646,"'memcache for keystone authtoken should be a single server'" 1639172,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1639172,"'Fixed IP assignment to physical node during openstack deployment'" 1641419,Triaged,ui ux,https://bugs.launchpad.net/tripleo/+bug/1641419,"'[RFE]: Better handling of large instackenv.json files'" 1647676,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1647676,"'Make `--teardown none` the default behavior'" 1647761,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1647761,"'No overcloud-public-vip in /etc/hosts on undercloud vm'" 1649813,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1649813,"'Use predictable placement by default'" 1650428,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1650428,"'RFE: support of ScaleIO backend for Cinder'" 1654198,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1654198,"'Multiple backends of the same type can not be used in cinder/manila'" 1659452,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1659452,"'[RFE] split out topology and provisioner details from the general config'" 1663025,Triaged,edge quickstart, https://bugs.launchpad.net/tripleo/+bug/1663025,"'[RFE] Add per-node network-to-NIC mapping'" 1663034,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1663034,"'[RFE] Enable and configure DHCP relay on virt-host'" 
1664743,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1664743,"'[RFE] Support Mixed Virt/Bare Metal Environments'" 1666916,Triaged,low-hanging-fruit quickstart, https://bugs.launchpad.net/tripleo/+bug/1666916,"'[quickstart][RFE] Add configuration for multi-nic network isolation'" 1693928,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1693928,"'Support split RPC/notification messaging backends'" 1694773,Triaged,quickstart,https://bugs.launchpad.net/tripleo/+bug/1694773,"'[RFE][Quickstart] Update devmode.sh to display relevant info after deploying to RDO Cloud'" 1698302,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1698302,"'RFE - Use an options file for deployment'" 1729452,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1729452,"'[RFE] TripleO should report IP addresses used on each host post-deployment'" 1731973,Triaged,ui ux,https://bugs.launchpad.net/tripleo/+bug/1731973,"'[RFE] Simplify create/import plan modal'" 1731989,Triaged,ui ux,https://bugs.launchpad.net/tripleo/+bug/1731989,"'[RFE] Automatically introspect and provide nodes to simplify workflow'" 1782139,Triaged,edge upgrade ux, https://bugs.launchpad.net/tripleo/+bug/1782139,"'[RFE] support automatic t-h-t decoupling and versioning for mixed UC/OC cases (git integration for deployment plans and tht)'" 1789197,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1789197,"'Adding THT support for Cinder backup Google cloud driver'" 1812205,Triaged,containers,https://bugs.launchpad.net/tripleo/+bug/1812205,"'ContainerImagePrepare: ability to override specific container images'" 1815918,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1815918,"'[RFE] DeployedServer - Create neutron ports on ctlplane instead of using a fake port and DeployedServerPortMap'" 1817364,Triaged,,https://bugs.launchpad.net/tripleo/+bug/1817364,"'tripleo-build-containers: use quickstart to collect logs'" 1818503,Triaged,ci quickstart, https://bugs.launchpad.net/tripleo/+bug/1818503,"'[RFE] Make the configuration
of quickstart variable-centric'"
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ssbarnea at redhat.com Tue May 12 14:44:43 2020
From: ssbarnea at redhat.com (Sorin Sbarnea)
Date: Tue, 12 May 2020 15:44:43 +0100
Subject: Tox basepython and Python3
In-Reply-To: <4318854b4524f1cbe8b52c2e41888b52ad3169ef.camel@redhat.com>
References: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com>
 <4318854b4524f1cbe8b52c2e41888b52ad3169ef.camel@redhat.com>
Message-ID: <273A2312-37C1-4945-9903-66A76DCDFC71@redhat.com>

I recently discovered that this problem goes even deeper, please read all
comments on https://github.com/tox-dev/tox/issues/1565

Due to this it seems to be impossible to define zuul jobs that use a
specific python version regardless of the environment name. Forcing the
"linters" job to use python3.8 only under CI/CD is impossible.

Using basepython=pythonX.Y combined with ignore_basepython_conflict=False
seems the only way to enforce version use, and it comes at the cost of not
being flexible for developers (as they may not have the exact version that
we want to use on CI jobs).

While for unittest jobs we do use pyXY in the environment name, we do not
have the same for "linters" and "docs". Adding `{pyXY}-linters` to the
environment name seems to be the only way to trick it.

Another (dangerous) approach would be to ensure that the only python
version available on each nodeset used by a tox job is the one we want to
use for testing.

I engaged with Bernat Gabor (tox maintainer) on Gitter/tox-dev about this
issue and apparently there is no solution. Tox v4 has some changes planned
but is far away.

* --discover cannot be used to enforce python detection
* tox's own interpreter version cannot be used to enforce the version
being picked (at least this is what the maintainer told me)

Sadly, after spending a good number of hours on that I am more confused
than I was when I started.
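The enforcing combination described above can be sketched as a tox.ini fragment like this (a sketch only; the flake8 command and the 3.8 pin are placeholders, not a recommendation):

```ini
[tox]
envlist = py3,linters
# do not let a pyXY env name silently win over basepython
ignore_basepython_conflict = false

[testenv:linters]
# pin this env to one interpreter; without skip_missing_interpreters,
# tox fails the env outright when python3.8 is absent instead of
# falling back to whatever `python3` happens to be
basepython = python3.8
commands = flake8 {posargs}
```

This is exactly the trade-off mentioned: the CI node is guaranteed to have python3.8, but a developer without it cannot run the env locally.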
With Zuul nodepool images that can change overnight on multiple zuul
servers (at least 4 I use), it seems that tox-* jobs are going to be a
permanent source of surprises, where we can discover that what they say
they test is not really what they did.

Anyone missing a `make foo` command? ;)

Cheers
Sorin

> On 11 May 2020, at 21:02, Sean Mooney wrote:
>
> On Mon, 2020-05-11 at 10:34 -0700, Clark Boylan wrote:
>> Hello everyone,
>>
>> This has come up a few times on IRC so we are probably well overdue for
>> an email about it. Long story short, if your tox.ini config sets
>> basepython [0] to `python3` and you also use py35, py36, py37, or py38
>> test targets there is a good chance you are not testing what you intend
>> to be testing. You also need to set ignore_basepython_conflict to true
>> [1].
>> >> [0] https://tox.readthedocs.io/en/latest/config.html#conf-basepython >> [1] https://tox.readthedocs.io/en/latest/config.html#conf-ignore_basepython_conflict >> >> Hopefully this helps explain some of tox's odd behavior in a beneficial way. Now go and check your tox.ini files :) > yep and to reinforce that point we also have this big warning comment nova so that people know why we do this. > https://github.com/openstack/nova/blob/master/tox.ini#L4-L7 > in later version of tox ignore_basepython_conflict = True will be the default > as that is generally less surprisng behavior but with our current min version both are needed. >> Clark From gmann at ghanshyammann.com Tue May 12 14:49:27 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 May 2020 09:49:27 -0500 Subject: [all][PTL] U Community Goal: Project PTL & Contrib Docs Update #9 In-Reply-To: References: Message-ID: <172095ca91c.ff0cd90649055.2326341403056598665@ghanshyammann.com> ---- On Tue, 12 May 2020 00:20:38 -0500 Renat Akhmerov wrote ---- > Hi Kendall, > > We’ve already done that for Mistral: https://docs.openstack.org/mistral/latest/developer/contributor/contributing.html > > Let us know if anything else is needed at this point. Thanks Renat, But you need to do this for all the repo under Mistral projects. - CONTRIBUTING.rst needs to be added or updated as per new template for all repo - doc/source/contributor/contributing.rst only needed if repo follow the same task tracking or process etc than mistral repo. For example, - python-mistralclient has seprate LP so we need this file listing python-mistralclient LP - If some repo need single core approval then that repo need this file to mention it explicitly. Basically to document the things which are differrent than mistral contributing.rst -gmann > > Thanks > > Renat Akhmerov > @NokiaOn 9 May 2020, 09:56 +0700, Kendall Nelson , wrote:Hello Everyone! 
> Even more excellent progress[1] has been made as we are inching closer to completion. Again, thank you for all the hard work! > Projects listed below are missing progress/merged status according to the story[4]. If you are on this list, please get started IMMEDIATELY. All told, if you are doing the bare minimum of only filling out the template[2] it shouldn't take more than 5-10 minutes per repo. • Barbican• Cloudkitty• Congress• Cyborg• Ec2-API• Freezer• Heat• Horizon• Karbor• Kuryr• LOCI• Masakari• Mistral• Monasca• Murano• OpenStack Charms• openstack-chef• OpenStackAnsible• OpenStackClient• Packaging-RPM• Placement• Puppet OpenStack• Rally• Release Management• Senlin• Solum• Storlets• Swift• Tacker• Telemetry• Tricircle• TripleO• Zaqar > PLEASE add me if you need help with reviews and let me know if there is any other help I can provide; we are nearly out of time and there is a lot of work to be done still. > If you have questions about the goal itself, here is the link to that[3]. > And as you push patches, please be sure to update the StoryBoard story[4]. > Thanks! 
> -Kendall (diablo_rojo) > > [1] Progress: https://review.opendev.org/#/q/topic:project-ptl-and-contrib-docs+(status:open+OR+status:merged)[2] Cookiecutter Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst[3] Accepted Goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html[4] Task Tracking: https://storyboard.openstack.org/#!/story/2007236 From sean.mcginnis at gmx.com Tue May 12 14:51:23 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 12 May 2020 09:51:23 -0500 Subject: Tox basepython and Python3 In-Reply-To: <273A2312-37C1-4945-9903-66A76DCDFC71@redhat.com> References: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> <4318854b4524f1cbe8b52c2e41888b52ad3169ef.camel@redhat.com> <273A2312-37C1-4945-9903-66A76DCDFC71@redhat.com> Message-ID: <55207102-b651-10e1-6480-940f40a475ce@gmx.com> On 5/12/20 9:44 AM, Sorin Sbarnea wrote: > I recently discovered that this problem goes even deeper, please read all comments on https://github.com/tox-dev/tox/issues/1565 > > Due to this it seems to be impossible to define zuul jobs that use a specific python version regardless the environment name. If you want to force "linters" job to use python3.8 only under CI/CD, is impossible. > > Using basepython=pythonX.Y combined with ignore_basepython_conflict=False seems the only way to enforce version use, and it comes at the cost of not being flexible for developers (as they may not have the exact version that we want to use on CI jobs). > > While for unittest jobs we do use pyXY in the environment name, we do not have the same for "linters", "docs". Alternative to add `{pyXY}-linters` seems to be the only option to trick it. Is there a reason you need a specific py3 version for these jobs? Normally, at least in OpenStack, we just want these jobs to not use py27. 
Which version of py3 is used to build docs is usually not as big of a concern. Sean From cboylan at sapwetik.org Tue May 12 15:01:40 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 12 May 2020 08:01:40 -0700 Subject: Tox basepython and Python3 In-Reply-To: <273A2312-37C1-4945-9903-66A76DCDFC71@redhat.com> References: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> <4318854b4524f1cbe8b52c2e41888b52ad3169ef.camel@redhat.com> <273A2312-37C1-4945-9903-66A76DCDFC71@redhat.com> Message-ID: <2a6e3b19-1a3a-4943-8089-c0320cf6b7ff@www.fastmail.com> On Tue, May 12, 2020, at 7:44 AM, Sorin Sbarnea wrote: > I recently discovered that this problem goes even deeper, please read > all comments on https://github.com/tox-dev/tox/issues/1565 > > Due to this it seems to be impossible to define zuul jobs that use a > specific python version regardless the environment name. If you want to > force "linters" job to use python3.8 only under CI/CD, is impossible. > > Using basepython=pythonX.Y combined with > ignore_basepython_conflict=False seems the only way to enforce version > use, and it comes at the cost of not being flexible for developers (as > they may not have the exact version that we want to use on CI jobs). > > While for unittest jobs we do use pyXY in the environment name, we do > not have the same for "linters", "docs". Alternative to add > `{pyXY}-linters` seems to be the only option to trick it. I don't think that is a trick, this is an intentional feature of tox to solve the problem you have. If you want a specific version of python to be used tox aims to help you do that via the pyXY* environments. https://tox.readthedocs.io/en/latest/config.html#tox-environments clearly describes this behavior. > > Another (dangerous) approach would be to assure that the only python > version available on each nodeset used by tox job is the one we want to > use for testing. 
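As a sketch of that naming convention (the environment name and linter choice here are illustrative, not from any particular project):

```ini
# tox derives the interpreter from the pyXY prefix of the environment
# name, so this environment runs under python3.8 regardless of any
# generic basepython = python3 override elsewhere in tox.ini.
[testenv:py38-linters]
deps = flake8
commands = flake8 {posargs}
```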
> > I engaged with Bernat Gabor (tox maintainer) on Gitter/tox-dev about > this issue and apparently there is no solution. Tox v4 has some changes > planned but is far away. > > > * --discover cannot be used to enforce python detection > * tox own interpreter version cannot be used to enforce version being > picked (at least this is what the maintainer told me) > > Sadly, after spending a good number of hours on that I am more confused > that I was when I started. > > With Zuul nodepool images that can change over night on multiple zuul > servers (at least 4 i use), it seems that tox-* jobs are joing to be a > permanent source of surprises, where we can discover that what they say > they test is not really what they did. In the case of linting, without a specific python version set, they are doing what you asked: run the linters without a specific version of python and use what is available. The problem I described earlier was asking for a specific python version and not getting that version. Running tox -e linters does not ask for a specific version. The workaround you described above is the actual tox solution to this problem. You should run py38-linters if that is your intent. > > Anyone missing a `make foo` command? 
;)
> 
> Cheers
> Sorin

From fungi at yuggoth.org  Tue May 12 15:26:10 2020
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 12 May 2020 15:26:10 +0000
Subject: Tox and missing interpreters (was: Tox basepython and Python3)
In-Reply-To: <018fc56c2cf9d795de647828917eb442bbe0f7bc.camel@redhat.com>
References: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> <20200511182024.euqmssxm2gtwg5ch@yuggoth.org> <8ac0ed30b52e401337445486cc80de1dcfc9bce1.camel@redhat.com> <018fc56c2cf9d795de647828917eb442bbe0f7bc.camel@redhat.com>
Message-ID: <20200512152610.7gx7xdbcd444wz3o@yuggoth.org>

On 2020-05-12 13:43:12 +0100 (+0100), Sean Mooney wrote:
> On Tue, 2020-05-12 at 10:48 +0100, Stephen Finucane wrote:
> > On Mon, 2020-05-11 at 18:20 +0000, Jeremy Stanley wrote:
> > > On a related note, the proliferation of tested Python versions has
> > > led many projects to enable the skip_missing_interpreters option in
> > > their tox.ini files. Please don't, this is just plain DANGEROUS.
> why is this dangerous? it won't cause the CI jobs to be skipped, since
> we guarantee the interpreter will be present

Yes, the current tox-py38 in zuul-jobs sets "python_version: 3.8" in the vars list which the parent job passes to the ensure-python role. This is true for all the current tox-pyXY jobs there except tox-py27 (I'm not entirely sure why the discrepancy). The ensure-python role, in its present state, will make sure the relevant pythonX.Y and pythonX.Y-dev packages are installed on Debian and Ubuntu nodes, but does nothing on other platforms. So it's probably fairly safe for upstream testing with Zuul if the project is only inheriting from the predefined tox-py3* jobs and using the default (Ubuntu based) nodeset. If the project defines its own job and neglects to set python_version, or doesn't descend from the generic tox job in zuul-jobs, or sets a different nodeset, then you'll get a successful build which ran no tests. That's a lot of caveats.
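For illustration, a job that pins the interpreter explicitly might look like the following sketch (the job name and tox environment are made up; python_version is the variable the ensure-python role consumes, as described above):

```yaml
- job:
    name: myproject-tox-linters-py38
    parent: tox
    vars:
      tox_envlist: linters
      # Consumed by ensure-python; on Debian/Ubuntu nodes this installs
      # python3.8 rather than silently running whatever is present.
      python_version: 3.8
```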
We saw this first hand when OpenStack initially added its own py38 jobs, and have seen it crop up in third-party CI systems (as well as confuse some new developers when they're running tox locally). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue May 12 15:53:38 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 May 2020 10:53:38 -0500 Subject: Tox basepython and Python3 In-Reply-To: <2a6e3b19-1a3a-4943-8089-c0320cf6b7ff@www.fastmail.com> References: <29487e7a-6d49-4ed7-b31e-f28e0e1b6696@www.fastmail.com> <4318854b4524f1cbe8b52c2e41888b52ad3169ef.camel@redhat.com> <273A2312-37C1-4945-9903-66A76DCDFC71@redhat.com> <2a6e3b19-1a3a-4943-8089-c0320cf6b7ff@www.fastmail.com> Message-ID: <17209976d21.11ac3aba752731.3893731640072152837@ghanshyammann.com> ---- On Tue, 12 May 2020 10:01:40 -0500 Clark Boylan wrote ---- > On Tue, May 12, 2020, at 7:44 AM, Sorin Sbarnea wrote: > > I recently discovered that this problem goes even deeper, please read > > all comments on https://github.com/tox-dev/tox/issues/1565 > > > > Due to this it seems to be impossible to define zuul jobs that use a > > specific python version regardless the environment name. If you want to > > force "linters" job to use python3.8 only under CI/CD, is impossible. > > > > Using basepython=pythonX.Y combined with > > ignore_basepython_conflict=False seems the only way to enforce version > > use, and it comes at the cost of not being flexible for developers (as > > they may not have the exact version that we want to use on CI jobs). > > > > While for unittest jobs we do use pyXY in the environment name, we do > > not have the same for "linters", "docs". Alternative to add > > `{pyXY}-linters` seems to be the only option to trick it. 
> > I don't think that is a trick, this is an intentional feature of tox to solve the problem you have. If you want a specific version of python to be used tox aims to help you do that via the pyXY* environments. https://tox.readthedocs.io/en/latest/config.html#tox-environments clearly describes this behavior. > > > > > Another (dangerous) approach would be to assure that the only python > > version available on each nodeset used by tox job is the one we want to > > use for testing. > > > > I engaged with Bernat Gabor (tox maintainer) on Gitter/tox-dev about > > this issue and apparently there is no solution. Tox v4 has some changes > > planned but is far away. > > > > > > * --discover cannot be used to enforce python detection > > * tox own interpreter version cannot be used to enforce version being > > picked (at least this is what the maintainer told me) > > > > Sadly, after spending a good number of hours on that I am more confused > > that I was when I started. > > > > With Zuul nodepool images that can change over night on multiple zuul > > servers (at least 4 i use), it seems that tox-* jobs are joing to be a > > permanent source of surprises, where we can discover that what they say > > they test is not really what they did. > > In the case of linting, without a specific python version set, they are doing what you asked: run the linters without a specific version of python and use what is available. The problem I described earlier was asking for a specific python version and not getting that version. Running tox -e linters does not ask for a specific version. > > The workaround you described above is the actual tox solution to this problem. You should run py38-linters if that is your intent. Correct. For all other tox env like for unit tests which can have a conflict for python version, we already use ignore_basepython_conflict=True with tox=3.1 as min version in all projects (if I have not missed any). 
That is what we did during the py2.7 drop, where we defined basepython=python3 in the common testenv.

-gmann

> > 
> > Anyone missing a `make foo` command? ;)
> > 
> > Cheers
> > Sorin

From gmann at ghanshyammann.com  Tue May 12 16:14:43 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 12 May 2020 11:14:43 -0500
Subject: [all] pep8 job failing due to flake8 3.8.0
Message-ID: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com>

Hello Everyone.

You might have noticed that the pep8 job for a few (or most) of the projects started failing.

That is because the new flake8 version 3.8.0 pulled in a new pycodestyle with new rules. Hacking capped flake8 at <4.0.0, not at the minor version within the 3.* series.

The new hacking version 3.0.1 is released, which caps flake8<3.8.0.
Thanks to dtantsur and stephenfin for > tacking care of it. > > To fix your pep8 job you can, > > - Either fix the pep8 error in code if easy and fast to fix. > > - Or bump the hacking minimum version to 3.0.1. I have proposed it for a few projects failing pep8. > - https://review.opendev.org/#/q/topic:hacking-fix+(status:open+OR+status:merged) > > If pep8 job is passing then you do not need to do anything. The existing hacking version cap will work fine. > > -gmann This is also failing on some stable branches that had not moved to hacking 3.0 yet. In this case, it may be better to add a flake8 cap to the repo's test-requirements.txt file rather than backporting a major bump in hacking and dealing with the need to make a lot of code changes. Here is an example of that approach: https://review.opendev.org/#/c/727265/ From haleyb.dev at gmail.com Tue May 12 17:00:11 2020 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 12 May 2020 13:00:11 -0400 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> Message-ID: On 5/12/20 12:25 PM, Sean McGinnis wrote: > On 5/12/20 11:14 AM, Ghanshyam Mann wrote: >> Hello Everyone. >> >> You might have noticed that few or most of the projects pep8 job >> started failing. >> >> That is because flake8 new version 3.8.0 added the new pycodestyle >> with new rules. Hacking capped >> it with 4.0.0 version not with the minor version for 3.*. >> >> The new hacking version 3.0.1 is released which cap the flake8<3.8.0. >> Thanks to dtantsur and stephenfin for >> tacking care of it. >> >>   To fix your pep8 job you can, >> >> - Either fix the pep8 error in code if easy and fast to fix. >> >> - Or bump the hacking minimum version to 3.0.1. I have proposed it for >> a few projects failing pep8. >>    - >> https://review.opendev.org/#/q/topic:hacking-fix+(status:open+OR+status:merged) >> >> >> If pep8 job is passing then you do not need to do anything. 
The >> existing hacking version cap will work fine. >> >> -gmann > > This is also failing on some stable branches that had not moved to > hacking 3.0 yet. In this case, it may be better to add a flake8 cap to > the repo's test-requirements.txt file rather than backporting a major > bump in hacking and dealing with the need to make a lot of code changes. > > Here is an example of that approach: > > https://review.opendev.org/#/c/727265/ I found in the neutron stable/ussuri repo that capping flake8<3.8.0 didn't work, but capping pycodestyle did. So that's another option. -pycodestyle>=2.0.0 # MIT +pycodestyle>=2.0.0,<2.6.0 # MIT https://review.opendev.org/#/c/727274/ -Brian From gmann at ghanshyammann.com Tue May 12 17:18:09 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 May 2020 12:18:09 -0500 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> Message-ID: <17209e4cf44.bbf6bf9856596.3530296876704322268@ghanshyammann.com> ---- On Tue, 12 May 2020 11:25:14 -0500 Sean McGinnis wrote ---- > On 5/12/20 11:14 AM, Ghanshyam Mann wrote: > > Hello Everyone. > > > > You might have noticed that few or most of the projects pep8 job started failing. > > > > That is because flake8 new version 3.8.0 added the new pycodestyle with new rules. Hacking capped > > it with 4.0.0 version not with the minor version for 3.*. > > > > The new hacking version 3.0.1 is released which cap the flake8<3.8.0. Thanks to dtantsur and stephenfin for > > tacking care of it. > > > > To fix your pep8 job you can, > > > > - Either fix the pep8 error in code if easy and fast to fix. > > > > - Or bump the hacking minimum version to 3.0.1. I have proposed it for a few projects failing pep8. > > - https://review.opendev.org/#/q/topic:hacking-fix+(status:open+OR+status:merged) > > > > If pep8 job is passing then you do not need to do anything. The existing hacking version cap will work fine. 
> > > > -gmann > > This is also failing on some stable branches that had not moved to > hacking 3.0 yet. In this case, it may be better to add a flake8 cap to > the repo's test-requirements.txt file rather than backporting a major > bump in hacking and dealing with the need to make a lot of code changes. > > Here is an example of that approach: > > https://review.opendev.org/#/c/727265/ It will only fail stable/ussuri not older stable as hacking flake8 cap issue is released in hacking 2.0 which is in ussuri. So all other stable branches will be using hacking <1.20 which does not have this issue. having flake8 in project test-requirement also need more maintainance to keep both place (hacking as well as project's test-requirements) in sync otherwise it can break anytime. example: https://review.opendev.org/#/c/727206/ -gmann > > > From smooney at redhat.com Tue May 12 17:20:19 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 12 May 2020 18:20:19 +0100 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> Message-ID: <1030736bb14fa90970d877fa5753ed3ffe4386f4.camel@redhat.com> On Tue, 2020-05-12 at 13:00 -0400, Brian Haley wrote: > On 5/12/20 12:25 PM, Sean McGinnis wrote: > > On 5/12/20 11:14 AM, Ghanshyam Mann wrote: > > > Hello Everyone. > > > > > > You might have noticed that few or most of the projects pep8 job > > > started failing. > > > > > > That is because flake8 new version 3.8.0 added the new pycodestyle > > > with new rules. Hacking capped > > > it with 4.0.0 version not with the minor version for 3.*. > > > > > > The new hacking version 3.0.1 is released which cap the flake8<3.8.0. > > > Thanks to dtantsur and stephenfin for > > > tacking care of it. > > > > > > To fix your pep8 job you can, > > > > > > - Either fix the pep8 error in code if easy and fast to fix. > > > > > > - Or bump the hacking minimum version to 3.0.1. 
I have proposed it for
> > > a few projects failing pep8.
> > > - https://review.opendev.org/#/q/topic:hacking-fix+(status:open+OR+status:merged)
> > > 
> > > If pep8 job is passing then you do not need to do anything. The
> > > existing hacking version cap will work fine.
> > > 
> > > -gmann
> > 
> > This is also failing on some stable branches that had not moved to
> > hacking 3.0 yet. In this case, it may be better to add a flake8 cap to
> > the repo's test-requirements.txt file rather than backporting a major
> > bump in hacking and dealing with the need to make a lot of code changes.
> > 
> > Here is an example of that approach:
> > 
> > https://review.opendev.org/#/c/727265/
> 
> I found in the neutron stable/ussuri repo that capping flake8<3.8.0
> didn't work, but capping pycodestyle did. So that's another option.
> 
> -pycodestyle>=2.0.0 # MIT
> +pycodestyle>=2.0.0,<2.6.0 # MIT

Yeah, I created a test env on ubuntu 20.04 with python 3.8 and I had the issue there, even after downgrading flake8 (to 3.5.0 at one point).

If I do
/opt/repos/nova$ .tox/venv/bin/pip install -U pycodestyle\<2.6 flake8\<3.8
and then run pep8 in nova, it still fails; that did not resolve the issue.

ubuntu at devstack-ubuntu-latest:/opt/repos/nova$ .tox/venv/bin/pip freeze | grep -E "flake8|pycodestyle"
flake8==3.7.9
pycodestyle==2.5.0

and I went older too:
> > https://review.opendev.org/#/c/727274/ > > -Brian > From gmann at ghanshyammann.com Tue May 12 17:23:15 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 May 2020 12:23:15 -0500 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> Message-ID: <17209e979b9.1002742a556789.8183786768349286689@ghanshyammann.com> ---- On Tue, 12 May 2020 12:00:11 -0500 Brian Haley wrote ---- > On 5/12/20 12:25 PM, Sean McGinnis wrote: > > On 5/12/20 11:14 AM, Ghanshyam Mann wrote: > >> Hello Everyone. > >> > >> You might have noticed that few or most of the projects pep8 job > >> started failing. > >> > >> That is because flake8 new version 3.8.0 added the new pycodestyle > >> with new rules. Hacking capped > >> it with 4.0.0 version not with the minor version for 3.*. > >> > >> The new hacking version 3.0.1 is released which cap the flake8<3.8.0. > >> Thanks to dtantsur and stephenfin for > >> tacking care of it. > >> > >> To fix your pep8 job you can, > >> > >> - Either fix the pep8 error in code if easy and fast to fix. > >> > >> - Or bump the hacking minimum version to 3.0.1. I have proposed it for > >> a few projects failing pep8. > >> - > >> https://review.opendev.org/#/q/topic:hacking-fix+(status:open+OR+status:merged) > >> > >> > >> If pep8 job is passing then you do not need to do anything. The > >> existing hacking version cap will work fine. > >> > >> -gmann > > > > This is also failing on some stable branches that had not moved to > > hacking 3.0 yet. In this case, it may be better to add a flake8 cap to > > the repo's test-requirements.txt file rather than backporting a major > > bump in hacking and dealing with the need to make a lot of code changes. > > > > Here is an example of that approach: > > > > https://review.opendev.org/#/c/727265/ > > I found in the neutron stable/ussuri repo that capping flake8<3.8.0 > didn't work, but capping pycodestyle did. 
So that's another option. > > -pycodestyle>=2.0.0 # MIT > +pycodestyle>=2.0.0,<2.6.0 # MIT > > https://review.opendev.org/#/c/727274/ I will say remove rhe pycodestyle from neutron test-reqruiement and let hacking via flake8 cap handle the compatible pycodestyle. Otherwise we end up maintaining and fixing it project side for future. -gmann > > -Brian > > From smooney at redhat.com Tue May 12 17:27:46 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 12 May 2020 18:27:46 +0100 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: <17209e979b9.1002742a556789.8183786768349286689@ghanshyammann.com> References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> <17209e979b9.1002742a556789.8183786768349286689@ghanshyammann.com> Message-ID: <50873e57273ab3f0d93fdf0508420049dc34fc7d.camel@redhat.com> On Tue, 2020-05-12 at 12:23 -0500, Ghanshyam Mann wrote: > > > ---- On Tue, 12 May 2020 12:00:11 -0500 Brian Haley wrote ---- > > On 5/12/20 12:25 PM, Sean McGinnis wrote: > > > On 5/12/20 11:14 AM, Ghanshyam Mann wrote: > > >> Hello Everyone. > > >> > > >> You might have noticed that few or most of the projects pep8 job > > >> started failing. > > >> > > >> That is because flake8 new version 3.8.0 added the new pycodestyle > > >> with new rules. Hacking capped > > >> it with 4.0.0 version not with the minor version for 3.*. > > >> > > >> The new hacking version 3.0.1 is released which cap the flake8<3.8.0. > > >> Thanks to dtantsur and stephenfin for > > >> tacking care of it. > > >> > > >> To fix your pep8 job you can, > > >> > > >> - Either fix the pep8 error in code if easy and fast to fix. > > >> > > >> - Or bump the hacking minimum version to 3.0.1. I have proposed it for > > >> a few projects failing pep8. > > >> - > > >> https://review.opendev.org/#/q/topic:hacking-fix+(status:open+OR+status:merged) > > >> > > >> > > >> If pep8 job is passing then you do not need to do anything. The > > >> existing hacking version cap will work fine. 
> > >> > > >> -gmann > > > > > > This is also failing on some stable branches that had not moved to > > > hacking 3.0 yet. In this case, it may be better to add a flake8 cap to > > > the repo's test-requirements.txt file rather than backporting a major > > > bump in hacking and dealing with the need to make a lot of code changes. > > > > > > Here is an example of that approach: > > > > > > https://review.opendev.org/#/c/727265/ > > > > I found in the neutron stable/ussuri repo that capping flake8<3.8.0 > > didn't work, but capping pycodestyle did. So that's another option. > > > > -pycodestyle>=2.0.0 # MIT > > +pycodestyle>=2.0.0,<2.6.0 # MIT > > > > https://review.opendev.org/#/c/727274/ > > I will say remove rhe pycodestyle from neutron test-reqruiement and let hacking > via flake8 cap handle the compatible pycodestyle. Otherwise we end up maintaining and > fixing it project side for future. just in case it was nto seen from my minimal testing if i create a tox env with master. then i downgrade both flake8 and pycodestyle in the virtual enve the nova tests still fail. do i need to explcitly down grade hacking too? 
.tox/venv/bin/pip install -U pycodestyle\<2.6 flake8\<3.8 and re running the test is failing for me even if i do find -name *.pyc -delete to make sure there are no .pyc files cached anywhere > > > -gmann > > > > > > -Brian > > > > > From gmann at ghanshyammann.com Tue May 12 17:41:28 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 May 2020 12:41:28 -0500 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: <1030736bb14fa90970d877fa5753ed3ffe4386f4.camel@redhat.com> References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> <1030736bb14fa90970d877fa5753ed3ffe4386f4.camel@redhat.com> Message-ID: <17209fa26b8.104cea6ec57448.1882299448538299593@ghanshyammann.com> ---- On Tue, 12 May 2020 12:20:19 -0500 Sean Mooney wrote ---- > On Tue, 2020-05-12 at 13:00 -0400, Brian Haley wrote: > > On 5/12/20 12:25 PM, Sean McGinnis wrote: > > > On 5/12/20 11:14 AM, Ghanshyam Mann wrote: > > > > Hello Everyone. > > > > > > > > You might have noticed that few or most of the projects pep8 job > > > > started failing. > > > > > > > > That is because flake8 new version 3.8.0 added the new pycodestyle > > > > with new rules. Hacking capped > > > > it with 4.0.0 version not with the minor version for 3.*. > > > > > > > > The new hacking version 3.0.1 is released which cap the flake8<3.8.0. > > > > Thanks to dtantsur and stephenfin for > > > > tacking care of it. > > > > > > > > To fix your pep8 job you can, > > > > > > > > - Either fix the pep8 error in code if easy and fast to fix. > > > > > > > > - Or bump the hacking minimum version to 3.0.1. I have proposed it for > > > > a few projects failing pep8. > > > > - > > > > https://review.opendev.org/#/q/topic:hacking-fix+(status:open+OR+status:merged) > > > > > > > > > > > > If pep8 job is passing then you do not need to do anything. The > > > > existing hacking version cap will work fine. 
> > > > > > > > -gmann > > > > > > This is also failing on some stable branches that had not moved to > > > hacking 3.0 yet. In this case, it may be better to add a flake8 cap to > > > the repo's test-requirements.txt file rather than backporting a major > > > bump in hacking and dealing with the need to make a lot of code changes. > > > > > > Here is an example of that approach: > > > > > > https://review.opendev.org/#/c/727265/ > > > > I found in the neutron stable/ussuri repo that capping flake8<3.8.0 > > didn't work, but capping pycodestyle did. So that's another option. > > > > -pycodestyle>=2.0.0 # MIT > > +pycodestyle>=2.0.0,<2.6.0 # MIT > ya i created a test env on ubuntu 20.04 with python 3.8 and i had the issue and then > if i downgraded flake8 to even 3.5.0 > > if i do > /opt/repos/nova$ .tox/venv/bin/pip install -U pycodestyle\<2.6 flake8\<3.8 > and then run pep8 in nova it still fails not resovle the issue > > ubuntu at devstack-ubuntu-latest:/opt/repos/nova$ .tox/venv/bin/pip freeze | grep -E "flake8|pycodestyle" > flake8==3.7.9 > pycodestyle==2.5.0 > > and i went older too > ubuntu at devstack-ubuntu-latest:/opt/repos/nova$ .tox/venv/bin/pip freeze | grep -E "flake8|pycodestyle" > flake8==3.6.0 > pycodestyle==2.4.0 > > on ubuntu 20.04 under python 3.8 all of the above fail with the same errors for nova. > > so there is something more going on then just flake8 and pycodestyle i think. This might be something else. 
hacking 3.0.1 will pull flake8 3.7.9 and pycodestyle 2.5.0 [1]; that is what I see in the python-novaclient changes:
- https://zuul.opendev.org/t/openstack/build/c9c3ed3010d64af685ce8aecc699864c/log/job-output.txt#538

[1] - https://gitlab.com/pycqa/flake8/-/blob/ee2920d775df18481d638c2da084d229d56f95b9/setup.cfg#L17

> > 
> > https://review.opendev.org/#/c/727274/
> > 
> > -Brian
> 

From gmann at ghanshyammann.com  Tue May 12 17:48:24 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 12 May 2020 12:48:24 -0500
Subject: [all] pep8 job failing due to flake8 3.8.0
In-Reply-To: <17209e4cf44.bbf6bf9856596.3530296876704322268@ghanshyammann.com>
References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> <17209e4cf44.bbf6bf9856596.3530296876704322268@ghanshyammann.com>
Message-ID: <1720a007f26.b99c8f5a57710.8265609877531533213@ghanshyammann.com>

---- On Tue, 12 May 2020 12:18:09 -0500 Ghanshyam Mann wrote ----
> ---- On Tue, 12 May 2020 11:25:14 -0500 Sean McGinnis wrote ----
> > On 5/12/20 11:14 AM, Ghanshyam Mann wrote:
> > > Hello Everyone.
> > > 
> > > You might have noticed that few or most of the projects pep8 job started failing.
> > > 
> > > That is because flake8 new version 3.8.0 added the new pycodestyle with new rules. Hacking capped
> > > it with 4.0.0 version not with the minor version for 3.*.
> > > 
> > > The new hacking version 3.0.1 is released which cap the flake8<3.8.0. Thanks to dtantsur and stephenfin for
> > > tacking care of it.
> > > 
> > > To fix your pep8 job you can,
> > > 
> > > - Either fix the pep8 error in code if easy and fast to fix.
> > > 
> > > - Or bump the hacking minimum version to 3.0.1. I have proposed it for a few projects failing pep8.
> > > - https://review.opendev.org/#/q/topic:hacking-fix+(status:open+OR+status:merged)
> > > 
> > > If pep8 job is passing then you do not need to do anything. The existing hacking version cap will work fine.
> > > > > > -gmann > > > > This is also failing on some stable branches that had not moved to > > hacking 3.0 yet. In this case, it may be better to add a flake8 cap to > > the repo's test-requirements.txt file rather than backporting a major > > bump in hacking and dealing with the need to make a lot of code changes. > > > > Here is an example of that approach: > > > > https://review.opendev.org/#/c/727265/ > > It will only fail on stable/ussuri, not the older stable branches, as the hacking flake8 cap issue was released > in hacking 2.0, which is in ussuri. So all other stable branches will be using hacking <1.20 which does not > have this issue. > > Having flake8 in a project's test-requirements also needs more maintenance to keep both places (hacking as well as the project's > test-requirements) in sync, otherwise it can break anytime. > example: https://review.opendev.org/#/c/727206/ Or even stable/ussuri for many projects has an older hacking, so we are safe there. I checked neutron and keystone at least. - https://github.com/openstack/neutron/blob/86b57718966dc2165b4cfb54bcae21b515ffe68f/test-requirements.txt#L4 So we can backport the hacking min version bump only to the projects that have a hacking >2.0 cap in stable/ussuri. All older stable branches are safe unless projects explicitly have flake8 (in the ironic case) or pycodestyle (in the neutron case). -gmann > > -gmann > > > > > > From haleyb.dev at gmail.com Tue May 12 18:07:43 2020 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 12 May 2020 14:07:43 -0400 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: <17209e979b9.1002742a556789.8183786768349286689@ghanshyammann.com> References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> <17209e979b9.1002742a556789.8183786768349286689@ghanshyammann.com> Message-ID: <9658a301-824e-4271-4ab1-02b742cc4dcc@gmail.com> On 5/12/20 1:23 PM, Ghanshyam Mann wrote: > > > This is also failing on some stable branches that had not moved to > > hacking 3.0 yet.
In this case, it may be better to add a flake8 cap to > > > the repo's test-requirements.txt file rather than backporting a major > > > bump in hacking and dealing with the need to make a lot of code changes. > > > > > > Here is an example of that approach: > > > > > > https://review.opendev.org/#/c/727265/ > > > > I will say remove the pycodestyle from the neutron test-requirements and let hacking > > via the flake8 cap handle the compatible pycodestyle. Otherwise we end up maintaining and > > fixing it on the project side in future. So the problem in this case was having both of these in test-requirements.txt: flake8>=3.6.0,<3.8.0 # MIT pycodestyle>=2.0.0 # MIT Test versions were: flake8==3.7.9 pycodestyle==2.6.0 Removing the pycodestyle line altogether worked; however, it pulled pycodestyle 2.5.0 then. -Brian From gmann at ghanshyammann.com Tue May 12 19:43:07 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 May 2020 14:43:07 -0500 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: <9658a301-824e-4271-4ab1-02b742cc4dcc@gmail.com> References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> <17209e979b9.1002742a556789.8183786768349286689@ghanshyammann.com> <9658a301-824e-4271-4ab1-02b742cc4dcc@gmail.com> Message-ID: <1720a69856b.1110a223e60913.4505198435459269895@ghanshyammann.com> ---- On Tue, 12 May 2020 13:07:43 -0500 Brian Haley wrote ---- > On 5/12/20 1:23 PM, Ghanshyam Mann wrote: > > > > > > This is also failing on some stable branches that had not moved to > > > > hacking 3.0 yet.
In this case, it may be better to add a flake8 cap to > > > > the repo's test-requirements.txt file rather than backporting a major > > > > bump in hacking and dealing with the need to make a lot of code changes. > > > > > > > > Here is an example of that approach: > > > > > > > > https://review.opendev.org/#/c/727265/ > > > > > > I found in the neutron stable/ussuri repo that capping flake8<3.8.0 > > > didn't work, but capping pycodestyle did. So that's another option. > > > > > > -pycodestyle>=2.0.0 # MIT > > > +pycodestyle>=2.0.0,<2.6.0 # MIT > > > > > > https://review.opendev.org/#/c/727274/ Adding what we discussed on IRC about the workable solution. flake8 2.6.2, which is pulled by the older hacking in stable/train and earlier, does not have a cap for pycodestyle, so if test-requirements.txt has "pycodestyle>***" then you need to cap it explicitly with what Brian proposed in https://review.opendev.org/#/c/727274/ otherwise flake8 2.6.2 will pull the new pycodestyle and break. -gmann > > > > > > > > I will say remove the pycodestyle from the neutron test-requirements and let hacking > > > via the flake8 cap handle the compatible pycodestyle. Otherwise we end up maintaining and > > > fixing it on the project side in future. > > > > So the problem in this case was having both of these in > > test-requirements.txt: > > > > flake8>=3.6.0,<3.8.0 # MIT > > pycodestyle>=2.0.0 # MIT > > > > Test versions were: > > > > flake8==3.7.9 > > pycodestyle==2.6.0 > > > > Removing the pycodestyle line altogether worked; however, it pulled > > pycodestyle 2.5.0 then.
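To make the capping behaviour discussed above concrete, here is a rough sketch of the resolution logic involved. This is purely illustrative: it is not the real pip resolver, the version lists are just examples, and the exact lower bounds are assumptions. The point is the one made earlier in the thread: a newer flake8 (per the setup.cfg linked upthread) carries an upper bound on pycodestyle, while the older flake8 2.6.2 does not, so with the old flake8 an uncapped pycodestyle line in test-requirements.txt is free to pull the newest, breaking release.

```python
# Illustrative toy resolver: "pick the newest version satisfying the
# constraints". Shows why capping flake8 alone is not enough when the
# installed flake8 has no upper bound on pycodestyle.

def parse(version):
    return tuple(int(part) for part in version.split("."))

def satisfies(version, lower, upper=None):
    v = parse(version)
    return v >= parse(lower) and (upper is None or v < parse(upper))

available = ["2.4.0", "2.5.0", "2.6.0"]  # example pycodestyle releases

# newer flake8 caps pycodestyle (here: >=2.0.0,<2.6.0) -> safe pick
new_flake8_pick = max(
    (v for v in available if satisfies(v, "2.0.0", "2.6.0")), key=parse)

# flake8 2.6.2 effectively constrains only the lower bound -> breaking pick
old_flake8_pick = max(
    (v for v in available if satisfies(v, "2.0.0")), key=parse)

print(new_flake8_pick)  # 2.5.0
print(old_flake8_pick)  # 2.6.0
```

So on branches where the older flake8 is in play, the explicit pycodestyle cap in test-requirements.txt is doing the real work.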
> > -Brian > > From gmann at ghanshyammann.com Tue May 12 21:22:46 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 May 2020 16:22:46 -0500 Subject: [horizon][qa][infra] horizon master gate failure In-Reply-To: <4887479.GXAFRqVoOG@whitebase.usersys.redhat.com> References: <4887479.GXAFRqVoOG@whitebase.usersys.redhat.com> Message-ID: <1720ac4c425.acdc525d62928.7201851687857188262@ghanshyammann.com> ---- On Mon, 11 May 2020 03:47:28 -0500 Luigi Toscano wrote ---- > On Monday, 11 May 2020 06:41:51 CEST Akihiro Motoki wrote: > > Hi, > > > > the horizon master branch is now failing due to a permission denied error while > > installing tempest-horizon into the tempest venv [1]. > > It has been happening since May 9. > > > > The source tempest-horizon repo is located in > > /home/zuul/src/opendev.org/openstack/tempest-horizon/. > > I dug into the details for a while but haven't figured out the root cause > > yet. > > > > I have the following questions on this: > > > > (1) Did we have any changes to the job configuration recently? > > stable/ussuri and master branches are similar but it only happens on > > the master branch. > > I've seen that in another project (ironic) as well, and it was fixed by using > tempest_plugins. > > > > > (2) When I changed the location of the tempest-horizon directory > > specified as tempest-plugins, > > the failure was gone [2]. Is there any recommendation from the tempest team? > > As you can see from the logs, tempest_plugins translates that value to: > > TEMPEST_PLUGINS="/opt/stack/tempest-horizon" > > which is the location where devstack expects to find the code, and where the > deployment copies it, as prepared by the setup-devstack-source-dirs role. > > I can't pinpoint what happened exactly (something in zuul maybe) but the path > currently used isn't the expected path anyway. As you are going to change it, > tempest_plugins is the recommended way (and it works from pike onwards, in > case you need to backport it).
The issue is about the location of the plugins. I am not 100% sure why the old location started failing, which is - /home/zuul/src/opendev.org/openstack/tempest-horizon But it will be fixed by updating the location to /opt/stack/tempest-horizon where devstack already checks out the plugins. Either you can update the location[1] or move to the new var 'tempest_plugins', which is easier and much more readable, along with fixing the issue. [1] https://review.opendev.org/#/c/725539/1 -gmann > > Ciao > -- > Luigi > > > > From gmann at ghanshyammann.com Wed May 13 03:14:28 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 May 2020 22:14:28 -0500 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: <1720a69856b.1110a223e60913.4505198435459269895@ghanshyammann.com> References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> <17209e979b9.1002742a556789.8183786768349286689@ghanshyammann.com> <9658a301-824e-4271-4ab1-02b742cc4dcc@gmail.com> <1720a69856b.1110a223e60913.4505198435459269895@ghanshyammann.com> Message-ID: <1720c06c1fe.f968123d66058.4634045955357610785@ghanshyammann.com> ---- On Tue, 12 May 2020 14:43:07 -0500 Ghanshyam Mann wrote ---- > ---- On Tue, 12 May 2020 13:07:43 -0500 Brian Haley wrote ---- > > On 5/12/20 1:23 PM, Ghanshyam Mann wrote: > > > > > > > > > This is also failing on some stable branches that had not moved to > > > > > hacking 3.0 yet. In this case, it may be better to add a flake8 cap to > > > > > the repo's test-requirements.txt file rather than backporting a major > > > > > bump in hacking and dealing with the need to make a lot of code changes. > > > > > > > > > > Here is an example of that approach: > > > > > > > > > > https://review.opendev.org/#/c/727265/ > > > > > > > > I found in the neutron stable/ussuri repo that capping flake8<3.8.0 > > > > didn't work, but capping pycodestyle did. So that's another option.
> > > > > > > > -pycodestyle>=2.0.0 # MIT > > > > +pycodestyle>=2.0.0,<2.6.0 # MIT > > > > > > > > https://review.opendev.org/#/c/727274/ > > Adding what we discussed on IRC about the workable solution. > > flake8 2.6.2, which is pulled by the older hacking in stable/train and earlier, does not have > a cap for pycodestyle, so if test-requirements.txt has "pycodestyle>***" then you need > to cap it explicitly with what Brian proposed in https://review.opendev.org/#/c/727274/ > > otherwise flake8 2.6.2 will pull the new pycodestyle and break. If your project is using the flake8-import-order plugin then we also need to cap pycodestyle explicitly. flake8-import-order does not cap pycodestyle and pulls the latest pycodestyle; I have proposed a PR there to at least be safe for future versions. - https://github.com/PyCQA/flake8-import-order/pull/172 -gmann > > -gmann > > > > > > I will say remove the pycodestyle from the neutron test-requirements and let hacking > > > via the flake8 cap handle the compatible pycodestyle. Otherwise we end up maintaining and > > > fixing it on the project side in future. > > > > So the problem in this case was having both of these in > > test-requirements.txt: > > > > flake8>=3.6.0,<3.8.0 # MIT > > pycodestyle>=2.0.0 # MIT > > > > Test versions were: > > > > flake8==3.7.9 > > pycodestyle==2.6.0 > > > > Removing the pycodestyle line altogether worked; however, it pulled > > pycodestyle 2.5.0 then. > > -Brian > > From ltomasbo at redhat.com Wed May 13 07:42:28 2020 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Wed, 13 May 2020 09:42:28 +0200 Subject: [neutron] parent ports for trunks being claimed by instances In-Reply-To: <20200512140525.khc32evagyb7366r@firewall> References: <20200512140525.khc32evagyb7366r@firewall> Message-ID: Hi Nate, I think I'm getting confused by the use of "instances" here. With instances, you refer to VMs? Note there is a need to first create the parent, then the trunk and then the instance using that parent.
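To illustrate that ordering, and what deletion does at each step, here is a tiny toy model. This is just an illustration of the lifecycle rules, not the real Neutron/Nova code or API, and all names are placeholders.

```python
# Toy model of the trunk ordering: the parent port must exist before the
# trunk, and the trunk before the VM. Deleting the VM only unbinds a
# user-created parent port; it deletes neither the port nor the trunk.
class Cloud:
    def __init__(self):
        self.ports = {}
        self.trunks = {}
        self.servers = {}

    def create_port(self, port_id, created_by_nova=False):
        self.ports[port_id] = {"bound": False, "created_by_nova": created_by_nova}

    def create_trunk(self, trunk_id, parent_port_id):
        assert parent_port_id in self.ports, "parent port must exist first"
        self.trunks[trunk_id] = parent_port_id

    def boot_server(self, server_id, port_id):
        assert port_id in self.ports, "port must exist before boot"
        self.ports[port_id]["bound"] = True
        self.servers[server_id] = port_id

    def delete_server(self, server_id):
        port_id = self.servers.pop(server_id)
        self.ports[port_id]["bound"] = False  # port just goes DOWN
        if self.ports[port_id]["created_by_nova"]:
            # nova only removes ports it created itself
            del self.ports[port_id]

cloud = Cloud()
cloud.create_port("parent-port")             # 1. create the parent port
cloud.create_trunk("trunk0", "parent-port")  # 2. create the trunk on it
cloud.boot_server("vm0", "parent-port")      # 3. boot the VM with the parent
cloud.delete_server("vm0")                   # 4. delete the VM

print("parent-port" in cloud.ports)  # True: the user-created port survives
print("trunk0" in cloud.trunks)      # True: so does the trunk
```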
Also, deleting the VM is not a problem, it will just move the port to down (and the trunk). The problem is deleting the parent port (if it is part of the trunk). I think the problem here is that the VM should not try to delete the port as port was existing before VM creation, right? Note too that if the parent port is not "eligible" after being part of a trunk, how can you boot a VM with a parent port given that there is a need for having the trunk before booting the VM? Cheers, Luis On Tue, May 12, 2020 at 4:13 PM Nate Johnston wrote: > Neutron developers, > > I am currently working on an issue with trunk ports that has come up a few > times > in my direct experience, and I hope that we can create a long term > solution. I > am hoping that developers with experience in trunk ports can validate my > approach here, especially regarding fixing current behavior without > introducing > an API regression. > > By way of introduction to the specifics of the issue, let me blockquote > from the > LP bug I raised for this [1]: > > ---- > > When you create a trunk in Neutron you create a parent port for the > trunk and > attach the trunk to the parent. Then subports can be created on the > trunk. When > instances are created on the trunk, first a port is created and then > an instance > is associated with a free port. It looks to me that's this is the > oversight in > the logic. > > From the perspective of the code, the parent port looks like any other > port > attached to the trunk bridge. It doesn't have an instance attached to > it so it > looks like it's not being used for anything (which is technically > correct). So > it becomes an eligible port for an instance to bind to. That is all > fine and > dandy until you go to delete the instance and you get the "Port > [port-id] is > currently a parent port for trunk [trunk-id]" exception just as > happened here. 
> Anecdotally, it's seems rare that an instance will actually bind to > it, but that > is what happened for the user in this case and I have had several > pings over the > past year about people in a similar state. > > I propose that when a port is made parent port for a trunk, that the > trunk be > established as the owner of the port. That way it will be ineligible > for > instances seeking to bind to the port. > > ---- > > Clearly the above behavior indicates buggy issue that should be rectified > in > master and stable branches. Nobody wants a VM that can't be fully deleted > because the port can't ever be deleted. This is especially egregious when > it > causes heat stack deletion failures. > > I am mostly concerned that by adding the trunk as an owner of the parent > port, > then the trunk will need to be deleted before the parent port can be > deleted, > otherwise a PortInUse error will occur when the port is deleted (i.e. on > tempest > test teardown). That to me seems indicative of an inadvertent API > change. Do > you think it's all right to say that if you delete a port that is a parent > port > of a trunk, and that trunk has no other subports, that the trunk deletion > is > implicit? Is that the lowest impact to the API that we can incur to > resolve > this issue? > > Your wisdom is appreciated, > > Nate > > [1] https://bugs.launchpad.net/neutron/+bug/1878031 > > > -- LUIS TOMÁS BOLÍVAR Senior Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ruslanas at lpic.lt Wed May 13 08:05:23 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 13 May 2020 10:05:23 +0200 Subject: [TripleO][Rdo][Train] Deploy OpenStack using only provisioning network In-Reply-To: References: Message-ID: I see interesting errors in merging layers: "Trying to pull 10.120.129.222:8787/tripleotrain/centos-binary-swift-object:current-tripleo...", "Copying blob sha256:ac006fc45022b6ea54439313e919f34caa69b5ddc8477bf8df95d3ecc153c7a7", "Copying blob sha256:e7bd43c6fde6f22a702045f429e3c09be300bb787f884d81808ff681f9ef95c5", "Copying config sha256:699c142370d645ccbf9d41dfb0f2f841a8d3db4f175fee8999dafb07b10f174a", "net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", "net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", "ovsdb-tool: failed to read schema: \"/usr/share/openvswitch/ovn-sb.ovsschema\" could not be read as JSON (error opening \"/usr/share/openvswitch/ovn-sb.ovsschema\": No such file or directory)", "Deprecated: Option \"logdir\" from group \"DEFAULT\" is deprecated. Use option \"log-dir\" from group \"DEFAULT\".", "+ sudo -E kolla_set_configs", "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", I am not even a novice user of containers, I do not know if it is something "ok" or something which I should care about? my main issues with this: 1) when deploying overcloud, it do not add ssh key to authorized host, and gets timeout, but I can work with that. 1.solution) while running installation I ssh into it, before ansible tries... shitty workaround, but should be ok for POC, need to fix it also. 
*2) As you see from config files, I use local undercloud as repo for container images, but it is not able to fetch data from there, as it is marked as secure, but undercloud configures it as unsecure. Can I somehow specify to installed, so it would modify /etc/container(s)/repositories.conf to add undercloud IP and url to insecure repo list. cause it helps tp fix my issues. but then cannot proceed as it has part of things up, so I need to do fresh setup, which is without insecure repos.* *2.solution) no ideas.* 3) Problem: when setting up undercloud with proxy variables exported, it adds them into containers, but even I have no_prpoxy which has idrac IP specified, or range, ironic-conductor sends request to redfish using proxy... 3.solution) I think solution would be to use undercloud repo (predownload images) and make undercloud install from it, but when I even add 'insecure' repos value to $local_ip it drops error [1] trying to connect to repo....docker.io Any thoughts? [1] Retrying (Retry(total=7, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)': /v2 ... raise ConnectionError(e, request=request) ConnectionError: HTTPSConnectionPool(host='registries-1.docker.io', port=443): Max retries exceeded with url: /v2/ (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Or should I start several "threads"? to keep each in separate track? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bence.romsics at gmail.com Wed May 13 10:06:03 2020 From: bence.romsics at gmail.com (Bence Romsics) Date: Wed, 13 May 2020 12:06:03 +0200 Subject: [neutron] parent ports for trunks being claimed by instances In-Reply-To: <20200512140525.khc32evagyb7366r@firewall> References: <20200512140525.khc32evagyb7366r@firewall> Message-ID: Hi Nate, We did have a chat on #neutron yesterday [1], but let me summarize my thoughts here for a broader audience and add some details I tried since: Let me differentiate three API workflows: A) workflow 1) in order: * create net and subnet for parent port * create parent port (more precisely a port that will become a parent port in the next step) * create trunk with parent port (and optionally subports) * boot vm with parent port (server create --nic port-id=) side note: Neither the trunk, nor subports should ever be passed to 'server create'. Nova only ever knows about the parent port. Generally nova does not treat this port specially in any way, except for ovs, where (based on what neutron tells nova in port attributes) nova plugs the vif into a trunk bridge instead of br-int. In this case the parent port existed before nova ever learned about it, therefore nova does not want to delete it when deleting the server. As with other use cases where a port needs to be somehow special, but we did not want to change nova to support it, this is the canonical way to use trunk ports. Here you lose the convenience of '--nic net-id='. And you lose some flexibility by not (always) being able to turn a plain old port into a trunk parent. I think the bug is not present when using this workflow so this is a possible workaround. workflow 2) * create net and subnet * boot vm with network (server --nic port-id=) * delete vm Here no trunk is created at all, so the bug is obviously not present. I just wanted to mention that in this case, nova creates the port for the server and it also wants to delete the port when deleting the server. 
Originally I think we considered this as unsupported (for trunks), and that not being a problem since it is only for convenience. workflow 3) in order: * create net and subnet * boot vm with net (server create --nic net-id=) * find the port nova created * create trunk with port found as parent (and optionally subports) * delete the vm This workflow reproduces (part of) the bug. Please see [2] for exact commands. Not all (ml2+trunk) drivers allow trunk creation on an already bound port. Here I used ovn which allows it, but ovs would not. While we may consider this a bug, and fix it, please note that the error would not occur if a port had been pre-created. IIRC we considered the ability to turn a bound port into a trunk parent a feature, but we considered the use of '--nic net-id=' only a convenience. Obviusly we may argue that we lost some flexibility since the user must create this port before boot and before he may have realized he wanted a trunk. B) The bug report mentions a "second interface", which never happens even in workflow 3) above. So I suspect there's something else going on beyond workflow 3). I have no proof but a hunch that the API is used in some way that it is not supposed to. Unfortunately the trunk API is not (and could not be made) intuitive and some important error conditions are silent, because of the enormous work they would have entailed. One such situation I see quite frequently is that people try to add subports to a server boot. But that is not needed and should never be done. That is an error but unfortunately with no immediate error message. But this is just based on a hunch. It still would be nice to know what actual API workflow led to the errors reported in the bug. I'd be happy to review it and incorporate the learnings into our docs. 
Cheers, Bence [1] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2020-05-12.log.html#t2020-05-12T12:27:38 [2] https://etherpad.opendev.org/p/lp-1878031 From ruslanas at lpic.lt Wed May 13 10:44:58 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 13 May 2020 12:44:58 +0200 Subject: [TripleO][Rdo][Train] Deploy OpenStack using only provisioning network In-Reply-To: References: Message-ID: Also when running deploy command, I have faced: "Trying to pull 10.120.129.222:8787/tripleotrain/centos-binary-swift-container:current-tripleo...", "Copying blob sha256:7c1aaf9e1dd5d21e366f1e9ec6bfd7251dedf4ad1c9fa21e66bb7b17e7060425", "Copying blob sha256:f0dddb4a2243d95a1ffde5ede166e7168af9d8345dd8412c3330e55071ddb852", "Copying blob sha256:3327a60ab9298da45b84a8f68ddd48fdff04deff5ec11fcbf140ded19da1acd6", "Copying blob sha256:73882c28398930d66a7adc8b2da4dd894244368882eb4a0c37abb1b38271fe03", "Copying blob sha256:6d682c3c6cb3990d35d979cf946f3deaf99a66483a1d9d70734cff311c0c30dd", "Copying blob sha256:c9ee96c73701c302518a39e01d236cc38b906b215756a735df41a9a02e290e70", "Copying blob sha256:ad94aaa7c1074c86e1c3d477395bff3074dd5bab2b42053301ffa165e4982d3f", "Copying blob sha256:31344becd6f560f8c04495aff94b92ec2191f900c66647b5a2fd9cdabb5dd25a", "Copying config sha256:a9c99e00250bc9038482dc2d6fb425db71a5a2dce54b28e8b433defc146cec91", "WARNING: The same type, major and minor should not be used for multiple devices.", "Error: cannot exec into container that is not running: container state improper", * "Error: exec failed: container_linux.go:349: starting container process caused \"process_linux.go:101: executing setns process caused \\\"exit status 1\\\"\": OCI runtime error",* "+ command -v python3", "+ command -v python2", "+ python2 /container-config-scripts/placement_wait_for_service.py", "+ python2 /container-config-scripts/nova_wait_for_api_service.py" could you help me to run that container manually? 
is it just podman run IMAGE_ID or smth else? as parameters which I could see from? which file? -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed May 13 11:18:09 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 13 May 2020 13:18:09 +0200 Subject: [neutron] parent ports for trunks being claimed by instances In-Reply-To: References: <20200512140525.khc32evagyb7366r@firewall> Message-ID: <20200513111809.4f4lyrd7zg3mecbr@skaplons-mac> Hi, On Wed, May 13, 2020 at 09:42:28AM +0200, Luis Tomas Bolivar wrote: > Hi Nate, > > I think I'm getting configure with the use of "instances" here. With > instances, you refer to VMs? Note there is a need to first create the > parent, then the trunk and then the instance using that parent. > > Also, deleting the VM is not a problem, it will just move the port to down > (and the trunk). The problem is deleting the parent port (if it is part of > the trunk). I think the problem here is that the VM should not try to > delete the port as port was existing before VM creation, right? It is like that if You create port in neutron, and then pass this port to the Nova when booting instance. But if You first create instance by giving network_id to Nova, it will create port on this network for You and that port will be deleted by Nova when instance will be deleted. > > Note too that if the parent port is not "eligible" after being part of a > trunk, how can you boot a VM with a parent port given that there is a need > for having the trunk before booting the VM? > > Cheers, > Luis > > On Tue, May 12, 2020 at 4:13 PM Nate Johnston > wrote: > > > Neutron developers, > > > > I am currently working on an issue with trunk ports that has come up a few > > times > > in my direct experience, and I hope that we can create a long term > > solution. 
I > > am hoping that developers with experience in trunk ports can validate my > > approach here, especially regarding fixing current behavior without > > introducing > > an API regression. > > > > By way of introduction to the specifics of the issue, let me blockquote > > from the > > LP bug I raised for this [1]: > > > > ---- > > > > When you create a trunk in Neutron you create a parent port for the > > trunk and > > attach the trunk to the parent. Then subports can be created on the > > trunk. When > > instances are created on the trunk, first a port is created and then > > an instance > > is associated with a free port. It looks to me that's this is the > > oversight in > > the logic. > > > > From the perspective of the code, the parent port looks like any other > > port > > attached to the trunk bridge. It doesn't have an instance attached to > > it so it > > looks like it's not being used for anything (which is technically > > correct). So > > it becomes an eligible port for an instance to bind to. That is all > > fine and > > dandy until you go to delete the instance and you get the "Port > > [port-id] is > > currently a parent port for trunk [trunk-id]" exception just as > > happened here. > > Anecdotally, it's seems rare that an instance will actually bind to > > it, but that > > is what happened for the user in this case and I have had several > > pings over the > > past year about people in a similar state. > > > > I propose that when a port is made parent port for a trunk, that the > > trunk be > > established as the owner of the port. That way it will be ineligible > > for > > instances seeking to bind to the port. > > > > ---- > > > > Clearly the above behavior indicates buggy issue that should be rectified > > in > > master and stable branches. Nobody wants a VM that can't be fully deleted > > because the port can't ever be deleted. This is especially egregious when > > it > > causes heat stack deletion failures. 
> > > > I am mostly concerned that by adding the trunk as an owner of the parent > > port, > > then the trunk will need to be deleted before the parent port can be > > deleted, > > otherwise a PortInUse error will occur when the port is deleted (i.e. on > > tempest > > test teardown). That to me seems indicative of an inadvertent API > > change. Do > > you think it's all right to say that if you delete a port that is a parent > > port > > of a trunk, and that trunk has no other subports, that the trunk deletion > > is > > implicit? Is that the lowest impact to the API that we can incur to > > resolve > > this issue? > > > > Your wisdom is appreciated, > > > > Nate > > > > [1] https://bugs.launchpad.net/neutron/+bug/1878031 > > > > > > > > -- > LUIS TOMÁS BOLÍVAR > Senior Software Engineer > Red Hat > Madrid, Spain > ltomasbo at redhat.com -- Slawek Kaplonski Senior software engineer Red Hat From allison at openstack.org Wed May 13 13:29:45 2020 From: allison at openstack.org (Allison Price) Date: Wed, 13 May 2020 08:29:45 -0500 Subject: [all] OpenStack Ussuri Community Meeting in 30 Minutes Message-ID: <0B92AE7C-8014-4845-91EA-0CB9E6A684C1@openstack.org> Hi everyone, I just wanted to remind you that there is a community meeting in 30 minutes to cover OpenStack Ussuri highlights. The meeting will be moderated by Mohammed Naser from the OpenStack Technical Committee and include project updates from: - Slawek Kaplonski, Neutron - Michael Johnson, Octavia - Goutham Pacha Ravi, Manila - Mark Goddard, Kolla - Balazs Gibizer, Nova - Brian Rosmaita, Cinder You can find dial-in information in this etherpad [1]. The recording and slides will be shared on the mailing list after the meeting. There will also be another meeting tomorrow, Thursday May 14 at 0200 UTC, moderated by Rico Lin from the OpenStack TC, and you can see who will be presenting here [1]. 
[1] https://etherpad.opendev.org/p/CommunityMeeting_Ussuri Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed May 13 14:47:24 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 13 May 2020 09:47:24 -0500 Subject: OpenStack Ussuri is officially released! Message-ID: <20200513144724.GA1355721@sm-workstation> The official OpenStack Ussuri release announcement has been sent out: http://lists.openstack.org/pipermail/openstack-announce/2020-May/002035.html Thanks to all who were part of making the Train series a success! This marks the official opening of the releases repo for Victoria, and freezes are now lifted. Ussuri is now a full stable branch. Thanks! Sean From sean.mcginnis at gmx.com Wed May 13 14:49:09 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 13 May 2020 09:49:09 -0500 Subject: OpenStack Ussuri is officially released! In-Reply-To: <20200513144724.GA1355721@sm-workstation> References: <20200513144724.GA1355721@sm-workstation> Message-ID: <20200513144909.GB1355721@sm-workstation> On Wed, May 13, 2020 at 09:47:24AM -0500, Sean McGinnis wrote: > The official OpenStack Ussuri release announcement has been sent out: > > http://lists.openstack.org/pipermail/openstack-announce/2020-May/002035.html > > Thanks to all who were part of making the Train series a success! And thanks to all who were part of making Ussuri a success too! :] > > This marks the official opening of the releases repo for Victoria, and freezes > are now lifted. Ussuri is now a full stable branch. > > Thanks! > Sean > From amy at demarco.com Wed May 13 15:07:51 2020 From: amy at demarco.com (Amy Marrich) Date: Wed, 13 May 2020 10:07:51 -0500 Subject: OpenStack Ussuri is officially released! In-Reply-To: <20200513144724.GA1355721@sm-workstation> References: <20200513144724.GA1355721@sm-workstation> Message-ID: Great job everyone! 
Amy (spotz) > On May 13, 2020, at 9:50 AM, Sean McGinnis wrote: > > The official OpenStack Ussuri release announcement has been sent out: > > http://lists.openstack.org/pipermail/openstack-announce/2020-May/002035.html > > Thanks to all who were part of making the Train series a success! > > This marks the official opening of the releases repo for Victoria, and freezes > are now lifted. Ussuri is now a full stable branch. > > Thanks! > Sean > From juliaashleykreger at gmail.com Wed May 13 15:28:27 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 13 May 2020 08:28:27 -0700 Subject: [ironic] trusted delgation cores? Message-ID: Greetings awesome people and AIs! Earlier today, I noticed Iury (iurygregory) only had +1 rights on the ironic-prometheus-exporter. I noted this in IRC and went to go see about adding him to the group, and realized we didn't have a separate group already defined for ironic-prometheus-exporter, which left a question, do we create a new group, or just grant ironic-core membership. And the question of starting to engage in trusted delegation of core rights came up in IRC[0]. I think it makes a lot of sense, but wanted to see what everyone thought? Specifically in Iury's case: I feel he has proven himself, in ironic and non-ironic cases, and I think it makes sense to grant him core rights under the premise that he is unlikely to approve non-trivial changes to merge that he is not confident in. Thoughts, concerns, congratulations? For both questions? -Julia [0]: http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2020-05-13.log.html#t2020-05-13T13:35:45 From strigazi at gmail.com Wed May 13 15:42:03 2020 From: strigazi at gmail.com (Spyros Trigazis) Date: Wed, 13 May 2020 18:42:03 +0300 Subject: OpenStack Ussuri is officially released! In-Reply-To: References: <20200513144724.GA1355721@sm-workstation> Message-ID: Hello, The release notes [0] for magnum and other projects are missing from [1][2]. 
Do we need to do an extra step? Cheers, Spyros [0] https://docs.openstack.org/releasenotes/magnum/ussuri.html [1] https://releases.openstack.org/ussuri/ [2] https://releases.openstack.org/teams/magnum.html On Wed, May 13, 2020 at 6:08 PM Amy Marrich wrote: > Great job everyone! > > Amy (spotz) > > > On May 13, 2020, at 9:50 AM, Sean McGinnis > wrote: > > > > The official OpenStack Ussuri release announcement has been sent out: > > > > > http://lists.openstack.org/pipermail/openstack-announce/2020-May/002035.html > > > > Thanks to all who were part of making the Train series a success! > > > > This marks the official opening of the releases repo for Victoria, and > freezes > > are now lifted. Ussuri is now a full stable branch. > > > > Thanks! > > Sean > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed May 13 15:44:50 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 13 May 2020 10:44:50 -0500 Subject: OpenStack Ussuri is officially released! In-Reply-To: References: <20200513144724.GA1355721@sm-workstation> Message-ID: <46ef692a-9c8d-758d-4a77-f0ecbf922aa8@gmx.com> On 5/13/20 10:42 AM, Spyros Trigazis wrote: > Hello, > > The release notes [0] for magnum and other projects are missing > from [1][2]. Do we need to do an extra step? > > Cheers, > Spyros > > [0] https://docs.openstack.org/releasenotes/magnum/ussuri.html > [1] https://releases.openstack.org/ussuri/ > [2] https://releases.openstack.org/teams/magnum.html Those links need to be added to the deliverable files to be reflected on the release page. We run a script periodically to pick up ones that were not manually added and/or have been created since the last time we checked. I've run our script and submitted https://review.opendev.org/#/c/727802/ So we should have those on the site shortly after that merges.
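For readers unfamiliar with the mechanism Sean describes: each deliverable in the openstack/releases repo is a YAML file, and the release-notes link is a field inside it. A minimal sketch follows; the version, hash, and most field values here are illustrative placeholders, not the real magnum ussuri data:

```yaml
# deliverables/ussuri/magnum.yaml (illustrative sketch, not the real file)
launchpad: magnum
release-model: cycle-with-rc
team: magnum
type: service
repository-settings:
  openstack/magnum: {}
releases:
  - version: 10.0.0
    projects:
      - repo: openstack/magnum
        hash: 0000000000000000000000000000000000000000  # placeholder commit
release-notes: https://docs.openstack.org/releasenotes/magnum/ussuri.html
```

Until a release-notes entry like the last line merges, the releases.openstack.org pages have nothing to link to for that project.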
Sean From Arkady.Kanevsky at dell.com Wed May 13 16:12:37 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 13 May 2020 16:12:37 +0000 Subject: OpenStack Ussuri is officially released! In-Reply-To: References: <20200513144724.GA1355721@sm-workstation> Message-ID: <006c8f5a9c8944c7815fff2a27269c38@AUSX13MPS308.AMER.DELL.COM> +2 -----Original Message----- From: Amy Marrich Sent: Wednesday, May 13, 2020 10:08 AM To: Sean McGinnis Cc: openstack-discuss at lists.openstack.org Subject: Re: OpenStack Ussuri is officially released! [EXTERNAL EMAIL] Great job everyone! Amy (spotz) > On May 13, 2020, at 9:50 AM, Sean McGinnis wrote: > > The official OpenStack Ussuri release announcement has been sent out: > > http://lists.openstack.org/pipermail/openstack-announce/2020-May/00203 > 5.html > > Thanks to all who were part of making the Train series a success! > > This marks the official opening of the releases repo for Victoria, and > freezes are now lifted. Ussuri is now a full stable branch. > > Thanks! > Sean > From mark at stackhpc.com Wed May 13 16:22:12 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 13 May 2020 17:22:12 +0100 Subject: [kolla] Kolla klub meeting 21st May Message-ID: Hi, In today's Kolla IRC meeting we agreed to host a project onboarding session for the next Kolla Klub meeting on the 21st May. We'll cover a variety of topics on the many different ways you can contribute to the project. Gaël Therond will also fill us in on some of the questions put to him since the last meeting about his case studies. https://docs.google.com/document/d/1EwQs2GXF-EvJZamEx9vQAOSDB5tCjsDCJyHQN5_4_Sw Thanks, Mark From zigo at debian.org Wed May 13 18:00:26 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 13 May 2020 20:00:26 +0200 Subject: OpenStack Ussuri is officially released! 
In-Reply-To: <20200513144724.GA1355721@sm-workstation> References: <20200513144724.GA1355721@sm-workstation> Message-ID: On 5/13/20 4:47 PM, Sean McGinnis wrote: > The official OpenStack Ussuri release announcement has been sent out: > > http://lists.openstack.org/pipermail/openstack-announce/2020-May/002035.html > > Thanks to all who were part of making the Train series a success! > > This marks the official opening of the releases repo for Victoria, and freezes > are now lifted. Ussuri is now a full stable branch. > > Thanks! > Sean Thanks everyone! FYI, I updated the packages in Debian Sid, and in the buster-ussuri.debian.net repositories, so Debian is already up to date with the latest release. :) Cheers, Thomas Goirand (zigo) From ruslanas at lpic.lt Wed May 13 18:14:54 2020 From: ruslanas at lpic.lt (Ruslanas Gžibovskis) Date: Wed, 13 May 2020 20:14:54 +0200 Subject: [TripleO][rdo][train][centos7] fails at ovn_south_db_server: Start containers for step 4 using paunch Message-ID: Hi all, I am running the deployment described here [1]. I am getting an error on: Start containers for step 4 using paunch. Error in link below [2]. Running podman containers [3] links: 1 - https://github.com/qw3r3wq/homelab/tree/master/overcloud 2 - https://pastebin.com/HTUbz7Ry 3 - https://pastebin.com/1ApfiEyE -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Wed May 13 18:35:17 2020 From: allison at openstack.org (Allison Price) Date: Wed, 13 May 2020 13:35:17 -0500 Subject: [all] OpenStack Ussuri Community Meeting Recording and time for Meeting 2 Message-ID: <33DD044B-ADDE-40A2-B56E-D2E17D6C6F44@openstack.org> Thank you to everyone who joined the community meeting earlier today to learn more about OpenStack Ussuri as well as the contributors who presented the release highlights!
If you were unable to attend, you can check out the recording [1], which includes a link to the slides, or check out the attached deck. There is also going to be another community meeting in 7.5 hours at 0200 UTC on Thursday, May 14. You can find the list of presenters as well as the dial-in information here [2]. [1] https://www.youtube.com/watch?v=T5-Dr1lxPB0&feature=youtu.be [2] https://etherpad.opendev.org/p/CommunityMeeting_Ussuri Thanks! Allison Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenStack Ussuri Community Meeting 1.pdf Type: application/pdf Size: 2248602 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu May 14 01:36:57 2020 From: melwittt at gmail.com (melanie witt) Date: Wed, 13 May 2020 18:36:57 -0700 Subject: [nova][gate] openstack-tox-docs job failure Message-ID: <867db980-53fe-8a27-6f83-34d8d41fd052@gmail.com> Howdy all, There's an upper-constraints bump to openstackdocstheme 2.1.0, which has recently merged [1] and is going to break the openstack-tox-docs job for nova. I've proposed a patch to fix it here: https://review.opendev.org/727898 if people could please take a look. Cheers, -melanie [1] https://review.opendev.org/727850 From allison at openstack.org Thu May 14 01:42:09 2020 From: allison at openstack.org (Allison Price) Date: Wed, 13 May 2020 20:42:09 -0500 Subject: [all] OpenStack Ussuri Community Meeting Recording and time for Meeting 2 In-Reply-To: <33DD044B-ADDE-40A2-B56E-D2E17D6C6F44@openstack.org> References: <33DD044B-ADDE-40A2-B56E-D2E17D6C6F44@openstack.org> Message-ID: If you missed the earlier Ussuri community meeting, the next meeting starts in 20 minutes! Dial-in information: https://etherpad.opendev.org/p/CommunityMeeting_Ussuri See you there!
Allison Allison Price OpenStack Foundation allison at openstack.org > On May 13, 2020, at 1:35 PM, Allison Price wrote: > > Thank you to everyone who joined the community meeting earlier today to learn more about OpenStack Ussuri as well as the contributors who presented the release highlights! > > If you were unable to attend, you can check out the recording [1] which includes a link to the slides or check out the attached deck. There is also going to be another community meeting in 7.5 hours at 0200 UTC on Thursday, May 14. You can find the list of presenters as well as the dial-in information here [2]. > > > [1] https://www.youtube.com/watch?v=T5-Dr1lxPB0&feature=youtu.be > [2] https://etherpad.opendev.org/p/CommunityMeeting_Ussuri > > > > Thanks! > Allison > > > Allison Price > OpenStack Foundation > allison at openstack.org > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu May 14 02:45:36 2020 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 13 May 2020 22:45:36 -0400 Subject: [all] OpenStack Ussuri Community Meeting in 30 Minutes In-Reply-To: <0B92AE7C-8014-4845-91EA-0CB9E6A684C1@openstack.org> References: <0B92AE7C-8014-4845-91EA-0CB9E6A684C1@openstack.org> Message-ID: I totally missed this meeting. Is there a way I can get access to the recording? Thanks On Wed, May 13, 2020 at 9:37 AM Allison Price wrote: > > Hi everyone, > > I just wanted to remind you that there is a community meeting in 30 minutes to cover OpenStack Ussuri highlights. The meeting will be moderated by Mohammed Naser from the OpenStack Technical Committee and include project updates from: > > - Slawek Kaplonski, Neutron > - Michael Johnson, Octavia > - Goutham Pacha Ravi, Manila > - Mark Goddard, Kolla > - Balazs Gibizer, Nova > - Brian Rosmaita, Cinder > > You can find dial-in information in this etherpad [1]. The recording and slides will be shared on the mailing list after the meeting.
There will also be another meeting tomorrow, Thursday May 14 at 0200 UTC, moderated by Rico Lin from the OpenStack TC, and you can see who will be presenting here [1]. > > [1] https://etherpad.opendev.org/p/CommunityMeeting_Ussuri > > > Allison Price > OpenStack Foundation > allison at openstack.org > > > > From jimmy at openstack.org Thu May 14 03:10:46 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 13 May 2020 22:10:46 -0500 Subject: [all] OpenStack Ussuri Community Meeting in 30 Minutes In-Reply-To: References: <0B92AE7C-8014-4845-91EA-0CB9E6A684C1@openstack.org> Message-ID: <6be98e9d-ccfd-9b0e-e1a9-e7d35a14ba06@openstack.org> Satish, Here you go: https://www.youtube.com/watch?v=T5-Dr1lxPB0 Cheers, Jimmy Satish Patel wrote on 5/13/20 9:45 PM: > I totally missed this meeting, Is there a way i can get access of recording? > > Thanks > > On Wed, May 13, 2020 at 9:37 AM Allison Price wrote: >> Hi everyone, >> >> I just wanted to remind you that there is a community meeting in 30 minutes to cover OpenStack Ussuri highlights. The meeting will be moderated by Mohammed Naser from the OpenStack Technical Committee and include project updates from: >> >> - Slawek Kaplonski, Neutron >> - Michael Johnson, Octavia >> - Goutham Pacha Ravi, Manila >> - Mark Goddard, Kolla >> - Balazs Gibizer, Nova >> - Brian Rosmaita, Cinder >> >> You can find dial-in information in this etherpad [1]. The recording and slides will be shared on the mailing list after the meeting. There will also be another meeting tomorrow, Thursday May 14 at 0200 UTC, moderated by Rico Lin from the OpenStack TC, and you can see who will be presenting here [1]. 
>> [1] https://etherpad.opendev.org/p/CommunityMeeting_Ussuri >> >> Allison Price >> OpenStack Foundation >> allison at openstack.org >> >> >> >> From ykarel at redhat.com Thu May 14 06:10:32 2020 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 14 May 2020 11:40:32 +0530 Subject: [RDO][OpenStack] Removal of Pike and Ocata Trunk repos Message-ID: Hi, The topic was discussed in yesterday's RDO meeting[1] and to reach a wider audience it's being raised here. Ocata and Pike have been in the Extended Maintenance phase[2] for more than a year now, and the promotion jobs which used to test these repos were dropped long ago[3][4]. Now we are planning to drop these trunk repos[5][6] in the first week of June, 2020. We have already stopped building new commits for pike/ocata[7]. So if anyone is still using these repos, consider upgrading to queens or any later release. If something is blocking you from moving off the ocata/pike repos, raise your voice here so we can consider it before dropping them.
[1] https://lists.rdoproject.org/pipermail/dev/2020-May/009379.html [2] https://releases.openstack.org/ [3] https://review.rdoproject.org/r/#/q/topic:remove_pike [4] https://review.rdoproject.org/r/#/c/16485/ [5] http://trunk.rdoproject.org/centos7-ocata/ [6] http://trunk.rdoproject.org/centos7-pike/ [7] https://softwarefactory-project.io/r/#/c/18347/ On Behalf of RDO Team Thanks and Regards Yatin Karel From ltomasbo at redhat.com Thu May 14 06:45:56 2020 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Thu, 14 May 2020 08:45:56 +0200 Subject: [neutron] parent ports for trunks being claimed by instances In-Reply-To: <20200513111809.4f4lyrd7zg3mecbr@skaplons-mac> References: <20200512140525.khc32evagyb7366r@firewall> <20200513111809.4f4lyrd7zg3mecbr@skaplons-mac> Message-ID: On Wed, May 13, 2020 at 1:18 PM Slawek Kaplonski wrote: > Hi, > > On Wed, May 13, 2020 at 09:42:28AM +0200, Luis Tomas Bolivar wrote: > > Hi Nate, > > > > I think I'm getting configure with the use of "instances" here. With > > instances, you refer to VMs? Note there is a need to first create the > > parent, then the trunk and then the instance using that parent. > > > > Also, deleting the VM is not a problem, it will just move the port to > down > > (and the trunk). The problem is deleting the parent port (if it is part > of > > the trunk). I think the problem here is that the VM should not try to > > delete the port as port was existing before VM creation, right? > > It is like that if You create port in neutron, and then pass this port to > the > Nova when booting instance. > But if You first create instance by giving network_id to Nova, it will > create > port on this network for You and that port will be deleted by Nova when > instance > will be deleted. > You mean that it is possible to later make that VM port (created by nova) a parent port of a trunk? 
That is not supported in ml2/ovs (or it was not before), I never tried with OVN > > > > > Note too that if the parent port is not "eligible" after being part of a > > trunk, how can you boot a VM with a parent port given that there is a > need > > for having the trunk before booting the VM? > > > > Cheers, > > Luis > > > > On Tue, May 12, 2020 at 4:13 PM Nate Johnston > > wrote: > > > > > Neutron developers, > > > > > > I am currently working on an issue with trunk ports that has come up a > few > > > times > > > in my direct experience, and I hope that we can create a long term > > > solution. I > > > am hoping that developers with experience in trunk ports can validate > my > > > approach here, especially regarding fixing current behavior without > > > introducing > > > an API regression. > > > > > > By way of introduction to the specifics of the issue, let me blockquote > > > from the > > > LP bug I raised for this [1]: > > > > > > ---- > > > > > > When you create a trunk in Neutron you create a parent port for the > > > trunk and > > > attach the trunk to the parent. Then subports can be created on the > > > trunk. When > > > instances are created on the trunk, first a port is created and > then > > > an instance > > > is associated with a free port. It looks to me that's this is the > > > oversight in > > > the logic. > > > > > > From the perspective of the code, the parent port looks like any > other > > > port > > > attached to the trunk bridge. It doesn't have an instance attached > to > > > it so it > > > looks like it's not being used for anything (which is technically > > > correct). So > > > it becomes an eligible port for an instance to bind to. That is all > > > fine and > > > dandy until you go to delete the instance and you get the "Port > > > [port-id] is > > > currently a parent port for trunk [trunk-id]" exception just as > > > happened here. 
> > > Anecdotally, it's seems rare that an instance will actually bind to > > > it, but that > > > is what happened for the user in this case and I have had several > > > pings over the > > > past year about people in a similar state. > > > > > > I propose that when a port is made parent port for a trunk, that > the > > > trunk be > > > established as the owner of the port. That way it will be > ineligible > > > for > > > instances seeking to bind to the port. > > > > > > ---- > > > > > > Clearly the above behavior indicates buggy issue that should be > rectified > > > in > > > master and stable branches. Nobody wants a VM that can't be fully > deleted > > > because the port can't ever be deleted. This is especially egregious > when > > > it > > > causes heat stack deletion failures. > > > > > > I am mostly concerned that by adding the trunk as an owner of the > parent > > > port, > > > then the trunk will need to be deleted before the parent port can be > > > deleted, > > > otherwise a PortInUse error will occur when the port is deleted (i.e. > on > > > tempest > > > test teardown). That to me seems indicative of an inadvertent API > > > change. Do > > > you think it's all right to say that if you delete a port that is a > parent > > > port > > > of a trunk, and that trunk has no other subports, that the trunk > deletion > > > is > > > implicit? Is that the lowest impact to the API that we can incur to > > > resolve > > > this issue? > > > > > > Your wisdom is appreciated, > > > > > > Nate > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1878031 > > > > > > > > > > > > > -- > > LUIS TOMÁS BOLÍVAR > > Senior Software Engineer > > Red Hat > > Madrid, Spain > > ltomasbo at redhat.com > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > > -- LUIS TOMÁS BOLÍVAR Senior Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From smooney at redhat.com Thu May 14 08:41:33 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 14 May 2020 09:41:33 +0100 Subject: [neutron] parent ports for trunks being claimed by instances In-Reply-To: References: <20200512140525.khc32evagyb7366r@firewall> <20200513111809.4f4lyrd7zg3mecbr@skaplons-mac> Message-ID: <5d2abd0fcb7a387bf51cd3da00a783ab7c259e3e.camel@redhat.com> On Thu, 2020-05-14 at 08:45 +0200, Luis Tomas Bolivar wrote: > On Wed, May 13, 2020 at 1:18 PM Slawek Kaplonski > wrote: > > > Hi, > > > > On Wed, May 13, 2020 at 09:42:28AM +0200, Luis Tomas Bolivar wrote: > > > Hi Nate, > > > > > > I think I'm getting configure with the use of "instances" here. With > > > instances, you refer to VMs? Note there is a need to first create the > > > parent, then the trunk and then the instance using that parent. > > > > > > Also, deleting the VM is not a problem, it will just move the port to > > down > > > (and the trunk). The problem is deleting the parent port (if it is part > > > > of > > > the trunk). I think the problem here is that the VM should not try to > > > delete the port as port was existing before VM creation, right? > > > > It is like that if You create port in neutron, and then pass this port to > > the > > Nova when booting instance. > > But if You first create instance by giving network_id to Nova, it will > > create > > port on this network for You and that port will be deleted by Nova when > > instance > > will be deleted. > > > > You mean that it is possible to later make that VM port (created by nova) a > parent port of a trunk? That is not supported in ml2/ovs (or it was not > before), I never tried with OVN the behavior should not depend on the backend used, i.e. from an interoperability point of view there should be no observable difference in how the trunk ports API works whether you are using ml2/ovs or ml2/ovn.
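For context, the ordering Luis describes earlier in the thread (create the parent port first, then the trunk, then boot the instance on the pre-existing parent) can be sketched with the standard openstack CLI. The network, flavor, and resource names below are illustrative, not taken from the thread:

```console
$ # create the parent port first, then build the trunk on top of it
$ openstack port create --network private parent0
$ openstack network trunk create --parent-port parent0 trunk0
$ # subports carry the segmentation details
$ openstack port create --network tenant-net-a subport0
$ openstack network trunk set trunk0 \
      --subport port=subport0,segmentation-type=vlan,segmentation-id=100
$ # boot the instance on the pre-existing parent port; since nova did
$ # not create the port, nova will not try to delete it with the server
$ openstack server create --image cirros --flavor m1.small \
      --port parent0 trunked-vm
```

Deleting trunked-vm in this workflow leaves parent0 and trunk0 in place, which is exactly the ownership question the thread is discussing.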
my recollection was we do not allow a standard port to be used as a parent for a trunk port if it's attached to a vm currently. so i think you would have to first detach the port from the vm, then make it a trunk port parent, and then reattach it to the vm. i think that workflow is "supported" but only in the sense that it is valid to detach a port and make it a trunk port. if that original port was created by nova in the boot request you should still expect it to get deleted and no clean up of the sub ports to be done. hence my "supported" in scare quotes comment above, since realistically it's a user error to convert a nova created port into a trunk port parent. at least from a nova point of view we don't support that. it might technically work but it's not intended, so any odd behavior as a result is not a bug. > > > > > > > > > > Note too that if the parent port is not "eligible" after being part of a > > > trunk, how can you boot a VM with a parent port given that there is a > > > > need > > > for having the trunk before booting the VM? > > > > > > Cheers, > > > Luis > > > > > > On Tue, May 12, 2020 at 4:13 PM Nate Johnston > > > wrote: > > > > > > > Neutron developers, > > > > > > > > I am currently working on an issue with trunk ports that has come up a > > > > few > > > > times > > > > in my direct experience, and I hope that we can create a long term > > > > solution. I > > > > am hoping that developers with experience in trunk ports can validate > > > > my > > > > approach here, especially regarding fixing current behavior without > > > > introducing > > > > an API regression. > > > > > > > > By way of introduction to the specifics of the issue, let me blockquote > > > > from the > > > > LP bug I raised for this [1]: > > > > > > > > ---- > > > > > > > > When you create a trunk in Neutron you create a parent port for the > > > > trunk and > > > > attach the trunk to the parent. Then subports can be created on the > > > > trunk.
When > > > > instances are created on the trunk, first a port is created and > > > > then > > > > an instance > > > > is associated with a free port. It looks to me that's this is the > > > > oversight in > > > > the logic. > > > > > > > > From the perspective of the code, the parent port looks like any > > > > other > > > > port > > > > attached to the trunk bridge. It doesn't have an instance attached > > > > to > > > > it so it > > > > looks like it's not being used for anything (which is technically > > > > correct). So > > > > it becomes an eligible port for an instance to bind to. That is all > > > > fine and > > > > dandy until you go to delete the instance and you get the "Port > > > > [port-id] is > > > > currently a parent port for trunk [trunk-id]" exception just as > > > > happened here. > > > > Anecdotally, it's seems rare that an instance will actually bind to > > > > it, but that > > > > is what happened for the user in this case and I have had several > > > > pings over the > > > > past year about people in a similar state. > > > > > > > > I propose that when a port is made parent port for a trunk, that > > > > the > > > > trunk be > > > > established as the owner of the port. That way it will be > > > > ineligible > > > > for > > > > instances seeking to bind to the port. > > > > > > > > ---- > > > > > > > > Clearly the above behavior indicates buggy issue that should be > > > > rectified > > > > in > > > > master and stable branches. Nobody wants a VM that can't be fully > > > > deleted > > > > because the port can't ever be deleted. This is especially egregious > > > > when > > > > it > > > > causes heat stack deletion failures. > > > > > > > > I am mostly concerned that by adding the trunk as an owner of the > > > > parent > > > > port, > > > > then the trunk will need to be deleted before the parent port can be > > > > deleted, > > > > otherwise a PortInUse error will occur when the port is deleted (i.e. 
> > > > on > > > > tempest > > > > test teardown). That to me seems indicative of an inadvertent API > > > > change. Do > > > > you think it's all right to say that if you delete a port that is a > > > > parent > > > > port > > > > of a trunk, and that trunk has no other subports, that the trunk > > > > deletion > > > > is > > > > implicit? Is that the lowest impact to the API that we can incur to > > > > resolve > > > > this issue? > > > > > > > > Your wisdom is appreciated, > > > > > > > > Nate > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1878031 > > > > > > > > > > > > > > > > > > -- > > > LUIS TOMÁS BOLÍVAR > > > Senior Software Engineer > > > Red Hat > > > Madrid, Spain > > > ltomasbo at redhat.com > > > > -- > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > > > From dtantsur at redhat.com Thu May 14 09:21:33 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 14 May 2020 11:21:33 +0200 Subject: [ironic] [release] Proposal for a new release model for ironic projects Message-ID: Hi folks, I would like to formally present this proposal: https://review.opendev.org/#/c/725547/ It is largely driven by a desire to produce frequent supported and tested intermediate releases for standalone consumers, as well as address a few minor issues we've found over time. I would like to try applying it to Victoria already. Please read and comment. Thanks, Dmitry -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu May 14 13:00:45 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 14 May 2020 15:00:45 +0200 Subject: [neutron] 15-05-2020 drivers meeting agenda Message-ID: <20200514130045.zfunjh3qeymiwsix@skaplons-mac> Hi, For tomorrow's drivers meeting we have only one RFE to discuss: * https://bugs.launchpad.net/neutron/+bug/1877301 - L3 Router support ndp proxy It was initially triaged by Brian (Thank You) and already discussed at the L3 subteam meeting.
See You all tomorrow and have a nice day :) -- Slawek Kaplonski Senior software engineer Red Hat From stephenfin at redhat.com Thu May 14 14:01:38 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 14 May 2020 15:01:38 +0100 Subject: [oslo][nova][vmware] Replacement for suds library In-Reply-To: <9c2e1119-f224-d450-07ae-6d496157ae69@sap.com> References: <3616729F-8084-45BD-AA13-3E5487A4937D@vmware.com> <9c2e1119-f224-d450-07ae-6d496157ae69@sap.com> Message-ID: On Tue, 2020-05-12 at 13:08 +0200, Johannes Kulik wrote: > > > On 2020/3/20 at 10:10 PM, "Stephen Finucane" wrote: > > > The suds-jurko library used by oslo.vmware is emitting the following > > > warnings in nova tests. > > > > > > /nova/.tox/py36/lib/python3.6/site-packages/suds/resolver.py:89: DeprecationWarning: invalid escape sequence \% > > > self.splitp = re.compile('({.+})*[^\%s]+' % ps[0]) > > > /nova/.tox/py36/lib/python3.6/site-packages/suds/wsdl.py:619: DeprecationWarning: invalid escape sequence \s > > > body.parts = re.split('[\s,]', parts) > > > > > > These warnings are going to be errors in Python 3.10 [1]. We have over > > > 18 months before we need to worry about this [2], but I'd like to see > > > some movement on this well before then. It seems the suds-jurko fork is > > > dead [3] and movement on yet another fork, suds-community, is slow [4]. > > > How difficult would it be to switch over to something that does seem > > > maintained like zeep [5] and, assuming it's doable, is anyone from > > > VMWare/SAP able to do this work? > > > > > > Stephen > > > > Stephen, > > > > Thank you very much for pointing this out. Lichao (xuel at vmware.com) and I from VMware will get involved in this issue. > > > > Do you think zeep is a good alternative to suds? Or did the replacement already take place in other projects? > > > > We would like to assess zeep first and then make an action plan. > > > > Yingji.
> Hi Yingji, Stephen, > > we've been working downstream on switching oslo.vmware away from suds. > We're still on queens, but oslo.vmware didn't change too much since then ... > > We've opted for zeep, because it's > 1) currently still maintained > 2) promises python 3 support > 3) uses lxml at its base, thus giving a performance boost, which we need > > In our tests, a script doing some simple queries against vSphere 6.5 > finished in ~5s (zeep) instead of ~10s (suds). Looking at nova-compute > nodes, the CPU-usage decreased drastically. Which is what we were aiming > for. We haven't run it in production, yet, but are planning to do so > gradually. > > We're willing to put some more work into getting the changes upstream, > if someone can assist in the process and if you're fine with that. > > To get a glimpse at the changes necessary for oslo.vmware, have a look > at [1]. These are for queens, though. > We've also put some work in to make the transition easier, by moving > suds-specific code from nova into oslo.vmware and providing some > helper-functions for the differences in ManagedObjectReference > attribute access [2], [3], [4], [5]. > > Obviously, there are changes needed in nova and cinder, if we need > to use helper-functions. For nova, we've got a couple of patches, > that are not public, yet. > > Sorry for coming into this with a "solution" already, but we have > a direct need for switching downstream, as explained above. I'd missed Lichao and Yingji's reply to my original post. If you have something ready to go, I suggest you work with them to get patches pushed up and ready for review. I'm happy to review them along with the other oslo maintainers. Feel free to ping us on #openstack-oslo (nick: stephenfin) if you run into any issues.
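As a side note on the warnings quoted at the top of this thread: they come from backslash escapes that are valid in a regular expression but invalid in a plain Python string literal. A minimal, self-contained illustration of the same class of problem (this is not the suds code itself):

```python
import warnings

# '\s' is a valid regex escape but an invalid *string* escape, so a
# non-raw literal like '[\s,]' makes the compiler warn (suds has many
# of these; they are slated to become hard errors in a future Python).
src_bad = "import re\nparts = re.split('[\\s,]', 'a b,c')\n"
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile(src_bad, "<sample>", "exec")
assert any(issubclass(w.category, (DeprecationWarning, SyntaxWarning))
           for w in caught)

# The fix is simply a raw string, which compiles without complaint.
src_ok = "import re\nparts = re.split(r'[\\s,]', 'a b,c')\n"
with warnings.catch_warnings(record=True) as clean:
    warnings.simplefilter("always")
    compile(src_ok, "<fixed>", "exec")
assert clean == []
```

(Depending on the Python version, the warning category is DeprecationWarning or SyntaxWarning, which is why the assertion accepts both.)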
Cheers, Stephen > Have a nice day otherwise, > Johannes > > [1] https://github.com/sapcc/oslo.vmware/pull/4/files > [2] https://github.com/sapcc/oslo.vmware/commit/84d3e3177affa8dbffdbc0ecf0cbc2aea6b3dbde > [3] https://github.com/sapcc/oslo.vmware/commit/385a0352beab3ddb8273138abd31f0788638bb76 > [4] https://github.com/sapcc/oslo.vmware/commit/6a531ba84e8db43db1cb9ff433e6d18cdd98a4c6 > [5] https://github.com/sapcc/oslo.vmware/commit/993fe5f98a7b8172710af4f27b6c1a3eabb1c7d4 From stephenfin at redhat.com Thu May 14 14:04:22 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 14 May 2020 15:04:22 +0100 Subject: [oslo][nova][vmware] Replacement for suds library In-Reply-To: <3616729F-8084-45BD-AA13-3E5487A4937D@vmware.com> References: <3616729F-8084-45BD-AA13-3E5487A4937D@vmware.com> Message-ID: <4ec9c62aaf00f251624645370f32e6c47ad30b2a.camel@redhat.com> On Tue, 2020-03-24 at 03:54 +0000, Yingji Sun wrote: > > On 2020/3/20 at 10:10 PM, "Stephen Finucane" wrote: > > The suds-jurko library used by oslo.vmware is emitting the following > > warnings in nova tests. > > > > /nova/.tox/py36/lib/python3.6/site-packages/suds/resolver.py:89: DeprecationWarning: invalid escape sequence \% > > self.splitp = re.compile('({.+})*[^\%s]+' % ps[0]) > > /nova/.tox/py36/lib/python3.6/site-packages/suds/wsdl.py:619: DeprecationWarning: invalid escape sequence \s > > body.parts = re.split('[\s,]', parts) > > > > These warnings are going to be errors in Python 3.10 [1]. We have over > > 18 months before we need to worry about this [2], but I'd like to see > > some movement on this well before then. It seems the suds-jurko fork is > > dead [3] and movement on yet another fork, suds-community, is slow [4]. > > How difficult would it be to switch over to something that does seem > > maintained like zeep [5] and, assuming it's doable, is anyone from > > VMWare/SAP able to do this work? > > > > Stephen > > Stephen, > > Thank you very much for pointing this out.
Lichao (xuel at vmware.com) and I from VMware will involve into this issue. > > Do you think zeep is a good alternative of suds ? Or did the replacement already take place on other project ? > > We would like to make assessment to the zeep first and then make an action plan. > > Yingji. Apologies for missing this response, Yingji. From Johannes' reply, it seems zeep is indeed the way to go. I can't actually say if it's the best option, but from a cursory look it did seem to be best maintained and best documented of the options and is therefore possibly worth the effort of migrating for. From what I can tell, oslo.vmware appears to be the only project using suds so it would be the only one that needs to be migrated. Stephen > > > [1] https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.python.org%2F3.9%2Fwhatsnew%2F3.9.html%23you-should-check-for-deprecationwarning-in-your-code&data=02%7C01%7Cyingjisun%40vmware.com%7C95008f1ccf0a43198e5a08d7ccd87134%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C1%7C637203102337393211&sdata=79f%2B3FFTgC275gINmA3aCvWdTe%2BdN8uZ39%2BPM0l85FU%3D&reserved=0 > > [2] https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.python.org%2Fdev%2Fpeps%2Fpep-0596%2F&data=02%7C01%7Cyingjisun%40vmware.com%7C95008f1ccf0a43198e5a08d7ccd87134%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C1%7C637203102337393211&sdata=d0RyU21oygeBi3xxhw20k%2FZTX0xHXQ0Hp7Z2WZb6YEE%3D&reserved=0 > > [3] https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fbitbucket.org%2Fjurko%2Fsuds%2Fsrc%2Fdefault%2F&data=02%7C01%7Cyingjisun%40vmware.com%7C95008f1ccf0a43198e5a08d7ccd87134%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C1%7C637203102337393211&sdata=lqv3TRF76TL%2B8978gamjson%2FK8B4KnztukYoCNxqSAQ%3D&reserved=0 > > [4] 
https://github.com/suds-community/suds/pull/32 > > [5] https://python-zeep.readthedocs.io/en/master/ > > From bence.romsics at gmail.com Thu May 14 14:08:29 2020 From: bence.romsics at gmail.com (Bence Romsics) Date: Thu, 14 May 2020 16:08:29 +0200 Subject: [neutron] parent ports for trunks being claimed by instances In-Reply-To: <5d2abd0fcb7a387bf51cd3da00a783ab7c259e3e.camel@redhat.com> References: <20200512140525.khc32evagyb7366r@firewall> <20200513111809.4f4lyrd7zg3mecbr@skaplons-mac> <5d2abd0fcb7a387bf51cd3da00a783ab7c259e3e.camel@redhat.com> Message-ID: Hi, > > You mean that it is possible to later make that VM port (created by nova) a > > parent port of a trunk? That is not supported in ml2/ovs (or it was not > > before), I never tried with OVN > the behavior should not depend on the backend used. But it does and for good reason. Ideally we wanted all drivers to allow putting a trunk even on a bound port. But for ovs that would have required unfeasible complexity of re-wiring upgrades (and network downtime while that re-wiring happens). Even in the api-ref we have error code 409 with reason "A system configuration prevents the operation from succeeding": https://docs.openstack.org/api-ref/network/v2/?expanded=create-trunk-detail The gist of my point in the other sub-thread was that an already bound port can be made parent of a trunk with most drivers (except ml2/ovs).
But if you do this on a nova-created port (as opposed to a port created manually or by anybody but nova) then nova won't be able to delete that port when deleting the vm. Cheers, Bence From mihalis68 at gmail.com Thu May 14 17:42:54 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Thu, 14 May 2020 13:42:54 -0400 Subject: [ops][infra] Meetpad (was: ops meetups team meeting 2020-5-5) In-Reply-To: <20200505164906.igthboyfjiureuf2@yuggoth.org> References: <20200505164906.igthboyfjiureuf2@yuggoth.org> Message-ID: We on the ops meetups team had a trial meeting on meetpad.opendev.org this morning and for the most part it worked very well (detailed feedback below). Speaking personally, I am very happy to see an open source solution for video conferencing being adopted by the foundation to some extent. I had and continue to have reservations about Zoom, but at the end of the day no matter how well they respond to the security and privacy concerns it will still be a proprietary solution and no more true to the openstack tenets than slack is as a replacement for irc. feedback - etherpad integration is cool but several of us found the window seemed to disappear inexplicably - colored highlighting of fragments on the etherpad showed up overlapped for some but not all meeting members, obscuring some text - background blurring seemed very heavy on some participants' computers and this possibly led to some sessions locking up - it's not clear what the persistence of named meetings is, for example a named meeting from yesterday is still shown today, but doesn't have the password I applied yesterday My guess is some (all?) of this is just how Jitsi is right now. The team (openstack ops meetups) is now talking about possibly hosting a global ops meetup on this platform. How can we determine when and if the infra for this is ready for it, and how many participants is a reasonable cap? What about streaming, can we stream the whole thing continuously to youtube?
We are thinking that each topic would have a small number of presenters/participants in the video conference itself, but allow a larger group to see it on youtube and contribute via the etherpad. Is this a reasonable plan? Thanks for doing this! Chris On Tue, May 5, 2020 at 12:57 PM Jeremy Stanley wrote: > On 2020-05-05 12:06:42 -0400 (-0400), Chris Morgan wrote: > [...] > > we had a quick IRC meeting today, and also a trial run at an open > > source based video conference meeting using jitsi via an instance > > running on infra provided by Erik McCormick. This seems to be > > promising. We'll look into trialling some ops related events > > leveraging this. > [...] > > It's probably been flying under the radar a bit so far, but the > OpenDev community has put together an Etherpad-integrated Jitsi-Meet > service at https://meetpad.opendev.org/ which you're free to try out > as well. We'd love feedback and help tuning it. Also if you want to > reuse anything we've done to set it up, the Ansible playbook we use > is here: > > > https://opendev.org/opendev/system-config/src/branch/master/playbooks/service-meetpad.yaml > > It utilizes this role to install and configure jitsi-meet containers > with docker-compose (mostly official docker.io/jitsi images, though > we build our own jitsi-meet-web published under docker.io/opendevorg > which applies our Etherpad integration patch): > > > https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/jitsi-meet > > We're not making any stability or reusability guarantees on the > Ansible orchestration (or our custom image which will hopefully > disappear once https://github.com/jitsi/jitsi-meet/pull/5270 is > accepted upstream), but like everything we run in OpenDev we publish > it for the sake of transparency, in case anyone else wants to help > us or take some ideas for their own efforts. > -- > Jeremy Stanley > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emccormick at cirrusseven.com Thu May 14 17:59:18 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Thu, 14 May 2020 13:59:18 -0400 Subject: [ops][infra] Meetpad (was: ops meetups team meeting 2020-5-5) In-Reply-To: References: <20200505164906.igthboyfjiureuf2@yuggoth.org> Message-ID: On Thu, May 14, 2020, 1:44 PM Chris Morgan wrote: > We on the ops meetups team had a trial meeting on meetpad.opendev.org > this morning and for the most part it worked very well (detailed feedback > below). Speaking personally, I am very happy to see an open source solution > for video conferencing being adopted by the foundation to some extent. I > had and continue to have reservations about Zoom, but at the end of the day > no matter how well they respond to the security and privacy concerns it > will still be a proprietary solution and no more true to the openstack > tenets than slack is as a replacement for irc. > > feedback > > - etherpad integration is cool but several of us found the window seemed > to disappear inexplicably > - colored highlighting of fragments on the etherpad showed up overlapped > for some but not all meeting members, obscuring some text > - background blurring seemed very heavy for some participants computers > and this possibly lead to some sessions locking up > - it's not clear what named meetings persistence is, for example a named > meeting from yesterday is still shown today, but doesn't have the password > I applied yesterday > I can answer this one. The list on the landing page is merely a list of your history, and yours alone. It has no bearing on the persistence of a room. Rooms are normally ephemeral and vanish from the server side when the last person leaves it. > My guess is some (all?) of this is just how Jitsi is right now. > > The team (openstack ops meetups) is now talking about possibly hosting a > global ops meetup on this platform. 
How can we determine when and if the > infra for this is ready for it, and how many participants is a reasonable > cap? What about streaming, can we stream the whole thing continuously to > youtube? We are thinking that each topic would have a small number of > presenters/participants in the video conference itself, but allow a larger > group to see it on youtube and contribute via the etherpad. Is this a > reasonable plan? > > Thanks for doing this! > > Chris > > On Tue, May 5, 2020 at 12:57 PM Jeremy Stanley wrote: > >> On 2020-05-05 12:06:42 -0400 (-0400), Chris Morgan wrote: >> [...] >> > we had a quick IRC meeting today, and also a trial run at an open >> > source based video conference meeting using jitsi via an instance >> > running on infra provided by Erik McCormick. This seems to be >> > promising. We'll look into trialling some ops related events >> > leveraging this. >> [...] >> >> It's probably been flying under the radar a bit so far, but the >> OpenDev community has put together an Etherpad-integrated Jitsi-Meet >> service at https://meetpad.opendev.org/ which you're free to try out >> as well. We'd love feedback and help tuning it. 
Also if you want to >> reuse anything we've done to set it up, the Ansible playbook we use >> is here: >> >> >> https://opendev.org/opendev/system-config/src/branch/master/playbooks/service-meetpad.yaml >> >> It utilizes this role to install and configure jitsi-meet containers >> with docker-compose (mostly official docker.io/jitsi images, though >> we build our own jitsi-meet-web published under docker.io/opendevorg >> which applies our Etherpad integration patch): >> >> >> https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/jitsi-meet >> >> We're not making any stability or reusability guarantees on the >> Ansible orchestration (or our custom image which will hopefully >> disappear once https://github.com/jitsi/jitsi-meet/pull/5270 is >> accepted upstream), but like everything we run in OpenDev we publish >> it for the sake of transparency, in case anyone else wants to help >> us or take some ideas for their own efforts. >> -- >> Jeremy Stanley >> > > > -- > Chris Morgan > -Erik > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Thu May 14 18:30:30 2020 From: aj at suse.com (Andreas Jaeger) Date: Thu, 14 May 2020 20:30:30 +0200 Subject: [nova][gate] openstack-tox-docs job failure In-Reply-To: <867db980-53fe-8a27-6f83-34d8d41fd052@gmail.com> References: <867db980-53fe-8a27-6f83-34d8d41fd052@gmail.com> Message-ID: <26f77231-c8e4-5485-01da-6b0d436819a6@suse.com> On 14.05.20 03:36, melanie witt wrote: > Howdy all, > > There's a upper-constraints bump to openstackdocstheme 2.1.0 which has > recently merged [1] which is going to break the openstack-tox-docs job > for nova. openstackdocstheme 2.1.1 was released today to address that failure, constraints update is currently in the gate queue, Andreas > I've proposed a patch to fix it here: > > https://review.opendev.org/727898 > > if people could please take a look. 
> > Cheers, > -melanie > > [1] https://review.opendev.org/727850 > -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From cboylan at sapwetik.org Thu May 14 20:17:53 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 14 May 2020 13:17:53 -0700 Subject: [ops][infra] Meetpad (was: ops meetups team meeting 2020-5-5) In-Reply-To: References: <20200505164906.igthboyfjiureuf2@yuggoth.org> Message-ID: <829a54b3-8beb-4bf0-8003-01b746b1e17c@www.fastmail.com> On Thu, May 14, 2020, at 10:42 AM, Chris Morgan wrote: > We on the ops meetups team had a trial meeting on meetpad.opendev.org > this morning and for the most part it worked very well (detailed > feedback below). Speaking personally, I am very happy to see an open > source solution for video conferencing being adopted by the foundation > to some extent. I had and continue to have reservations about Zoom, but > at the end of the day no matter how well they respond to the security > and privacy concerns it will still be a proprietary solution and no > more true to the openstack tenets than slack is as a replacement for > irc. > > feedback > > - etherpad integration is cool but several of us found the window > seemed to disappear inexplicably It seems that when a call starts the etherpad starts "pinned" but then certain events can cause jitsi to go back to its normal "focus on the person talking" mode of operation. In the bottom right is a "3 dot" menu and from there you can open and close the "shared document" this allows you to toggle the etherpad manually. Even after toggling it directly it seems that jitsi wants to keep focus on speakers in some cases. 
I've been fiddling with it to try and understand that better and it seems like some of these things may help (though I can't say for sure): * Click in the document directly * Collapse the right hand column of mini webcams using the > in the bottom right > - colored highlighting of fragments on the etherpad showed up > overlapped for some but not all meeting members, obscuring some text I've noticed this too. Toggling authorship colors in etherpad's settings menu seems to correct this. > - background blurring seemed very heavy for some participants computers > and this possibly lead to some sessions locking up I believe this is specifically listed as an "experimental" feature in the menu. I expect this is why. Should probably avoid using it. > - it's not clear what named meetings persistence is, for example a > named meeting from yesterday is still shown today, but doesn't have the > password I applied yesterday > > My guess is some (all?) of this is just how Jitsi is right now. Few other things we've noticed: * Chrome/Chromium seem more reliable than Firefox * The jitsi logo watermark in the overlay can clip text in the etherpad. * If your webcam is operating in a resolution that your browser and/or jitsi don't like then you'll get a message about the webcam not satisfying required constraints and it will refuse to work. At least one person had this happen because their webcam resolution was tiny (100x100 ish pixels). I expect these issues will improve over time. If people know how to address these problems we're more than happy to help incorporate fixes into this deployment. > > The team (openstack ops meetups) is now talking about possibly hosting > a global ops meetup on this platform. How can we determine when and if > the infra for this is ready for it, and how many participants is a > reasonable cap? What about streaming, can we stream the whole thing > continuously to youtube? 
We are thinking that each topic would have a > small number of presenters/participants in the video conference itself, > but allow a larger group to see it on youtube and contribute via the > etherpad. Is this a reasonable plan? I think we are ready for people to use it understanding its a new service we are trying to run and there may be hiccups. In particular I don't think we've been able to do any serious scale testing yet. If you are using it and find those limits we'd love to hear about it. Also feedback like what you posted above is great too. Jitsi does have a component, jibri, which enables streaming (and recording). I don't think we've deployed any of that toolchain yet. It appears that a jibri process per recording is required and distributing jibri instances across servers is recommended for performance reasons. If there is interest in pushing patches to add a jibri instances I expect you'll find plenty of help. Zuul acts as our continuous deployment management system and all of the configuration for that is in the open at https://opendev.org/opendev/system-config. playbooks/roles/jitsi-meet/ is probably a good place to start looking. One drawback to the "few presenters, many participants" model for collaborative discussions is that you're preselecting who can get their voice heard. It might be best to find the limits of the existing tool before we design solutions for potential bottlenecks? I expect there will be limitations, we just don't know what they are yet so designing solutions is difficult. > > Thanks for doing this! > > Chris > > On Tue, May 5, 2020 at 12:57 PM Jeremy Stanley wrote: > > On 2020-05-05 12:06:42 -0400 (-0400), Chris Morgan wrote: > > [...] > > > we had a quick IRC meeting today, and also a trial run at an open > > > source based video conference meeting using jitsi via an instance > > > running on infra provided by Erik McCormick. This seems to be > > > promising. 
We'll look into trialling some ops related events > > > leveraging this. > > [...] > > > > It's probably been flying under the radar a bit so far, but the > > OpenDev community has put together an Etherpad-integrated Jitsi-Meet > > service at https://meetpad.opendev.org/ which you're free to try out > > as well. We'd love feedback and help tuning it. Also if you want to > > reuse anything we've done to set it up, the Ansible playbook we use > > is here: > > > > https://opendev.org/opendev/system-config/src/branch/master/playbooks/service-meetpad.yaml > > > > It utilizes this role to install and configure jitsi-meet containers > > with docker-compose (mostly official docker.io/jitsi images, though > > we build our own jitsi-meet-web published under docker.io/opendevorg > > which applies our Etherpad integration patch): > > > > https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/jitsi-meet > > > > We're not making any stability or reusability guarantees on the > > Ansible orchestration (or our custom image which will hopefully > > disappear once https://github.com/jitsi/jitsi-meet/pull/5270 is > > accepted upstream), but like everything we run in OpenDev we publish > > it for the sake of transparency, in case anyone else wants to help > > us or take some ideas for their own efforts. > > -- > > Jeremy Stanley > > > -- > Chris Morgan From fungi at yuggoth.org Thu May 14 20:47:37 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 May 2020 20:47:37 +0000 Subject: [ops][infra] Meetpad In-Reply-To: <829a54b3-8beb-4bf0-8003-01b746b1e17c@www.fastmail.com> References: <20200505164906.igthboyfjiureuf2@yuggoth.org> <829a54b3-8beb-4bf0-8003-01b746b1e17c@www.fastmail.com> Message-ID: <20200514204737.djq64kfbpkfciou2@yuggoth.org> On 2020-05-14 13:17:53 -0700 (-0700), Clark Boylan wrote: > On Thu, May 14, 2020, at 10:42 AM, Chris Morgan wrote: [...] 
> > - etherpad integration is cool but several of us found the window > > seemed to disappear inexplicably > > It seems that when a call starts the etherpad starts "pinned" but > then certain events can cause jitsi to go back to its normal > "focus on the person talking" mode of operation. In the bottom > right is a "3 dot" menu and from there you can open and close the > "shared document" this allows you to toggle the etherpad manually. > > Even after toggling it directly it seems that jitsi wants to keep > focus on speakers in some cases. I've been fiddling with it to try > and understand that better and it seems like some of these things > may help (though I can't say for sure): > > * Click in the document directly > > * Collapse the right hand column of mini webcams using the > in > the bottom right Even doing these, I found that at times it would switch back to an active speaker faster than I could click in the pad after redisplaying. > > - colored highlighting of fragments on the etherpad showed up > > overlapped for some but not all meeting members, obscuring some > > text > > I've noticed this too. Toggling authorship colors in etherpad's > settings menu seems to correct this. [...] Temporarily at least, though it does seem to recur. Also this is a local setting, it does not alter how other users see the pad (it's only changing the author colors setting for your view of the pad). Leaving author colors off may work around it, though at a loss of that feature of course. > * Chrome/Chromium seem more reliable than Firefox At least that's how it was for me when I tried it. I normally use FF (76 at present) and so have it configured for incognito mode always, all sorts of possible snooping avenues disabled in preferences, and several security/privacy-oriented extensions for blocking unwanted content and access. Things worked well for me in Chromium (81), but that may be because it's on default settings with no extensions. 
It's quite possible FireFox works just fine for this, as long as it's not configured by a paranoid nutcase like me. FF kept showing my camera footage to me, and seemed to be picking up my microphone and outputting to my speakers, but did not send or receive any video or audio streams over the network whatsoever. I'll likely stick with Chromium for meetpad use (and use it only for that), as it provides me a nice privacy buffer from all my other browsing by being an entirely separate application. > I think we are ready for people to use it understanding its a new > service we are trying to run and there may be hiccups. In > particular I don't think we've been able to do any serious scale > testing yet. If you are using it and find those limits we'd love > to hear about it. Also feedback like what you posted above is > great too. [...] I think Kendall Nelson is planning to use it as part of the community release celebration later today, so hopefully we'll get a bit more scale testing that way. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu May 14 21:01:26 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 May 2020 21:01:26 +0000 Subject: [ops][infra] Meetpad In-Reply-To: <20200514204737.djq64kfbpkfciou2@yuggoth.org> References: <20200505164906.igthboyfjiureuf2@yuggoth.org> <829a54b3-8beb-4bf0-8003-01b746b1e17c@www.fastmail.com> <20200514204737.djq64kfbpkfciou2@yuggoth.org> Message-ID: <20200514210126.hjdpbxvxsnh2olaf@yuggoth.org> On 2020-05-14 20:47:37 +0000 (+0000), Jeremy Stanley wrote: [...] > I think Kendall Nelson is planning to use it as part of the > community release celebration later today, so hopefully we'll get > a bit more scale testing that way. I guess that's actually tomorrow (Friday) at 20:00 UTC. I've lost all track of time. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From melwittt at gmail.com Thu May 14 21:07:19 2020 From: melwittt at gmail.com (melanie witt) Date: Thu, 14 May 2020 14:07:19 -0700 Subject: [nova][gate] openstack-tox-docs job failure In-Reply-To: <26f77231-c8e4-5485-01da-6b0d436819a6@suse.com> References: <867db980-53fe-8a27-6f83-34d8d41fd052@gmail.com> <26f77231-c8e4-5485-01da-6b0d436819a6@suse.com> Message-ID: On 5/14/20 11:30, Andreas Jaeger wrote: > On 14.05.20 03:36, melanie witt wrote: >> Howdy all, >> >> There's a upper-constraints bump to openstackdocstheme 2.1.0 which has >> recently merged [1] which is going to break the openstack-tox-docs job >> for nova. > > openstackdocstheme 2.1.1 was released today to address that failure, > constraints update is currently in the gate queue, Update: docs jobs are still failing because the app.config.latex_engine setting is still not set to 'xelatex' needed for unicode chars in our docs. I suspect this is because by the time the 'app.config.latex_engine' gets to the openstackdocstheme extension, it's already been defaulted to 'pdflatex' [2] and the logic will set it to 'xelatex' only if it's not been set. I've commented on the openstackdocstheme patch [3]. -melanie [2] https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-latex_engine [3] https://review.opendev.org/#/c/727992/1/openstackdocstheme/ext.py > Andreas > >> I've proposed a patch to fix it here: >> >> https://review.opendev.org/727898 >> >> if people could please take a look. 
>> >> Cheers, >> -melanie >> >> [1] https://review.opendev.org/727850 >> > > From allison at openstack.org Thu May 14 21:22:35 2020 From: allison at openstack.org (Allison Price) Date: Thu, 14 May 2020 16:22:35 -0500 Subject: OpenStack Ussuri Community Meeting Recordings & Presentation Decks Message-ID: <34B2E41F-774F-461C-937F-FFDBD842D76C@openstack.org> Hi everyone, Thanks to all who joined or created content and presented at the community meetings yesterday! There were quite a few project updates presented for the Ussuri release. If you missed the meetings, you can find the recordings in this playlist [1] and links to the presentations are also included in the video abstracts. We have also posted both meeting recordings on Tencent Cloud, so you can watch the first community meeting [2] or the second community meeting [3] there as well. If you have any trouble accessing these resources, please let me know. Cheers, Allison [1] https://www.youtube.com/playlist?list=PLKqaoAnDyfgpYADSiOfIVwgKb5zbL0GJE [2] https://v.qq.com/x/page/q0966o61rz9.html [3] https://v.qq.com/x/page/d09667otaf1.html Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Fri May 15 00:21:49 2020 From: melwittt at gmail.com (melanie witt) Date: Thu, 14 May 2020 17:21:49 -0700 Subject: [nova][gate] openstack-tox-pep8 job failing Message-ID: <41b74834-5b5f-7e65-dcf9-c39094875b92@gmail.com> Hey all, The openstack-tox-pep8 job is now failing too since hacking 3.1.0 was released [1], which brings in a newer flake8 version, which adds new checks to the code and we have a few failures. We have a change that would pin hacking >= 3.0.1 and < 3.1.0 [2] to prevent new checks from rolling in automatically, but it is stuck behind the failing openstack-tox-docs job [3]. This is unfortunately a chicken and egg situation. 
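The pin mentioned above reads as: any hacking release at or above 3.0.1 but below 3.1.0 (the release that pulls in flake8 3.8.0 and its new checks). For plain numeric versions like these, the comparison pip performs can be sketched with simple tuple ordering (a simplification; real resolution follows PEP 440 semantics via the packaging library):

```python
def satisfies_pin(version: str) -> bool:
    """Check a dotted numeric version against the hacking>=3.0.1,<3.1.0 pin."""
    v = tuple(int(part) for part in version.split("."))
    return (3, 0, 1) <= v < (3, 1, 0)

assert satisfies_pin("3.0.1")       # floor is allowed
assert satisfies_pin("3.0.2")       # later 3.0.x point releases still match
assert not satisfies_pin("3.1.0")   # the flake8 3.8.0 bump is excluded
assert not satisfies_pin("3.0.0")   # below the floor
```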
The proposed nova patch for the docs job, as mentioned from my other ML post: https://review.opendev.org/727898 fails the pep8 job and the proposed nova patch for the pep8 job [2] fails the docs job. I'm not sure how people will want to proceed here because a nova-side-only gate fix would involve squashing ^ and [2]. If we can get another fix or revert to openstackdocstheme to take care of the docs job, we'd then only need [2] on the nova side to unblock the nova gate. Cheers, -melanie [1] https://review.opendev.org/728016 [2] https://review.opendev.org/727347 [3] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014876.html From gmann at ghanshyammann.com Fri May 15 00:53:11 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 14 May 2020 19:53:11 -0500 Subject: [all] pep8 job failing due to flake8 3.8.0 In-Reply-To: <1720c06c1fe.f968123d66058.4634045955357610785@ghanshyammann.com> References: <17209aaba67.c99f156653792.6381316587675976541@ghanshyammann.com> <17209e979b9.1002742a556789.8183786768349286689@ghanshyammann.com> <9658a301-824e-4271-4ab1-02b742cc4dcc@gmail.com> <1720a69856b.1110a223e60913.4505198435459269895@ghanshyammann.com> <1720c06c1fe.f968123d66058.4634045955357610785@ghanshyammann.com> Message-ID: <17215d2200a.bf4c8e8741386.7269492125089923305@ghanshyammann.com> ---- On Tue, 12 May 2020 22:14:28 -0500 Ghanshyam Mann wrote ---- > ---- On Tue, 12 May 2020 14:43:07 -0500 Ghanshyam Mann wrote ---- > > ---- On Tue, 12 May 2020 13:07:43 -0500 Brian Haley wrote ---- > > > On 5/12/20 1:23 PM, Ghanshyam Mann wrote: > > > > > > > > > > > > This is also failing on some stable branches that had not moved to > > > > > > hacking 3.0 yet. In this case, it may be better to add a flake8 cap to > > > > > > the repo's test-requirements.txt file rather than backporting a major > > > > > > bump in hacking and dealing with the need to make a lot of code changes. 
> > > > > > Here is an example of that approach: > > > > > > https://review.opendev.org/#/c/727265/ > > > > > I found in the neutron stable/ussuri repo that capping flake8<3.8.0 > > > > > didn't work, but capping pycodestyle did. So that's another option. > > > > > -pycodestyle>=2.0.0 # MIT > > > > > +pycodestyle>=2.0.0,<2.6.0 # MIT > > > > > https://review.opendev.org/#/c/727274/ > > > > Adding what we discussed on IRC about the workable solution. > > > > flake8 2.6.2, which is pulled in by the older hacking in stable/train and earlier, does not have a > > cap for pycodestyle, so if test-requirements.txt has "pycodestyle>***" then you need > > to cap it explicitly as Brian proposed in https://review.opendev.org/#/c/727274/ > > otherwise flake8 2.6.2 will pull in the new pycodestyle and break. > > If your project is using the flake8-import-order plugin then we also need to cap pycodestyle explicitly. > flake8-import-order does not cap pycodestyle and pulls the latest pycodestyle; I have proposed a PR > there at least to be safe for future versions. > > - https://github.com/PyCQA/flake8-import-order/pull/172 hacking 3.1.0 is released now, which will bring in the new flake8 3.8.0 checks. So if the pep8 job is failing on your repo, merge the already proposed patch which caps hacking >=3.0.1 <3.1.0. I have proposed the patches and they should be ready to merge. And later you can fix the code and adopt the new hacking 3.1.0 whenever you want, but that is not urgent. -gmann > > -gmann > > > > -gmann > > > > > > > > > > I will say remove the pycodestyle entry from the neutron test-requirements and let hacking, > > > > via its flake8 cap, handle the compatible pycodestyle. Otherwise we end up maintaining and > > > > fixing it on the project side in future.
> > > > > > So the problem in this case was having both of these in > > > test-requirements.txt: > > > > > > flake8>=3.6.0,<3.8.0 # MIT > > > pycodestyle>=2.0.0 # MIT > > > > > > Test versions were: > > > > > > flake8==3.7.9 > > > pycodestyle==2.6.0 > > > > > > Removing the pycodestyle line altogether worked however, it pulled > > > pycodestye 3.5.0 then. > > > > > > -Brian > > > > > > > > > > > > From melwittt at gmail.com Fri May 15 03:40:29 2020 From: melwittt at gmail.com (melanie witt) Date: Thu, 14 May 2020 20:40:29 -0700 Subject: [nova][gate] openstack-tox-pep8 job failing In-Reply-To: <41b74834-5b5f-7e65-dcf9-c39094875b92@gmail.com> References: <41b74834-5b5f-7e65-dcf9-c39094875b92@gmail.com> Message-ID: <627ae9fb-6194-f40a-505c-303ca04dff19@gmail.com> On 5/14/20 17:21, melanie witt wrote: > If we can get another fix or revert to openstackdocstheme to take care > of the docs job, we'd then only need [2] on the nova side to unblock the > nova gate. A roll back to openstackdocstheme 2.0.2 in upper-constraints has been proposed to unblock the nova and cinder gates in the meantime: https://review.opendev.org/728335 -melanie > [1] https://review.opendev.org/728016 > [2] https://review.opendev.org/727347 > [3] > http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014876.html From melwittt at gmail.com Fri May 15 03:45:36 2020 From: melwittt at gmail.com (melanie witt) Date: Thu, 14 May 2020 20:45:36 -0700 Subject: [nova][gate] openstack-tox-pep8 job failing In-Reply-To: <627ae9fb-6194-f40a-505c-303ca04dff19@gmail.com> References: <41b74834-5b5f-7e65-dcf9-c39094875b92@gmail.com> <627ae9fb-6194-f40a-505c-303ca04dff19@gmail.com> Message-ID: <78db899d-3904-f09d-0a72-f842cb4b4654@gmail.com> On 5/14/20 20:40, melanie witt wrote: > On 5/14/20 17:21, melanie witt wrote: >> If we can get another fix or revert to openstackdocstheme to take care >> of the docs job, we'd then only need [2] on the nova side to unblock >> the nova gate. 
> > A roll back to openstackdocstheme 2.0.2 in upper-constraints has been > proposed to unblock the nova and cinder gates in the meantime: > > https://review.opendev.org/728335 Once ^ merges, our docs job will start passing. Then someone will need to recheck: https://review.opendev.org/727347 to fix our pep8 job. -melanie >> [1] https://review.opendev.org/728016 >> [2] https://review.opendev.org/727347 >> [3] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014876.html >> > From mark.kirkwood at catalyst.net.nz Fri May 15 05:34:18 2020 From: mark.kirkwood at catalyst.net.nz (Mark Kirkwood) Date: Fri, 15 May 2020 17:34:18 +1200 Subject: [swift] Adding disks - one by one or all lightly weighted? In-Reply-To: <0dd0dedde4ba0017c5fb3ee0b07308046ec397a6.camel@swiftstack.com> References: <30bf2fd4-bedb-e73c-874f-d4a3efc68b13@catalyst.net.nz> <0dd0dedde4ba0017c5fb3ee0b07308046ec397a6.camel@swiftstack.com> Message-ID: <48f39267-cb79-85ef-0032-d674186ba268@catalyst.net.nz> On 25/01/20 10:23 am, Tim Burke wrote: > On Thu, 2020-01-23 at 12:34 +1300, Mark Kirkwood wrote: >> Hi, >> >> We are wanting to increase the number of disks in each of our >> storage >> nodes - from 4 to 12. >> >> I'm wondering whether it is better to: >> >> 1/ Add 1st new disk (with a reduced weight)...increase the weight >> until >> full, then repeat for next disk etc >> >> 2/ Add 'em all with a (much i.e 1/8 of that in 1/ ) reduced >> weight...increase the weights until done >> >> Thoughts? >> >> regards >> >> Mark >> >> > Hi Mark, > > I'd go with option 2 -- the quicker you can get all of the new disks > helping with load, the better. Gradual weight adjustments seem like a > good idea; they should help keep your replication traffic reasonable. > Note that as long as you're waiting a full replication cycle between > rebalances, though, swift should only be moving a single replica at a > time, even if you added the new devices at full weight. 
> > Of course, tripling capacity like this (assuming that the new disks are > the same size as the existing ones) tends to take a while. You should > probably familiarize yourself with the emergency replication options > and consider enabling some of them until your rings reflect the new > topology; see > > * > https://github.com/openstack/swift/blob/2.23.0/etc/object-server.conf-sample#L290-L298 > * > https://github.com/openstack/swift/blob/2.23.0/etc/object-server.conf-sample#L300-L307 > and > * > https://github.com/openstack/swift/blob/2.23.0/etc/object-server.conf-sample#L353-L364 > > These can be really useful to speed up rebalances, though swift's > durability guarantees take a bit of a hit -- so turn them back off once > you've had a cycle or two with the drives at full weight! If the > existing drives are full or nearly so (which IME tends to be the case > when there's a large capacity increase), those may be necessary to get > the system back to a state where it can make good progress. > Thanks Tim! What I ended up doing (and it is still in progress) is:

- initially adding a single disk at 50%, and waiting until a replication cycle had completed (pretty much just assessing impact), then I quickly changed to
- adding 3 disks (1 per region) at 16.66% each, as this worked much better
- as more disks were added I was able to increase the addition % (50% now) as the bottleneck seemed to be read rate from the source (existing) disks

The disks are enterprise level SATA, plugged into a raid controller with a battery cache. We are not using the raid facility, only its battery to accelerate writes. So far I'm not maxing out the write ability on the target (new) disks. So I'm hoping to keep increasing the addition % to the point where I can add 3 disks @ 100% each and still have a replication cycle complete before I add the next lot! 
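The staged approach described above can be scripted. The sketch below only generates the `swift-ring-builder` command lines for review rather than running them; the builder file name, device search strings, and weights are all hypothetical, and the exact `swift-ring-builder` syntax should be checked against your deployment before use.

```python
# Sketch: generate swift-ring-builder commands for a staged weight
# increase. Device strings, ports and weights are hypothetical; review
# the printed commands before running them, and wait for a full
# replication cycle between rebalances.

TARGET_WEIGHT = 4000          # final weight for each new device
STAGES = [0.1666, 0.50, 1.0]  # fraction of the target weight per stage

new_devices = [               # one new disk per region, as in the thread
    "r1z1-192.0.2.11:6200/sdd",
    "r2z1-192.0.2.12:6200/sdd",
    "r3z1-192.0.2.13:6200/sdd",
]

def staged_commands(devices, target, stages):
    """Yield command lines for each stage, ending each stage with a rebalance."""
    first = True
    for frac in stages:
        weight = round(target * frac)
        for dev in devices:
            if first:
                yield f"swift-ring-builder object.builder add {dev} {weight}"
            else:
                # set_weight matches on the device search string
                yield f"swift-ring-builder object.builder set_weight {dev} {weight}"
        yield "swift-ring-builder object.builder rebalance"
        first = False

for cmd in staged_commands(new_devices, TARGET_WEIGHT, STAGES):
    print(cmd)
```

After the final stage has rebalanced and a replication cycle has completed, the new devices carry their full weight and any emergency replication options can be switched back off.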
regards Mark From zhangbailin at inspur.com Fri May 15 09:40:45 2020 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Fri, 15 May 2020 09:40:45 +0000 Subject: Reply: [sent via lists.openstack.org] Re: [nova][gate] openstack-tox-pep8 job failing In-Reply-To: <78db899d-3904-f09d-0a72-f842cb4b4654@gmail.com> References: <78db899d-3904-f09d-0a72-f842cb4b4654@gmail.com> Message-ID: <2cd7cb398b8d443d9ae860a8423f342a@inspur.com> https://review.opendev.org/#/c/727589/ bumped hacking min version to 3.1.0. > Subject: [sent via lists.openstack.org] Re: [nova][gate] openstack-tox-pep8 job failing > > On 5/14/20 20:40, melanie witt wrote: > > On 5/14/20 17:21, melanie witt wrote: > >> If we can get another fix or revert to openstackdocstheme to take > >> care of the docs job, we'd then only need [2] on the nova side to > >> unblock the nova gate. > > > > A roll back to openstackdocstheme 2.0.2 in upper-constraints has been > > proposed to unblock the nova and cinder gates in the meantime: > > > > https://review.opendev.org/728335 > > Once ^ merges, our docs job will start passing. > > Then someone will need to recheck: > > https://review.opendev.org/727347 > > to fix our pep8 job. 
> > -melanie > > >> [1] https://review.opendev.org/728016 [2] > >> https://review.opendev.org/727347 [3] > >> http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014876.html > >> > > > From yamamoto at midokura.com Fri May 15 11:54:41 2020 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Fri, 15 May 2020 20:54:41 +0900 Subject: [neutron] 15-05-2020 drivers meeting agenda In-Reply-To: <20200514130045.zfunjh3qeymiwsix@skaplons-mac> References: <20200514130045.zfunjh3qeymiwsix@skaplons-mac> Message-ID: hi, On Thu, May 14, 2020 at 10:01 PM Slawek Kaplonski wrote: > > Hi, > > For tomorrow's drivers meeting we have only one RFE to discuss: > > * https://bugs.launchpad.net/neutron/+bug/1877301 - L3 Router support ndp proxy > It was initially triaged by Brian (Thank You) and discussed already on L3 > subteam meeting. thank you for the agenda. i commented a question on LP. i don't think i will attend the meeting this evening. i'm exhausted by unrelated things today. sorry! > > See You all tomorrow and have a nice day :) > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > From smooney at redhat.com Fri May 15 12:13:42 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 15 May 2020 13:13:42 +0100 Subject: [ops][infra] Meetpad (was: ops meetups team meeting 2020-5-5) In-Reply-To: References: <20200505164906.igthboyfjiureuf2@yuggoth.org> Message-ID: <67a8c6820f1cb7bf32c13c2b37b892ee0cbdb4d1.camel@redhat.com> On Thu, 2020-05-14 at 13:59 -0400, Erik McCormick wrote: > > - it's not clear what named meetings persistence is, for example a named > > meeting from yesterday is still shown today, but doesn't have the password > > I applied yesterday > > just as a general practice we probably should not be using passwords for our meetings. we practice open development and design across openstack and opendev projects in general, so unless jitsi requires a password to have a meeting we should probably not set one. 
if we do set one we should list it publicly in the corresponding etherpad, wiki, meeting email or wherever the meeting is announced/tracked so people can join freely. From root.mch at gmail.com Fri May 15 14:25:33 2020 From: root.mch at gmail.com (İzzettin Erdem) Date: Fri, 15 May 2020 17:25:33 +0300 Subject: [Heat] Heat Stack Create Authorization Failed Message-ID: Hello everyone, When I launch a cluster on Sahara, it gives a heat stack authorization error. I also use the Murano service and it is working with heat. I could not find the solution and I have discussed this error with the Sahara-devel team. They are also searching for this. Could you help me, please? Both the error log of Sahara and Heat are below. Sahara-engine: http://paste.openstack.org/show/793671/ Heat-engine: http://paste.openstack.org/show/793673/ Thanks. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Fri May 15 14:33:13 2020 From: neil at tigera.io (Neil Jerram) Date: Fri, 15 May 2020 15:33:13 +0100 Subject: [neutron] Recent incompatible change in stable/rocky branch Message-ID: I'm sorry, but this is a moan. This merge - https://opendev.org/openstack/neutron/commit/a6fb2faaa5d46656db9085ad6bcfc65ded807871 - to the Neutron stable/rocky branch on April 23rd, has broken my team's Neutron plugin, by requiring 3rd party LinuxInterfaceDriver subclasses to take a new 'link_up' argument in their 'plug_new' method. IMO, it should have been obvious to folk proposing or reviewing this, that it would cause breakage. Does Neutron have a different understanding of "stable" than I do? Or do plugins other than OVN not matter anymore? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Fri May 15 15:17:31 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 15 May 2020 17:17:31 +0200 Subject: [neutron] Recent incompatible change in stable/rocky branch In-Reply-To: References: Message-ID: <20200515151731.tfmwi5qtop4t6f67@skaplons-mac> Hi, That is my fault as I proposed this backport actually. I know it was a mistake and we should do it a bit differently in stable branches to avoid breaking third party drivers. Maybe we should also think about moving such base driver classes to neutron-lib to avoid such issues in the future. Once again sorry for that. I will be more careful in the future. On Fri, May 15, 2020 at 03:33:13PM +0100, Neil Jerram wrote: > I'm sorry, but this is a moan. > > This merge - > https://opendev.org/openstack/neutron/commit/a6fb2faaa5d46656db9085ad6bcfc65ded807871 > - > to the Neutron stable/rocky branch on April 23rd, has broken my team's > Neutron plugin, by requiring 3rd party LinuxInterfaceDriver subclasses to > take a new 'link_up' argument in their 'plug_new' method. > > IMO, it should have been obvious to folk proposing or reviewing this, that > it would cause breakage. > > Does Neutron have a different understanding of "stable" than I do? Or do > plugins other than OVN not matter anymore? 
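One defensive pattern for the breakage discussed in this thread is for out-of-tree drivers to accept unknown keywords, so an added `link_up` parameter is tolerated whether or not the base class passes it. The classes below are simplified stand-ins, not the real `neutron.agent.linux.interface` code; the real `plug_new` signature should be checked against the neutron branch you target.

```python
# Sketch: a third-party interface driver that tolerates a newly added
# 'link_up' parameter. OldBase/NewBase are minimal stand-ins for
# neutron's LinuxInterfaceDriver before and after the change; they are
# NOT the actual neutron implementation.

class OldBase:
    """Pre-change base: plug() forwards no link_up keyword."""
    def plug(self, network_id, port_id, device_name, mac_address, **kwargs):
        self.plug_new(network_id, port_id, device_name, mac_address, **kwargs)

class NewBase(OldBase):
    """Post-change base: plug() passes an extra link_up keyword."""
    def plug(self, network_id, port_id, device_name, mac_address, **kwargs):
        kwargs.setdefault("link_up", True)
        self.plug_new(network_id, port_id, device_name, mac_address, **kwargs)

class ThirdPartyDriver(NewBase):
    """Accepting **kwargs keeps the driver working with both bases."""
    def __init__(self):
        self.calls = []

    def plug_new(self, network_id, port_id, device_name, mac_address,
                 bridge=None, namespace=None, prefix=None, mtu=None, **kwargs):
        # link_up (if present) arrives via kwargs on new bases and is
        # simply absent on old ones, instead of raising TypeError.
        self.calls.append((device_name, kwargs.get("link_up")))

driver = ThirdPartyDriver()
driver.plug("net-1", "port-1", "tap123", "fa:16:3e:00:00:01")
print(driver.calls)  # [('tap123', True)]
```

The `**kwargs` catch-all trades early detection of typos for forward compatibility, so it is a stopgap; the cleaner long-term fix is the one suggested in the thread, stabilizing the base class interface (e.g. in neutron-lib).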
-- Slawek Kaplonski Senior software engineer Red Hat From melwittt at gmail.com Fri May 15 15:26:08 2020 From: melwittt at gmail.com (melanie witt) Date: Fri, 15 May 2020 08:26:08 -0700 Subject: [nova][gate] openstack-tox-pep8 job failing In-Reply-To: <78db899d-3904-f09d-0a72-f842cb4b4654@gmail.com> References: <41b74834-5b5f-7e65-dcf9-c39094875b92@gmail.com> <627ae9fb-6194-f40a-505c-303ca04dff19@gmail.com> <78db899d-3904-f09d-0a72-f842cb4b4654@gmail.com> Message-ID: <24086ee7-1485-1beb-2864-083427c34bb6@gmail.com> On 5/14/20 20:45, melanie witt wrote: > On 5/14/20 20:40, melanie witt wrote: >> On 5/14/20 17:21, melanie witt wrote: >>> If we can get another fix or revert to openstackdocstheme to take >>> care of the docs job, we'd then only need [2] on the nova side to >>> unblock the nova gate. >> >> A roll back to openstackdocstheme 2.0.2 in upper-constraints has been >> proposed to unblock the nova and cinder gates in the meantime: >> >> https://review.opendev.org/728335 > > Once ^ merges, our docs job will start passing. > > Then someone will need to recheck: > > https://review.opendev.org/727347 > > to fix our pep8 job. Both of these changes have since merged and the nova gate should be unblocked now. -melanie >>> [1] https://review.opendev.org/728016 >>> [2] https://review.opendev.org/727347 >>> [3] >>> http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014876.html >>> >> > From emiller at genesishosting.com Fri May 15 17:01:46 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Fri, 15 May 2020 12:01:46 -0500 Subject: [kolla] Ceph transition plan Message-ID: <046E9C0290DD9149B106B72FC9156BEA04771F7E@gmsxchsvr01.thecreation.com> Hi, I noticed that in Kolla's master branch that Ceph has been removed, as expected for a deprecated component (I'm sad, though, since it works so well!). In the release notes, it is noted that "Prior to this we will ensure a migration path to another tool such as Ceph Ansible is available." 
Is this migration path documented since it appears that the latest release will not have the Kolla Ceph container available? Thanks! Eric From neil at tigera.io Fri May 15 17:20:07 2020 From: neil at tigera.io (Neil Jerram) Date: Fri, 15 May 2020 18:20:07 +0100 Subject: [neutron] Recent incompatible change in stable/rocky branch In-Reply-To: <20200515151731.tfmwi5qtop4t6f67@skaplons-mac> References: <20200515151731.tfmwi5qtop4t6f67@skaplons-mac> Message-ID: Thanks Slawek. Are you planning to leave this change in place? I can update my plugin's code, but that still leaves the likelihood of breakage if - there's a new Rocky patch release - a deployer is using an out-of-tree plugin with its own interface driver, and upgrades to the Rocky patch release - either they don't also upgrade their plugin code, or there isn't a plugin update available because the plugin author hasn't noticed this problem yet. Do you know if there will be another Rocky patch release, and if so when? Best wishes, Neil On Fri, May 15, 2020 at 4:17 PM Slawek Kaplonski wrote: > Hi, > > That is my fault as I proposed this backport actually. > I know it was mistake and we should do it a bit different in stable > branches to > avoid breaking third party drivers really. > Maybe we should also think about moving such base driver classes to > neutron-lib > to avoid such issues in the future. > > Once again sorry for that. I will be more careful in the future. > > On Fri, May 15, 2020 at 03:33:13PM +0100, Neil Jerram wrote: > > I'm sorry, but this is a moan. > > > > This merge - > > > https://opendev.org/openstack/neutron/commit/a6fb2faaa5d46656db9085ad6bcfc65ded807871 > > - > > to the Neutron stable/rocky branch on April 23rd, has broken my team's > > Neutron plugin, by requiring 3rd party LinuxInterfaceDriver subclasses to > > take a new 'link_up' argument in their 'plug_new' method. > > > > IMO, it should have been obvious to folk proposing or reviewing this, > that > > it would cause breakage. 
> > > > Does Neutron have a different understanding of "stable" than I do? Or do > > plugins other than OVN not matter anymore? > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpeacock at redhat.com Fri May 15 17:22:07 2020 From: dpeacock at redhat.com (David Peacock) Date: Fri, 15 May 2020 13:22:07 -0400 Subject: [Heat] Heat Stack Create Authorization Failed In-Reply-To: References: Message-ID: Hi izzettin, That seems pretty fundamental; are you sure you're trying to authenticate this with the correct user? I'd be checking keystone configuration and logs to see what's going on. Thanks, David On Fri, May 15, 2020 at 10:27 AM İzzettin Erdem wrote: > Hello everyone, > > When I launch a cluster on Sahara, it gives a heat stack authorization > error. I also use Murano service and it is working with heat. I could not > find the solution and I discuss this error with the Sahara-devel team. They > are also searching for this. Could you help me, please? > > Both the error log of Sahara and Heat are below. > > Sahara-engine: > http://paste.openstack.org/show/793671/ > > Heat-engine: > http://paste.openstack.org/show/793673/ > > Thanks. Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri May 15 17:42:55 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 15 May 2020 19:42:55 +0200 Subject: [neutron] Recent incompatible change in stable/rocky branch In-Reply-To: References: <20200515151731.tfmwi5qtop4t6f67@skaplons-mac> Message-ID: <20200515174255.4x5es6msr4xgsajo@skaplons-mac> Hi, On Fri, May 15, 2020 at 06:20:07PM +0100, Neil Jerram wrote: > Thanks Slawek. Are you planning to leave this change in place? 
I can > update my plugin's code, but that still leaves the likelihood of breakage if > - there's a new Rocky patch release > - a deployer is using an out-of-tree plugin with its own interface driver, > and upgrades to the Rocky patch release > - either they don't also upgrade their plugin code, or there isn't a plugin > update available because the plugin author hasn't noticed this problem yet. Can You maybe open Launchpad bug for that? It will be the same issue for all other stable branches like Stein or Train so we should fix it there too. > > Do you know if there will be another Rocky patch release, and if so when? Rocky is in EM phase now so we will not release it anymore. > > Best wishes, > Neil > > > On Fri, May 15, 2020 at 4:17 PM Slawek Kaplonski > wrote: > > > Hi, > > > > That is my fault as I proposed this backport actually. > > I know it was mistake and we should do it a bit different in stable > > branches to > > avoid breaking third party drivers really. > > Maybe we should also think about moving such base driver classes to > > neutron-lib > > to avoid such issues in the future. > > > > Once again sorry for that. I will be more careful in the future. > > > > On Fri, May 15, 2020 at 03:33:13PM +0100, Neil Jerram wrote: > > > I'm sorry, but this is a moan. > > > > > > This merge - > > > > > https://opendev.org/openstack/neutron/commit/a6fb2faaa5d46656db9085ad6bcfc65ded807871 > > > - > > > to the Neutron stable/rocky branch on April 23rd, has broken my team's > > > Neutron plugin, by requiring 3rd party LinuxInterfaceDriver subclasses to > > > take a new 'link_up' argument in their 'plug_new' method. > > > > > > IMO, it should have been obvious to folk proposing or reviewing this, > > that > > > it would cause breakage. > > > > > > Does Neutron have a different understanding of "stable" than I do? Or do > > > plugins other than OVN not matter anymore? 
> > > > -- > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > -- Slawek Kaplonski Senior software engineer Red Hat From kennelson11 at gmail.com Fri May 15 19:50:42 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 15 May 2020 12:50:42 -0700 Subject: [all] Virtual Ussuri Celebration In-Reply-To: References: Message-ID: See you all in ten minutes! -Kendall (diablo_rojo) On Fri, May 8, 2020 at 5:13 PM Kendall Nelson wrote: > Hello Everyone! > > I wanted to invite you all to a virtual celebration of the release! Next > Friday, May 15th at 20:00 UTC, I invite you to join me in celebrating :) > > The purpose of this meeting will be two fold: > > 1. To celebrate all that our community has accomplished over the last > release since we can't get together in person in June. I was thinking with > trivia and a baking contest (I was going to attempt a cake in the shape of > the OpenStack logo or maybe the actual Ussuri logo) :) There was also a > request for piano karaoke which is also still on the table.. > > 2. To test out the OpenDev team's meetpad instance and work out kinks so > that it can be vetted for PTG use. > > Here is the room we will be testing: > https://meetpad.opendev.org/virtual-ussuri-celebration > > > Worst case scenario, I'll share my zoom. > > Can't wait to see you all there (and what you've baked)! > > -Kendall (diablo_rojo) > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Fri May 15 20:00:44 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 15 May 2020 20:00:44 +0000 (UTC) Subject: [all][InteropWG] Request to PTLs for highligting Core Feature add-remove References: <1713503644.1008877.1589572844084.ref@mail.yahoo.com> Message-ID: <1713503644.1008877.1589572844084@mail.yahoo.com> Hi all, Interop WG needs patches to complete the 2020.06 Ussuri guidelines. 
PTL or team members of:

Core
- Nova
- Neutron
- Cinder
- Keystone
- Glance
- Swift

add-on
- Heat - Orchestration
- Designate - DNS

potential for Victoria add-ons
- Ironic
- Zun
- Senlin
- Kolla & Kuryr
- Please suggest others

Open Infra (Potential Bare Metal & Container APIs)
- StarlingX
- Airship
- Kata

We are reviewing your release notes, but if you can advise us on non-admin APIs for Objects that you can qualify as core for the current or a future release, please send an Advisory with your module name. e.g. Nova: Advisory - Cell API is a potential candidate for core. etc. Alternatively, visit the TODO plan on the etherpad for the interop call next week: https://etherpad.opendev.org/p/interop Validate your APIs on the community site: https://refstack.openstack.org/#/community_results Add your items for next week's call on Fri May 22nd at 10 AM PDT. Plus if you want to attend and present in the PTG InteropWG session, please add your topics and +1 those you would like to see on June 1st 6-8 AM PDT: https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 Thanks on behalf of Chair - InteropWG Prakash, Vice Chair - Mark T Voelker Sent from Yahoo Mail on Android -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmendiza at redhat.com Fri May 15 20:03:00 2020 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Fri, 15 May 2020 15:03:00 -0500 Subject: [barbican] Nominating Moisés Guimarães for barbican-core Message-ID: <6cfff326-0f33-7ab7-b537-c1f26f2b08c0@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 I would like to nominate Moisés Guimarães to be added to the barbican-core team. Moisés has been very helpful in reviewing barbican patches, and has done an excellent job of helping maintain the castellan library. If there are no objections, we will officially add Moisés to the team next week. 
Thanks, Douglas Mendizábal Barbican PTL -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEwcapj5oGTj2zd3XogB6WFOq/OrcFAl6+9XMACgkQgB6WFOq/ OrdzzxAAgxxymvuA+yGDuitEcjqwlggPJBYGb5PlHqqTkXcrS6IEcneG1eEJ/Rp2 JovTMN5sv3PwhtSMxjEmyrBVR9g6MKbQoQvDk1p77fFIL9ofIFV3xKOpZutnb3ig Y2/gSMWNDfuz7DPpNofUsyzvK5eBNWC8aBbpSeN0R4qLAlyOPFe6ftOAI1RvIX4W s6CwOiMQGGGNUK8TZgvG9BOOJorgMZ+maq4tFRN8P6bLA6q7kcsUob/LBIwCpoEF xdl9EO/IaRM4kMP8y/quIzDnBJV9Im4t8WrJ5sYNPnpfS+0yXseuCACAMYWSNp63 Zf9oRyIuv3Z/+z6qjcpkct/xhijdtB+Sc0MpmGv43RSoEHkYAGU6ue2nlZkVIow9 GO5Itq38WbORBIFwlGK+J+u40ikY+EjldzIwsxoBZDvuNMsGltpVclAkyW2VUwL3 HOyKPCm0dLiVz3qYo2dTH2eC2cdSbGxF91gckke1m5o69RDK9vYi6YQxtO0ms7Em yZoVQ6OnA4KCZUEd4sby7jWm/1ueXZq05BQgnB1tXbdMa6hDp6CwVKSAfH0fHS6D 5dCuo1EoDWxCTsh3lH0wogeODcCBReNTNCWEy/ia19aOO4cXrlpLznP6GONENwEo zbaPfwnmdw5Q1ARmLcEpgsbVF+mIymVkqGpNZ8DFA/wR+mQ/fkU= =Syoj -----END PGP SIGNATURE----- From Arkady.Kanevsky at dell.com Fri May 15 21:02:10 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Fri, 15 May 2020 21:02:10 +0000 Subject: [all] Virtual Ussuri Celebration In-Reply-To: References: Message-ID: <98bc91eade084507bfe57202c3446e78@AUSX13MPS308.AMER.DELL.COM> Meetpad did not worked for me From: Kendall Nelson Sent: Friday, May 15, 2020 2:51 PM To: OpenStack Discuss Subject: Re: [all] Virtual Ussuri Celebration [EXTERNAL EMAIL] See you all in ten minutes! -Kendall (diablo_rojo) On Fri, May 8, 2020 at 5:13 PM Kendall Nelson > wrote: Hello Everyone! I wanted to invite you all to a virtual celebration of the release! Next Friday, May 15th at 20:00 UTC, I invite you to join me in celebrating :) The purpose of this meeting will be two fold: 1. To celebrate all that our community has accomplished over the last release since we can't get together in person in June. 
I was thinking with trivia and a baking contest (I was going to attempt a cake in the shape of the OpenStack logo or maybe the actual Ussuri logo) :) There was also a request for piano karaoke which is also still on the table.. 2. To test out the OpenDev team's meetpad instance and work out kinks so that it can be vetted for PTG use. Here is the room we will be testing: https://meetpad.opendev.org/virtual-ussuri-celebration Worst case scenario, I'll share my zoom. Can't wait to see you all there (and what you've baked)! -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri May 15 21:02:45 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 15 May 2020 14:02:45 -0700 Subject: [all] Virtual Ussuri Celebration In-Reply-To: <98bc91eade084507bfe57202c3446e78@AUSX13MPS308.AMER.DELL.COM> References: <98bc91eade084507bfe57202c3446e78@AUSX13MPS308.AMER.DELL.COM> Message-ID: What problem did you have? Did you get an error? What browser did you use? -Kendall (diablo_rojo) On Fri, May 15, 2020 at 2:02 PM wrote: > Meetpad did not worked for me > > > > *From:* Kendall Nelson > *Sent:* Friday, May 15, 2020 2:51 PM > *To:* OpenStack Discuss > *Subject:* Re: [all] Virtual Ussuri Celebration > > > > [EXTERNAL EMAIL] > > See you all in ten minutes! > > > > -Kendall (diablo_rojo) > > > > On Fri, May 8, 2020 at 5:13 PM Kendall Nelson > wrote: > > Hello Everyone! > > > > I wanted to invite you all to a virtual celebration of the release! Next > Friday, May 15th at 20:00 UTC, I invite you to join me in celebrating :) > > > > The purpose of this meeting will be two fold: > > > > 1. To celebrate all that our community has accomplished over the last > release since we can't get together in person in June. 
I was thinking with > trivia and a baking contest (I was going to attempt a cake in the shape of > the OpenStack logo or maybe the actual Ussuri logo) :) There was also a > request for piano karaoke which is also still on the table.. > > > > 2. To test out the OpenDev team's meetpad instance and work out kinks so > that it can be vetted for PTG use. > > > > Here is the room we will be testing: > https://meetpad.opendev.org/virtual-ussuri-celebration > > > > Worst case scenario, I'll share my zoom. > > > > Can't wait to see you all there (and what you've baked)! > > > > -Kendall (diablo_rojo) > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Fri May 15 21:05:01 2020 From: alee at redhat.com (Ade Lee) Date: Fri, 15 May 2020 17:05:01 -0400 Subject: [barbican] Nominating =?ISO-8859-1?Q?Mois=E9s?= =?ISO-8859-1?Q?_Guimar=E3es?= for barbican-core In-Reply-To: <6cfff326-0f33-7ab7-b537-c1f26f2b08c0@redhat.com> References: <6cfff326-0f33-7ab7-b537-c1f26f2b08c0@redhat.com> Message-ID: <6b7c2f31383706149c8736ae5b9b8a772fd760eb.camel@redhat.com> +1 On Fri, 2020-05-15 at 15:03 -0500, Douglas Mendizabal wrote: > I would like to nominate Moisés Guimarães to be added to the > barbican-core team. Moisés has been very helpful in helping us > review > barbican patches, and has done an excellent job of helping maintain > the castellan library. > > If there are no objections, we will officially add Moisés to the team > next week. 
> > Thanks, > Douglas Mendizábal > Barbican PTL > From cboylan at sapwetik.org Fri May 15 21:12:51 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 15 May 2020 14:12:51 -0700 Subject: [ops][infra] Meetpad (was: ops meetups team meeting 2020-5-5) In-Reply-To: <829a54b3-8beb-4bf0-8003-01b746b1e17c@www.fastmail.com> References: <20200505164906.igthboyfjiureuf2@yuggoth.org> <829a54b3-8beb-4bf0-8003-01b746b1e17c@www.fastmail.com> Message-ID: <4bca82d1-911a-4309-acdb-c038336444da@www.fastmail.com> On Thu, May 14, 2020, at 1:17 PM, Clark Boylan wrote: > On Thu, May 14, 2020, at 10:42 AM, Chris Morgan wrote: > > We on the ops meetups team had a trial meeting on meetpad.opendev.org > > this morning and for the most part it worked very well (detailed > > feedback below). Speaking personally, I am very happy to see an open > > source solution for video conferencing being adopted by the foundation > > to some extent. I had and continue to have reservations about Zoom, but > > at the end of the day no matter how well they respond to the security > > and privacy concerns it will still be a proprietary solution and no > > more true to the openstack tenets than slack is as a replacement for > > irc. > > > > feedback > > > > - etherpad integration is cool but several of us found the window > > seemed to disappear inexplicably > > It seems that when a call starts the etherpad starts "pinned" but then > certain events can cause jitsi to go back to its normal "focus on the > person talking" mode of operation. In the bottom right is a "3 dot" > menu and from there you can open and close the "shared document" this > allows you to toggle the etherpad manually. > > Even after toggling it directly it seems that jitsi wants to keep focus > on speakers in some cases. 
I've been fiddling with it to try and > understand that better and it seems like some of these things may help > (though I can't say for sure): > > * Click in the document directly > * Collapse the right hand column of mini webcams using the > in the bottom right > > > > - colored highlighting of fragments on the etherpad showed up > > overlapped for some but not all meeting members, obscuring some text > > I've noticed this too. Toggling authorship colors in etherpad's > settings menu seems to correct this. > > > - background blurring seemed very heavy for some participants computers > > and this possibly lead to some sessions locking up > > I believe this is specifically listed as an "experimental" feature in > the menu. I expect this is why. Should probably avoid using it. > > > - it's not clear what named meetings persistence is, for example a > > named meeting from yesterday is still shown today, but doesn't have the > > password I applied yesterday > > > > My guess is some (all?) of this is just how Jitsi is right now. > > Few other things we've noticed: > > * Chrome/Chromium seem more reliable than Firefox Apparently there is some technical reason for this around how chrom* and firefox handle webrtc video? I don't have the details but apparently this is expected. Additionally, if you've found that the web browser version is giving you trouble, the mobile (iOS and Android) jitsi meet apps apparently work quite well. Since we're hosting our own server you need to update the settings to point to our server and not the default server, but otherwise people have reported this works well. On Android you can change the server by opening the in app menu, going to settings, and changing the server URL to https://meetpad.opendev.org. I assume its similar on iOS but don't have an iOS device to confirm. 
Hope this helps, Clark From rsharma1818 at outlook.com Sat May 16 17:05:19 2020 From: rsharma1818 at outlook.com (Rahul Sharma) Date: Sat, 16 May 2020 17:05:19 +0000 Subject: [Neutron] How to change the MAC address of Gateway interface of the router Message-ID: Hi, I have set up a multi-host openstack cloud on AWS consisting of 3 servers i.e. Controller, Compute & Network. Everything is working as expected. My requirement is that the compute instances should be able to communicate with the internet and vice-versa. However, AWS due to its security policies will drop all traffic that is sourced from the VMs, because the VM traffic will have the MAC address of the gateway interface of the router when it hits the AWS switch. This MAC address is not known to AWS, hence it drops this traffic. AWS will allow only traffic that contains the registered MAC address as its source address. So I need to change the MAC address of the gateway interface of the L3 router on the network node. I tried googling but could not find any solution. Is there any solution/command to do this? Thanks, Kaushik -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Sat May 16 21:39:13 2020 From: melwittt at gmail.com (melanie witt) Date: Sat, 16 May 2020 14:39:13 -0700 Subject: [gate] parallel releasenotes job builds Message-ID: Hi all, We recently merged a change [1] that will run the build-openstack-releasenotes job with 'sphinx-build -j auto' to allow the build to run on multiple CPUs. With the recent releases of reno 3.1.0 and openstackdocstheme 2.1.2, both libraries advertise that they are parallel_read_safe and thus can support the '-j' option. This effort was inspired by the nova build-openstack-releasenotes job which typically takes 52m to > 1h to run. The job timeout is 1h, so we were occasionally seeing TIMED_OUT job failures. With parallel sphinx-build, the nova job run time is reduced down to about 15m (awesome!). 
I wanted to say thanks to smcginnis, dhellmann, clarkb, and AJaeger for helping this newbie to reno and docs builds. If anyone encounters any issues around the parallel reno builds in their projects, please let us know. Cheers, -melanie [1] https://review.opendev.org/727473 From doug at doughellmann.com Sat May 16 22:35:29 2020 From: doug at doughellmann.com (Doug Hellmann) Date: Sat, 16 May 2020 18:35:29 -0400 Subject: [gate] parallel releasenotes job builds In-Reply-To: References: Message-ID: <937D54AD-FA15-42D5-8DB6-F7D3986C4BDF@doughellmann.com> > On May 16, 2020, at 5:48 PM, melanie witt wrote: > > Hi all, > > We recently merged a change [1] that will run the build-openstack-releasenotes job with 'sphinx-build -j auto' to allow the build to run on multiple CPUs. With the recent releases of reno 3.1.0 and openstackdocstheme 2.1.2, both libraries advertise that they are parallel_read_safe and thus can support the '-j' option. > > This effort was inspired by the nova build-openstack-releasenotes job > which typically takes 52m to > 1h to run. The job timeout is 1h, so we > were occasionally seeing TIMED_OUT job failures. > > With parallel sphinx-build, the nova job run time is reduced down to about 15m (awesome!). > > I wanted to say thanks to smcginnis, dhellmann, clarkb, and AJaeger for helping this newbie to reno and docs builds. > > If anyone encounters any issues around the parallel reno builds in their projects, please let us know. > > Cheers, > -melanie > > [1] https://review.opendev.org/727473 > Nice results! Thanks for pushing this forward! 
From smooney at redhat.com Sun May 17 23:25:59 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 18 May 2020 00:25:59 +0100 Subject: [Neutron] How to change the MAC address of Gateway interface of the router In-Reply-To: References: Message-ID: <9c2453a3f9531dccf8d6da219fa672428eef8668.camel@redhat.com> On Sat, 2020-05-16 at 17:05 +0000, Rahul Sharma wrote: > Hi, > > I have setup a multi-host openstack cloud on AWS consisting of 3 servers i.e. Controller, Compute & Network > > Everything is working as expected. My requirement is that the compute instances should be able to communicate with the > internet and vice-versa. > > However, AWS due to its security policies will drop all traffic that is sourced from the VMs because the VM traffic > will have the MAC address of the gateway interface of the router when it hits the AWS switch. This MAC address is not > know to AWS hence it drops this traffic. AWS will allow only that traffic that contains the registered MAC address as > its source address > > So I need to change the MAC address of the gateway interface of the L3 router on the network node. I tried googling > but could not find any solution. > > Is there any solution/command to do this ? you might be able to do a neutron port update to change the neutron port mac of the router. your other option is to not add an interface directly to br-ex and instead assign the wan network's gateway ip to the br-ex directly and nat the traffic https://www.rdoproject.org/networking/networking-in-too-much-detail/#nat-to-host-addres > > Thanks, > Kaushik From mark at stackhpc.com Mon May 18 07:49:53 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 18 May 2020 08:49:53 +0100 Subject: [kolla] Ceph transition plan In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA04771F7E@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA04771F7E@gmsxchsvr01.thecreation.com> Message-ID: On Fri, 15 May 2020 at 18:02, Eric K. 
Miller wrote: > > Hi, > > I noticed that in Kolla's master branch that Ceph has been removed, as > expected for a deprecated component (I'm sad, though, since it works so > well!). > > In the release notes, it is noted that "Prior to this we will ensure a > migration path to another tool such as Ceph Ansible is available." Is > this migration path documented since it appears that the latest release > will not have the Kolla Ceph container available? Hi Eric, I'm afraid we don't currently have a migration path to another tool. There were a few factors that got us to this point. First, the move to CentOS 8 meant that continuing to support Ceph in Ussuri would have required some additional work, and finding a path to migrate a Ceph cluster from CentOS 7 to 8, only for us to then drop support. Second, there is considerable churn in the Ceph deployment arena currently, with the Ceph team working on cephadm [1] and appearing less interested in ceph-ansible. This made it hard for us to pick a tool as the target for a migration. I would suggest finding some other users of kolla ceph clusters, and trying to come up with a plan. There are two options I can see. 1. develop and test a migration path to another tool 2. extract ceph images and deployment support from kolla and kolla-ansible into a separate project I'm sure the upstream community would be able to assist in either of these paths. [1] https://docs.ceph.com/docs/master/cephadm/ Thanks, Mark > > Thanks! > > Eric > > > > From mark at stackhpc.com Mon May 18 07:53:45 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 18 May 2020 08:53:45 +0100 Subject: [ironic] trusted delgation cores? In-Reply-To: References: Message-ID: On Wed, 13 May 2020 at 16:29, Julia Kreger wrote: > > Greetings awesome people and AIs! > > Earlier today, I noticed Iury (iurygregory) only had +1 rights on the > ironic-prometheus-exporter. 
I noted this in IRC and went to go see > about adding him to the group, and realized we didn't have a separate > group already defined for ironic-prometheus-exporter, which left a > question, do we create a new group, or just grant ironic-core > membership. > > And the question of starting to engage in trusted delegation of core > rights came up in IRC[0]. I think it makes a lot of sense, but wanted > to see what everyone thought? I think it's worth trying trusted delegation. > > Specifically in Iury's case: I feel he has proven himself, in ironic > and non-ironic cases, and I think it makes sense to grant him core > rights under the premise that he is unlikely to approve non-trivial > changes to merge that he is not confident in +2 > > Thoughts, concerns, congratulations? For both questions? > > -Julia > > [0]: http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2020-05-13.log.html#t2020-05-13T13:35:45 > From thierry at openstack.org Mon May 18 08:54:01 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 18 May 2020 10:54:01 +0200 Subject: [largescale-sig] Next meeting: May 20, 8utc Message-ID: <3aae1db3-625e-4c78-e408-feed94a3a506@openstack.org> Hi everyone, The Large Scale SIG will have a meeting this week on Wednesday, May 20 at 8 UTC[1] in the #openstack-meeting-3 channel on IRC: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200520T08 Feel free to add topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting A reminder of the TODOs we had from last meeting, in case you have time to make progress on them: - amorin to propose patch against Nova doc - ttx to create an empty oslo-metric repository - masahito to finalize oslo.metric POC code release Talk to you all on Wednesday, -- Thierry Carrez From arne.wiebalck at cern.ch Mon May 18 10:31:13 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 18 May 2020 12:31:13 +0200 Subject: [ironic] trusted delgation cores? 
In-Reply-To: References: Message-ID: <8f130699-2df6-0e9c-834e-9c3f82e50db0@cern.ch> On 13.05.20 17:28, Julia Kreger wrote: > Greetings awesome people and AIs! > > Earlier today, I noticed Iury (iurygregory) only had +1 rights on the > ironic-prometheus-exporter. I noted this in IRC and went to go see > about adding him to the group, and realized we didn't have a separate > group already defined for ironic-prometheus-exporter, which left a > question, do we create a new group, or just grant ironic-core > membership. > > And the question of starting to engage in trusted delegation of core > rights came up in IRC[0]. I think it makes a lot of sense, but wanted > to see what everyone thought? > > Specifically in Iury's case: I feel he has proven himself, in ironic > and non-ironic cases, and I think it makes sense to grant him core > rights under the premise that he is unlikely to approve non-trivial > changes to merge that he is not confident in. +2 from me, well deserved Iury! From neil at tigera.io Mon May 18 11:15:05 2020 From: neil at tigera.io (Neil Jerram) Date: Mon, 18 May 2020 12:15:05 +0100 Subject: [neutron] Recent incompatible change in stable/rocky branch In-Reply-To: <20200515174255.4x5es6msr4xgsajo@skaplons-mac> References: <20200515151731.tfmwi5qtop4t6f67@skaplons-mac> <20200515174255.4x5es6msr4xgsajo@skaplons-mac> Message-ID: On Fri, May 15, 2020 at 6:43 PM Slawek Kaplonski wrote: > Hi, > > On Fri, May 15, 2020 at 06:20:07PM +0100, Neil Jerram wrote: > > Thanks Slawek. Are you planning to leave this change in place? I can > > update my plugin's code, but that still leaves the likelihood of > breakage if > > - there's a new Rocky patch release > > - a deployer is using an out-of-tree plugin with its own interface > driver, > > and upgrades to the Rocky patch release > > - either they don't also upgrade their plugin code, or there isn't a > plugin > > update available because the plugin author hasn't noticed this problem > yet. 
> > Can You maybe open Launchpad bug for that? It will be the same issue for > all > other stable branches like Stein or Train so we should fix it there too. > Thanks Slawek, I've opened a bug here: https://bugs.launchpad.net/neutron/+bug/1879307 Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon May 18 12:25:31 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 18 May 2020 07:25:31 -0500 Subject: [gate] parallel releasenotes job builds In-Reply-To: References: Message-ID: <56f9df88-9183-6164-26f3-70172a436d5a@gmx.com> On 5/16/20 4:39 PM, melanie witt wrote: > Hi all, > > We recently merged a change [1] that will run the > build-openstack-releasenotes job with 'sphinx-build -j auto' to allow > the build to run on multiple CPUs. With the recent releases of reno > 3.1.0 and openstackdocstheme 2.1.2, both libraries advertise that they > are parallel_read_safe and thus can support the '-j' option. > > This effort was inspired by the nova build-openstack-releasenotes job > which typically takes 52m to > 1h to run. The job timeout is 1h, so we > were occasionally seeing TIMED_OUT job failures. > > With parallel sphinx-build, the nova job run time is reduced down to > about 15m (awesome!). Wow, that is better than I was expecting. Great improvement! 
From balazs.gibizer at est.tech Mon May 18 13:12:06 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 18 May 2020 15:12:06 +0200 Subject: [nova][gate] openstack-tox-pep8 job failing In-Reply-To: <24086ee7-1485-1beb-2864-083427c34bb6@gmail.com> References: <41b74834-5b5f-7e65-dcf9-c39094875b92@gmail.com> <627ae9fb-6194-f40a-505c-303ca04dff19@gmail.com> <78db899d-3904-f09d-0a72-f842cb4b4654@gmail.com> <24086ee7-1485-1beb-2864-083427c34bb6@gmail.com> Message-ID: <6C3JAQ.J5MDI3NG4OI3@est.tech> On Fri, May 15, 2020 at 08:26, melanie witt wrote: > On 5/14/20 20:45, melanie witt wrote: >> On 5/14/20 20:40, melanie witt wrote: >>> On 5/14/20 17:21, melanie witt wrote: >>>> If we can get another fix or revert to openstackdocstheme to take >>>> care of the docs job, we'd then only need [2] on the nova side >>>> to unblock the nova gate. >>> >>> A roll back to openstackdocstheme 2.0.2 in upper-constraints has >>> been proposed to unblock the nova and cinder gates in the >>> meantime: >>> >>> https://review.opendev.org/728335 >> >> Once ^ merges, our docs job will start passing. >> >> Then someone will need to recheck: >> >> https://review.opendev.org/727347 >> >> to fix our pep8 job. > > Both of these changes have since merged and the nova gate should be > unblocked now. We also needed to merge the backport of the flake8 / hacking fix [1] to the stable/ussuri as that branch was also affected. 
[1] https://review.opendev.org/#/c/728803/ > > -melanie > >>>> [1] https://review.opendev.org/728016 >>>> [2] https://review.opendev.org/727347 >>>> [3] >>>> http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014876.html >>>>  >>> >> > > From mordred at inaugust.com Mon May 18 13:31:39 2020 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 18 May 2020 08:31:39 -0500 Subject: Dropping python2.7 from diskimage-builder Message-ID: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> Heya, I just pushed up: https://review.opendev.org/728889 Drop support for python2 Which drops support for installing diskimage-builder using python2. It doesn’t drop support for in-image python2, that would be a whole different story. It seems that since the two largest DIB users, OpenStack and Zuul, are both now python3 only, it’s a safe move to make. IBM PowerKVM CI is running third-party CI with python2-based tests. We should probably either update those or just drop it? Thoughts? Monty From bence.romsics at gmail.com Mon May 18 14:52:09 2020 From: bence.romsics at gmail.com (Bence Romsics) Date: Mon, 18 May 2020 16:52:09 +0200 Subject: [neutron] bug deputy report for week of 2020-05-11 Message-ID: Hi Neutrinos, Here's the deputy report for last week: High: * https://bugs.launchpad.net/neutron/+bug/1878042 SRIOV agent does not parse correctly "ip link show " fix proposed: https://review.opendev.org/726918 * https://bugs.launchpad.net/neutron/+bug/1878160 [OVN] Functional tests environment is using old OVN fix proposed (wip): https://review.opendev.org/727193 Medium: * https://bugs.launchpad.net/neutron/+bug/1877977 [DVR] Recovery from openvswitch restart fails when veth are used for bridges interconnection unassigned * https://bugs.launchpad.net/neutron/+bug/1875981 Admin deleting servers or ports leaves orphaned DNS records fix proposed (wip): https://review.opendev.org/728385 related test: https://review.opendev.org/728409 * 
https://bugs.launchpad.net/neutron/+bug/1878299 [floatingip port_forwarding] changing external port to used value hangs with retriable exception unassigned * https://bugs.launchpad.net/neutron/+bug/1878358 OVN migration doesn't clean up the existing ml2-ovs agents fix proposed: https://review.opendev.org/727648 * https://bugs.launchpad.net/neutron/+bug/1878632 Race condition in subnet and segment delete: The segment is still bound with port(s) assigned, no fix proposed yet test reproducing the bug: https://review.opendev.org/728904 * https://bugs.launchpad.net/neutron/+bug/1878916 When deleting a network, delete the segment RP only when the segment is deleted fix proposed: https://review.opendev.org/728507 Incomplete: * https://bugs.launchpad.net/neutron/+bug/1877978 SNAT Problem Floating IP * https://bugs.launchpad.net/neutron/+bug/1878719 DHCP Agent's iptables CHECKSUM rule causes skb_warn_bad_offload kernel * https://bugs.launchpad.net/neutron/+bug/1879009 attaching extra port to server raise duplicate dns-name error See you on the meeting, Bence From pierre at stackhpc.com Mon May 18 15:19:33 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 18 May 2020 17:19:33 +0200 Subject: [blazar] IRC meeting cancelled on May 19 Message-ID: Hello, Due to a scheduling conflict, I will not be able to chair the Blazar IRC meeting on Tuesday May 19. I propose to cancel and meet as usual on May 26, when we will finalise the agenda for the PTG meeting. Best wishes, Pierre Riteau (priteau) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre at stackhpc.com Mon May 18 15:25:42 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 18 May 2020 17:25:42 +0200 Subject: Dropping python2.7 from diskimage-builder In-Reply-To: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> References: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> Message-ID: This might break bifrost stable branches, as bifrost uses DIB from master by default, even for older releases. On Mon, 18 May 2020 at 15:41, Monty Taylor wrote: > Heya, > > I just pushed up: > > https://review.opendev.org/728889 Drop support for python2 > > Which drops support for installing diskimage-builder using python2. It > doesn’t drop support for in-image python2, that would be a whole different > story. It seems that since the two largest DIB users, OpenStack and Zuul, > are both now python3 only, it’s a safe move to make. > > IBM PowerKVM CI is running third-party CI with python2-based tests. We > should probably either update those or just drop it? > > Thoughts? > Monty > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Mon May 18 15:42:04 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 18 May 2020 17:42:04 +0200 Subject: [nova][ptg] Runway process in Victoria Message-ID: <4AAJAQ.8C39QP5J2M3Z@est.tech> Hi, [This is a topic from the PTG etherpad [0]. As the PTG time is intentionally kept short, let's try to discuss it or even conclude it before the PTG] In the last 4 cycles we used a process called runway to focus and timebox the team's feature review effort. However, compared to the previous cycles, in Ussuri we did not really keep the process running. Just compare the length of the Log section of each etherpad [1][2][3][4] to see the difference. So I have two questions: 1) Do we want to keep the process in Victoria? 2) If yes, how can we keep the process running? 2.1) How can we keep the runway etherpad up-to-date? 2.2) How can we make sure that the team is focusing on the reviews that are in the runway slots? Personally I don't want to advertise this process to contributors if the core team has not agreed and committed to keeping the process running, as that would lead to unnecessary disappointment. Cheers, gibi [0] https://etherpad.opendev.org/p/nova-victoria-ptg [1] https://etherpad.opendev.org/p/nova-runways-rocky [2] https://etherpad.opendev.org/p/nova-runways-stein [3] https://etherpad.opendev.org/p/nova-runways-train [4] https://etherpad.opendev.org/p/nova-runways-ussuri From balazs.gibizer at est.tech Mon May 18 15:50:50 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 18 May 2020 17:50:50 +0200 Subject: [nova][ptg] Feature Liaison Message-ID: Hi, [This is a topic from the PTG etherpad [0]. As the PTG time is intentionally kept short, let's try to discuss it or even conclude it before the PTG] Last cycle we introduced the Feature Liaison process [1]. I think it is time to reflect on it. Did it help? Do we need to tweak it? Personally it did not help me much, but I think this is a fairly low-cost process, so I'm OK with keeping it as is. Cheers, gibi [0] https://etherpad.opendev.org/p/nova-victoria-ptg [1] https://review.opendev.org/#/c/685857/ From iurygregory at gmail.com Mon May 18 15:55:47 2020 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 18 May 2020 17:55:47 +0200 Subject: Dropping python2.7 from diskimage-builder In-Reply-To: References: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> Message-ID: @Pierre, I think we can change stable branches in bifrost to use a specific tag from diskimage-builder and the problem would be solved =) On Mon, 18 May 2020 at 17:28, Pierre Riteau wrote: > This might break bifrost stable branches, as bifrost uses DIB from master > by default, even for older releases.
> > On Mon, 18 May 2020 at 15:41, Monty Taylor wrote: > >> Heya, >> >> I just pushed up: >> >> https://review.opendev.org/728889 Drop support for python2 >> >> Which drops support for installing diskimage-builder using python2. It >> doesn’t drop support for in-image python2, that would be a whole different >> story. It seems that since the two largest DIB users, OpenStack and Zuul, >> are both now python3 only, it’s a safe move to make. >> >> IBM PowerKVM CI is running third-party CI with python2-based tests. We >> should probably either update those or just drop it? >> >> Thoughts? >> Monty >> > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Mon May 18 15:56:14 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 18 May 2020 17:56:14 +0200 Subject: [nova][ptg] Ussuri Retrospective Message-ID: Hi, There is a common but seemingly not that popular topic on each nova PTG, the retrospective. I added the retrospective to the PTG etherpad [0]. Please collect retrospective topics there or tell me privately if you have a sensitive topic you want me to bring up. If there won't be any topic collected until the PTG then we will simply skip the retrospective. 
Cheers, gibi [0] https://etherpad.opendev.org/p/nova-victoria-ptg From gmann at ghanshyammann.com Mon May 18 16:01:58 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 18 May 2020 11:01:58 -0500 Subject: [tc][tricircle] Retiring the Tricircle project In-Reply-To: <1719ee0b5b8.df897a25137120.8453552300711631772@ghanshyammann.com> References: <1719ee0b5b8.df897a25137120.8453552300711631772@ghanshyammann.com> Message-ID: <17228853739.12217f1e9147302.9169750847147806335@ghanshyammann.com> ---- On Tue, 21 Apr 2020 17:34:18 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > As you know, Tricirlce is a leaderless project for the Victoria cycle means there is no PTL > candidate for Victoria cycle. 'No PTL' is one of the criteria which triggers TC to start checking the > health, maintainers of the project for dropping project from OpenStack Governance[1]. > > TC discussed the Tricircle project in today's meeting[2] and checked if the project has > maintainers but no PTL or it is missing the maintainer completely. By looking at the > Ussuri cycle development work, it seems no activity related to Tricircle except few > gate fixes or community goal commits[2]. > > Based on all these checks, we are going to start the process of removing Tricircle from governance in the Victoria > cycle as specified in the Mandatory Repository Retirement resolution [4] and detailed in the infra manual [5]. > > NOTE: release team is trying to do Ussuri release for Tricircle's deliverables if we can get this reno fix merge > - https://review.opendev.org/#/c/721697/ > > From the Victoria cycle, Tricircle will move out of OpenStack governance by keeping their > repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. > If someone from old or new maintainers shows interest to continue its development then it can be re-added > to OpenStack governance. > > With that thanks to all Tricircle contributors and PTLs for maintaining this project. 
As we are in Victoria cycle, I have started the process of retiring the project: - https://review.opendev.org/#/q/topic:retire-tricircle+(status:open+OR+status:merged) -gmann > > [1] https://governance.openstack.org/tc/reference/dropping-projects.html > [2] http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-04-21-15.01.log.html > [3] https://www.stackalytics.com/?metric=commits&release=ussuri&module=tricircle-group > [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html > [5] https://docs.opendev.org/opendev/infra-manual/latest/drivers.html#retiring-a-project > > -gmann > > > From balazs.gibizer at est.tech Mon May 18 16:11:11 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 18 May 2020 18:11:11 +0200 Subject: [nova][ptg] Can we close old bugs? Message-ID: Hi, [This is a topic from the PTG etherpad [0]. As the PTG time is intentionally kept short, let's try to discuss it or even conclude it before the PTG] We have more than 800 open bugs in nova [1] and the oldest is 8 years old. Can we close old bugs? If yes, what would be the closing criteria? Age and status? Personally I would close every bug that is not updated in the last 3 years and not in INPROGRESS state. Cheers, gibi [0] https://etherpad.opendev.org/p/nova-victoria-ptg [1] https://bugs.launchpad.net/nova/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=INCOMPLETE_WITH_RESPONSE From openstack at nemebean.com Mon May 18 17:09:48 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 18 May 2020 12:09:48 -0500 Subject: [oslo] Ping List for Victoria Message-ID: <58089512-5990-d0a8-35ab-876bace6fd91@nemebean.com> Hi, With the start of a new cycle, we refresh our courtesy ping list for the start of the meeting. 
We do this to avoid spamming people who may no longer be working on Oslo but haven't explicitly removed their name from the ping list. To that end, I've added a new ping list above the agenda template[0]. If you wish to continue receiving courtesy pings, please add your name there. In a couple of weeks we will switch to using this new list. Thanks. -Ben 0: https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_Template From smooney at redhat.com Mon May 18 17:49:57 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 18 May 2020 18:49:57 +0100 Subject: [nova][ptg] Can we close old bugs? In-Reply-To: References: Message-ID: <549bf8aa4a7dfa3316ea3516e12f70873a21411b.camel@redhat.com> On Mon, 2020-05-18 at 18:11 +0200, Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > We have more than 800 open bugs in nova [1] and the oldest is 8 years > old. > Can we close old bugs? Realistically I think yes, but we might want to mark them in some way so we know the issue may or may not have been fixed when we do. > If yes, what would be the closing criteria? Age and status? So downstream we debate this from time to time. Ultimately, if a release is no longer supported and we don't have customers using it, then we can close "bugs" for those older releases, provided we don't think they affect current/supported releases too. We have a 30-day rule for bugs that are in "need info": e.g. if I asked the reporter to provide more info such as logs and they don't do so in 30 days, we close it. They are free to reopen it if they eventually provide the info requested. I think this would be equivalent to the Incomplete state upstream, where the bug report is marked as incomplete because we are missing info we need. Upstream we might want to extend the time frame from 30 days to say 6 months/one cycle, but after a cycle, if a bug is still Incomplete, it's likely that any upstream momentum that may have existed to fix it has long since fizzled out. There might still genuinely be an issue that we should fix, which is why I think we should have some way to mark a bug as closed without resolution due to age, such as a tag, but I don't think it makes sense to leave them open forever. > > Personally I would close every bug that is not updated in the last 3 > years and not in INPROGRESS state. You are rather generous in your timeframe, as I would close any bug in that state which is not on a maintained branch, which would be 18 months to 2 years ish, but we could start with 3 years. > > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] > https://bugs.launchpad.net/nova/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=INCOMPLETE_WITH_RESPONSE > > > From iurygregory at gmail.com Mon May 18 19:29:35 2020 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 18 May 2020 21:29:35 +0200 Subject: [ironic] III SPUC - The Sanity Preservation Un-Conference Message-ID: Hello everyone \o/ It's time for the third edition of SPUC! It will happen this Friday (May 22) at 5pm UTC! For those who don't know what SPUC is, check [0].
We will be using Julia's bluejeans: https://bluejeans.com/u/jkreger Etherpad for ideas: https://etherpad.opendev.org/p/IIISanityPreservationUnConference [0] http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013521.html -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon May 18 20:48:12 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 18 May 2020 15:48:12 -0500 Subject: [all] Virtual Ussuri Celebration In-Reply-To: References: Message-ID: It was fun, and now I have a better understanding of how meetpad works. :-) Thanks for organizing! On 5/15/20 2:50 PM, Kendall Nelson wrote: > See you all in ten minutes! > > -Kendall (diablo_rojo) > > On Fri, May 8, 2020 at 5:13 PM Kendall Nelson > wrote: > > Hello Everyone! > > I wanted to invite you all to a virtual celebration of the release! > Next Friday, May 15th at 20:00 UTC, I invite you to join me in > celebrating :) > > The purpose of this meeting will be two fold: > > 1. To celebrate all that our community has accomplished over the > last release since we can't get together in person in June. I was > thinking with trivia and a baking contest (I was going to attempt a > cake in the shape of the OpenStack logo or maybe the actual Ussuri > logo) :) There was also a request  for piano karaoke which is also > still on the table.. > > 2. To test out the OpenDev team's meetpad instance and work out > kinks so that it can be vetted for PTG use. > > Here is the room we will be testing: > https://meetpad.opendev.org/virtual-ussuri-celebration > > > Worst case scenario, I'll share my zoom. > > Can't wait to see you all there (and what you've baked)! 
> > -Kendall (diablo_rojo) > > > From juliaashleykreger at gmail.com Mon May 18 22:29:55 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 18 May 2020 15:29:55 -0700 Subject: [ironic] III SPUC - The Sanity Preservation Un-Conference In-Reply-To: References: Message-ID: To fill in a brief amount of missing context! During the last two SPUCs, while we've discussed a variety of things, we've also had some useful operator feedback and identification of issues. So we want to kind of continue the theme by encouraging proposal of crazy ideas during the SPUC! So bring crazy ideas, wants, or even needs! -Julia On Mon, May 18, 2020 at 12:32 PM Iury Gregory wrote: > > Hello everyone \o/ > > It's time for the third edition of SPUC! It will happen this Friday (May 22) at 5pm UTC! > For those who doesn't know what is SPUC check [0]. > We will be using Julia's bluejeans: https://bluejeans.com/u/jkreger > Etherpad for ideas: https://etherpad.opendev.org/p/IIISanityPreservationUnConference > > [0] http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013521.html > > -- > Att[]'s > Iury Gregory Melo Ferreira > MSc in Computer Science at UFCG > Part of the puppet-manager-core team in OpenStack > Software Engineer at Red Hat Czech > Social: https://www.linkedin.com/in/iurygregory > E-mail: iurygregory at gmail.com From dev.faz at gmail.com Tue May 19 05:47:29 2020 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Tue, 19 May 2020 07:47:29 +0200 Subject: [neutron] WSREP: referenced FK check fail: Lock wait index Message-ID: Hi, im just upgrading my galera nodes from 5.6 to 5.7 and pike -> queens. Now im seeing this lines hitting my mysql.log: -- 2020-05-19T05:41:39.516879Z 11861 [ERROR] InnoDB: WSREP: referenced FK check fail: Lock wait index `PRIMARY` table `neutron`.`provisioningblocks` -- Is this a known issue? Anything I may/should do? Thank a lot, Fabian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dev.faz at gmail.com Tue May 19 06:08:42 2020 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Tue, 19 May 2020 08:08:42 +0200 Subject: [neutron] WSREP: referenced FK check fail: Lock wait index In-Reply-To: References: Message-ID: Hi, seems not to be a neutron issue => https://jira.mariadb.org/browse/MDEV-18562 But if anyone is having some hints - thanks a lot :) Fabian Am Di., 19. Mai 2020 um 07:47 Uhr schrieb Fabian Zimmermann < dev.faz at gmail.com>: > Hi, > > im just upgrading my galera nodes from 5.6 to 5.7 and pike -> queens. > > Now im seeing this lines hitting my mysql.log: > > -- > 2020-05-19T05:41:39.516879Z 11861 [ERROR] InnoDB: WSREP: referenced FK > check fail: Lock wait index `PRIMARY` table `neutron`.`provisioningblocks` > -- > > Is this a known issue? Anything I may/should do? > > Thank a lot, > > Fabian > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stendulker at gmail.com Tue May 19 06:35:21 2020 From: stendulker at gmail.com (Shivanand Tendulker) Date: Tue, 19 May 2020 12:05:21 +0530 Subject: [ironic] trusted delgation cores? In-Reply-To: References: Message-ID: Its +2 from me. Congratulations Iury !! Thanks and Regards Shiv On Wed, May 13, 2020 at 8:59 PM Julia Kreger wrote: > Greetings awesome people and AIs! > > Earlier today, I noticed Iury (iurygregory) only had +1 rights on the > ironic-prometheus-exporter. I noted this in IRC and went to go see > about adding him to the group, and realized we didn't have a separate > group already defined for ironic-prometheus-exporter, which left a > question, do we create a new group, or just grant ironic-core > membership. > > And the question of starting to engage in trusted delegation of core > rights came up in IRC[0]. I think it makes a lot of sense, but wanted > to see what everyone thought? 
> > Specifically in Iury's case: I feel he has proven himself, in ironic > and non-ironic cases, and I think it makes sense to grant him core > rights under the premise that he is unlikely to approve non-trivial > changes to merge that he is not confident in. > > Thoughts, concerns, congratulations? For both questions? > > -Julia > > [0]: > http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2020-05-13.log.html#t2020-05-13T13:35:45 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue May 19 07:13:26 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 19 May 2020 08:13:26 +0100 Subject: Dropping python2.7 from diskimage-builder In-Reply-To: References: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> Message-ID: On Mon, 18 May 2020 at 16:56, Iury Gregory wrote: > > @Pierre, I think we can change stable branches in bifrost to use a specific tag from diskimage-builder and the problem would be solved =) I would urge caution over dropping Python 2 from branchless projects. We tried it for Tenks, and within weeks had created a branch from the last release supporting Python 2 for bug fixes. > > Em seg., 18 de mai. de 2020 às 17:28, Pierre Riteau escreveu: >> >> This might break bifrost stable branches, as bifrost uses DIB from master by default, even for older releases. >> >> On Mon, 18 May 2020 at 15:41, Monty Taylor wrote: >>> >>> Heya, >>> >>> I just pushed up: >>> >>> https://review.opendev.org/728889 Drop support for python2 >>> >>> Which drops support for installing diskimage-builder using python2. It doesn’t drop support for in-image python2, that would be a whole different story. It seems that since the two largest DIB users, OpenStack and Zuul, are both now python3 only, it’s a safe move to make. >>> >>> IBM PowerKVM CI is running third-party CI with python2-based tests. We should probably either update those or just drop it? >>> >>> Thoughts? 
>>> Monty > > > > -- > Att[]'s > Iury Gregory Melo Ferreira > MSc in Computer Science at UFCG > Part of the puppet-manager-core team in OpenStack > Software Engineer at Red Hat Czech > Social: https://www.linkedin.com/in/iurygregory > E-mail: iurygregory at gmail.com From balazs.gibizer at est.tech Tue May 19 07:49:38 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 19 May 2020 09:49:38 +0200 Subject: [nova][ptg] Can we close old bugs? In-Reply-To: <549bf8aa4a7dfa3316ea3516e12f70873a21411b.camel@redhat.com> References: <549bf8aa4a7dfa3316ea3516e12f70873a21411b.camel@redhat.com> Message-ID: On Mon, May 18, 2020 at 18:49, Sean Mooney wrote: > On Mon, 2020-05-18 at 18:11 +0200, Balázs Gibizer wrote: >> Hi, >> >> [This is a topic from the PTG etherpad [0]. As the PTG time is >> intentionally kept short, let's try to discuss it or even conclude >> it >> before the PTG] >> >> We have more than 800 open bugs in nova [1] and the oldest is 8 >> years >> old. >> Can we close old bugs? > realitically i think yes but we might want to mark them in some way > so we > know it may or may not have been fixed when we do. How do we decide that a bug might have been fixed? If a human brain needs to check the bugs one by one, then given the time such check needs and the amount of bugs we have, I don't think this is feasible. >> If yes, what would be the closing criteria? Age and status? > so downstream we debate this from time too time. > ultimately if a release in no longer supported and we dont have > customer using it > then we can close "bugs" for those older release provided we dont > think they affect current/supported > release too. we have a 30 rule for bugs that are in "need info" e.g. > if i asked the reporter to provide > more info such as logs and they dont do so in 30 days we close it. > they are free to reopen it if they > eventually provide the info requested. 
i think this would be > equiveltnt to the incomplete state upstream > > where the bug report is marked as incompelte because we are missing > info we need. > upstream we might want to extend the time frame form 30 days to say 6 > months/one cycle but after a cycle if a bug > is still in incomplete its likely that any upstream momentum that may > have existed to go fix it has long since > fizzeled out. When I request additional logs / information in a bug I immediately mark it Incomplete and ask the author to put it back to New when she provides the requested information. Also the 800 bugs in [1] does not contain the Incomplete bugs. > > there might still genuinly be an issue that we shoudl fix which is > why i think we should have some way to mark the bug > as closed without resolution due to age such as a tag but i dont > think it makes sense to leave them open for ever. >> >> Personally I would close every bug that is not updated in the last 3 >> years and not in INPROGRESS state. > you are rather geourse in your tiem as i would close any bug in that > state which si not on a maintianed > branch. which would be 18 months to 2 years ish but we could start > with 3 years. I'm fine to close more bugs so 2 years works for me. 
Cheers, gibi >> >> Cheers, >> gibi >> >> [0] https://etherpad.opendev.org/p/nova-victoria-ptg >> [1] >> > https://bugs.launchpad.net/nova/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=INCOMPLETE_WITH_RESPONSE >> >> >> > From elfosardo at gmail.com Tue May 19 07:57:15 2020 From: elfosardo at gmail.com (Riccardo Pittau) Date: Tue, 19 May 2020 09:57:15 +0200 Subject: Dropping python2.7 from diskimage-builder In-Reply-To: References: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> Message-ID: As much as I like the idea of dropping Python 2, I dread the potential mayhem that this will bring to the bifrost stable branches. If this change needs to move forward, we need to be sure of the impact and have a plan in place *before* the change happens. On Tue, May 19, 2020 at 9:20 AM Mark Goddard wrote: > > On Mon, 18 May 2020 at 16:56, Iury Gregory wrote: > > > > @Pierre, I think we can change stable branches in bifrost to use a specific tag from diskimage-builder and the problem would be solved =) > > I would urge caution over dropping Python 2 from branchless projects. > We tried it for Tenks, and within weeks had created a branch from the > last release supporting Python 2 for bug fixes. > > > > > On Mon, 18 May 2020 at 17:28, Pierre Riteau wrote: > >> > >> This might break bifrost stable branches, as bifrost uses DIB from master by default, even for older releases. > >> > >> On Mon, 18 May 2020 at 15:41, Monty Taylor wrote: > >>> > >>> Heya, > >>> > >>> I just pushed up: > >>> > >>> https://review.opendev.org/728889 Drop support for python2 > >>> > >>> Which drops support for installing diskimage-builder using python2. It doesn’t drop support for in-image python2, that would be a whole different story.
It seems that since the two largest DIB users, OpenStack and Zuul, are both now python3 only, it’s a safe move to make. > >>> > >>> IBM PowerKVM CI is running third-party CI with python2-based tests. We should probably either update those or just drop it? > >>> > >>> Thoughts? > >>> Monty > > > > > > -- > > Att[]'s > > Iury Gregory Melo Ferreira > > MSc in Computer Science at UFCG > > Part of the puppet-manager-core team in OpenStack > > Software Engineer at Red Hat Czech > > Social: https://www.linkedin.com/in/iurygregory > > E-mail: iurygregory at gmail.com > From skaplons at redhat.com Tue May 19 08:12:24 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 19 May 2020 10:12:24 +0200 Subject: [neutron] Recent incompatible change in stable/rocky branch In-Reply-To: References: <20200515151731.tfmwi5qtop4t6f67@skaplons-mac> <20200515174255.4x5es6msr4xgsajo@skaplons-mac> Message-ID: <20200519081224.kwvlnzxyh2ziwrhv@skaplons-mac> Hi, On Mon, May 18, 2020 at 12:15:05PM +0100, Neil Jerram wrote: > On Fri, May 15, 2020 at 6:43 PM Slawek Kaplonski > wrote: > > > Hi, > > > > On Fri, May 15, 2020 at 06:20:07PM +0100, Neil Jerram wrote: > > > Thanks Slawek. Are you planning to leave this change in place? I can > > > update my plugin's code, but that still leaves the likelihood of > > breakage if > > > - there's a new Rocky patch release > > > - a deployer is using an out-of-tree plugin with its own interface > > driver, > > > and upgrades to the Rocky patch release > > > - either they don't also upgrade their plugin code, or there isn't a > > plugin > > > update available because the plugin author hasn't noticed this problem > > yet. > > > > Can you maybe open a Launchpad bug for that? It will be the same issue for > > all > > other stable branches like Stein or Train, so we should fix it there too. > > > > Thanks Slawek, I've opened a bug here: > https://bugs.launchpad.net/neutron/+bug/1879307 Thank you.
I just proposed patch https://review.opendev.org/729167 to address this issue. I will also propose backports of this patch to the stable branches, and we have already agreed that this one will need to be merged before new releases of stable/Stein and stable/Train, so as not to break others. > > Best wishes, > Neil -- Slawek Kaplonski Senior software engineer Red Hat From licanwei_cn at 163.com Tue May 19 08:38:00 2020 From: licanwei_cn at 163.com (licanwei) Date: Tue, 19 May 2020 16:38:00 +0800 (GMT+08:00) Subject: [Watcher]IRC meeting on May 20 Message-ID: <30ceab2a.4a0e.1722c151bd0.Coremail.licanwei_cn@163.com> Hi, Tomorrow at 08:00 UTC on #openstack-meeting-alt. Please update the meeting agenda if you have something you want to be discussed. Thanks, | | licanwei_cn | | Email: licanwei_cn at 163.com | Signature customized by NetEase Mail Master | -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue May 19 11:13:43 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 19 May 2020 12:13:43 +0100 Subject: [nova][ptg] Can we close old bugs? In-Reply-To: References: <549bf8aa4a7dfa3316ea3516e12f70873a21411b.camel@redhat.com> Message-ID: <51f23c748f6a696e128f5bc96916906ec248faf3.camel@redhat.com> On Tue, 2020-05-19 at 09:49 +0200, Balázs Gibizer wrote: > > On Mon, May 18, 2020 at 18:49, Sean Mooney wrote: > > On Mon, 2020-05-18 at 18:11 +0200, Balázs Gibizer wrote: > > > Hi, > > > > > > [This is a topic from the PTG etherpad [0]. As the PTG time is > > > intentionally kept short, let's try to discuss it or even conclude > > > it > > > before the PTG] > > > > > > We have more than 800 open bugs in nova [1] and the oldest is 8 > > > years > > > old. > > > Can we close old bugs? > > > > Realistically I think yes, but we might want to mark them in some way > > so we > > know it may or may not have been fixed when we do. > > How do we decide that a bug might have been fixed?
If a human brain > needs to check the bugs one by one, then given the time such a check > needs and the amount of bugs we have, I don't think this is feasible. Oh, I was thinking that instead of checking if they are fixed, we add an unknown-if-fixed tag or something to them and close them, so if we do a Launchpad search for an issue we are seeing in the future, we will still find the closed bug, see the tag, know that it was not fixed, and can reopen it. I agree we likely don't have the time to check them 1:1. > > > > If yes, what would be the closing criteria? Age and status? > > > > So downstream we debate this from time to time. > > Ultimately, if a release is no longer supported and we don't have > > customers using it, > > then we can close "bugs" for those older releases, provided we don't > > think they affect current/supported > > releases too. We have a 30-day rule for bugs that are in "need info", e.g. > > if I asked the reporter to provide > > more info such as logs and they don't do so in 30 days, we close it. > > They are free to reopen it if they > > eventually provide the info requested. I think this would be > > equivalent to the Incomplete state upstream > > > > where the bug report is marked as incomplete because we are missing > > info we need. > > Upstream we might want to extend the time frame from 30 days to say 6 > > months/one cycle, but after a cycle, if a bug > > is still in Incomplete, it's likely that any upstream momentum that may > > have existed to go fix it has long since > > fizzled out. > > When I request additional logs / information in a bug I immediately > mark it Incomplete and ask the author to put it back to New when she > provides the requested information. Also, the 800 bugs in [1] do not > contain the Incomplete bugs. Yes, if I need more info I do the same and move it to Incomplete.
I was just suggesting that one way of closing bugs would be, in addition to the 2 years in New state, to also close Incomplete bugs that are older than 6 months or 1 release cycle, whichever is longer. > > > > > > There might still genuinely be an issue that we should fix, which is > > why I think we should have some way to mark the bug > > as closed without resolution due to age, such as a tag, but I don't > > think it makes sense to leave them open forever. > >> > >> Personally I would close every bug that is not updated in the last 3 > >> years and not in INPROGRESS state. > > You are rather generous in your timeframe, as I would close any bug in that > > state which is not on a maintained > > branch, which would be 18 months to 2 years-ish, but we could start > > with 3 years. > > I'm fine to close more bugs, so 2 years works for me. > > Cheers, > gibi > > > > > > > Cheers, > > > gibi > > > > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > > > [1] > > > > > > > https://bugs.launchpad.net/nova/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=INCOMPLETE_WITH_RESPONSE > > > > > > > > > > > From lbragstad at gmail.com Tue May 19 13:03:59 2020 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 19 May 2020 08:03:59 -0500 Subject: [keystone] scope enforcement ready for prime time? In-Reply-To: References: <4005BE1E-DA1A-4F64-9FAA-8992242C1A50@bu.edu> Message-ID: On Thu, May 7, 2020 at 9:17 AM Mark Goddard wrote: > On Thu, 7 May 2020 at 00:19, Nikolla, Kristi wrote: > > > > Hi Mark, > > > > If the API in that OpenStack service doesn't have support for > system/domain scopes and only checks for the admin role, the user would > have cloud admin on that specific OpenStack API. > > > > Until all the services that you have deployed in your cloud properly > support scopes, admin role on anything still gives cloud admin.
> > > > Best, > > Kristi > > Thanks for the response Kristi. I think we can solve this particular > use case with a custom policy and role in the mean time. > > Having spent a little time playing with scopes, I found it quite > awkward getting the right environment variables into OSC to get the > scope I wanted. e.g. Need OS_DOMAIN_* and no OS_PROJECT_* to get a > domain scoped token. I suppose clouds.yaml could help. > ++ I typically put different scopes in their own profiles. While it results in multiple profiles for the same cloud, it's relatively easy to switch back and forth between the various contexts. $ openstack --os-cloud system-admin list domains $ openstack --os-cloud domain-admin create user foo $ openstack --os-cloud project-member list servers > > > > > > On May 6, 2020, at 5:17 PM, Mark Goddard wrote: > > > > > > Hi, > > > > > > I have a use case which I think could be fulfilled by scoped tokens: > > > > > > Allow an operator to delegate the ability to an actor to create users > within a domain, without giving them the keys to the cloud. > > > > > > To do this, I understand I can assign a user the admin role for a > domain. It seems that for this to work, I need to set [oslo_policy] > enforce_scope = True in keystone.conf. > > > > > > The Train cycle highlights suggest this is now fully implemented in > keystone, but other most projects lack support for scopes. Does this mean > that in the above case, the user would have full cloud admin privileges in > other services that lack support for scopes? i.e. while I expect it's safe > to enable scope enforcement in keystone, is it "safe" to use it? > > > > > > Cheers, > > > Mark > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain.bauza at gmail.com Tue May 19 13:06:45 2020 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Tue, 19 May 2020 15:06:45 +0200 Subject: [nova][ptg] Can we close old bugs? 
In-Reply-To: References: <549bf8aa4a7dfa3316ea3516e12f70873a21411b.camel@redhat.com> Message-ID: Le mar. 19 mai 2020 à 09:56, Balázs Gibizer a écrit : > > > On Mon, May 18, 2020 at 18:49, Sean Mooney wrote: > > On Mon, 2020-05-18 at 18:11 +0200, Balázs Gibizer wrote: > >> Hi, > >> > >> [This is a topic from the PTG etherpad [0]. As the PTG time is > >> intentionally kept short, let's try to discuss it or even conclude > >> it > >> before the PTG] > >> > >> We have more than 800 open bugs in nova [1] and the oldest is 8 > >> years > >> old. > >> Can we close old bugs? > > realitically i think yes but we might want to mark them in some way > > so we > > know it may or may not have been fixed when we do. > > How do we decide that a bug might have been fixed? If a human brain > needs to check the bugs one by one, then given the time such check > needs and the amount of bugs we have, I don't think this is feasible. > > >> If yes, what would be the closing criteria? Age and status? > > so downstream we debate this from time too time. > > ultimately if a release in no longer supported and we dont have > > customer using it > > then we can close "bugs" for those older release provided we dont > > think they affect current/supported > > release too. we have a 30 rule for bugs that are in "need info" e.g. > > if i asked the reporter to provide > > more info such as logs and they dont do so in 30 days we close it. > > they are free to reopen it if they > > eventually provide the info requested. i think this would be > > equiveltnt to the incomplete state upstream > > > > where the bug report is marked as incompelte because we are missing > > info we need. > > upstream we might want to extend the time frame form 30 days to say 6 > > months/one cycle but after a cycle if a bug > > is still in incomplete its likely that any upstream momentum that may > > have existed to go fix it has long since > > fizzeled out. 
> > When I request additional logs / information in a bug I immediately > mark it Incomplete and ask the author to put it back to New when she > provides the requested information. Also the 800 bugs in [1] does not > contain the Incomplete bugs. > > > > > there might still genuinly be an issue that we shoudl fix which is > > why i think we should have some way to mark the bug > > as closed without resolution due to age such as a tag but i dont > > think it makes sense to leave them open for ever. > >> > >> Personally I would close every bug that is not updated in the last 3 > >> years and not in INPROGRESS state. > > you are rather geourse in your tiem as i would close any bug in that > > state which si not on a maintianed > > branch. which would be 18 months to 2 years ish but we could start > > with 3 years. > > I'm fine to close more bugs so 2 years works for me. > > +1 for a 2-year deadline. We did this already and fwiw we were saying something like "if you think that the bug is still there, please reopen it". > Cheers, > gibi > > >> > >> Cheers, > >> gibi > >> > >> [0] https://etherpad.opendev.org/p/nova-victoria-ptg > >> [1] > >> > > > https://bugs.launchpad.net/nova/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=INCOMPLETE_WITH_RESPONSE > >> > >> > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue May 19 14:08:18 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 19 May 2020 16:08:18 +0200 Subject: [nova][neutron][ptg] How to increase the minimum bandwidth guarantee of a running instance Message-ID: Hi, [This is a topic from the PTG etherpad [0]. 
As the PTG time is intentionally kept short, let's try to discuss it or even conclude it before the PTG] As a next step in the minimum bandwidth QoS support I would like to solve the use case where a running instance has some ports with minimum bandwidth, but then the user wants to change (e.g. increase) the minimum bandwidth used by the instance. I see two generic ways to solve the use case:

Option A - interface attach
---------------------------
Attach a new port with minimum bandwidth to the instance to increase the instance's overall bandwidth guarantee. This only impacts Nova's interface attach code path:
1) The interface attach code path needs to read the port's resource request
2) Call Placement GET /allocation_candidates?in_tree=<rp uuid of the instance>
3a) If Placement returns candidates, then select one, modify the current allocation of the instance accordingly, and continue the existing interface attach code path.
3b) If Placement returns no candidates, then there is no free resource left on the instance's current host to resize the allocation locally.

Option B - QoS rule update
--------------------------
Allow changing the minimum bandwidth guarantee of a port that is already bound to the instance. Today Neutron rejects such a QoS rule update. If we want to support such an update then:
* either Neutron should call the Placement allocation_candidates API and then update the instance's allocation, similarly to what Nova does in Option A,
* or Neutron should tell Nova that the resource request of the port has changed, and then Nova needs to call Placement and update the instance's allocation.

Option A and Option B are not mutually exclusive, but I would still like to see what the community's preference is. Which direction should we move forward in? Both options have the limitation that if the instance's current host does not have enough free resources for the requested change, then Nova will not do a full scheduling pass and move the instance to another host where the resource is available.
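[Editorial note: the Option A flow above can be sketched roughly in Python. This is an illustrative sketch only, not Nova's actual code: the helper names, the candidate-selection policy (first candidate wins), and the simplified {provider: {resource class: amount}} allocation shape are all assumptions; only the /allocation_candidates?in_tree= query and the NET_BW_* Placement resource classes used for minimum-bandwidth QoS come from the discussion above.]

```python
# Rough sketch of Option A (interface attach), NOT Nova's actual code.
# Allocations are simplified to {rp_uuid: {resource_class: amount}};
# real Placement payloads also carry provider generations and consumer
# details.

def build_candidates_url(instance_rp_uuid, port_resources):
    """Step 2: query Placement for candidates restricted to the
    instance's current provider tree (in_tree=<rp uuid of the instance>)."""
    resources = ",".join(
        f"{rc}:{amount}" for rc, amount in sorted(port_resources.items())
    )
    return (f"/allocation_candidates"
            f"?in_tree={instance_rp_uuid}&resources={resources}")

def merge_allocation(current_allocation, candidate):
    """Step 3a: fold a selected candidate into the instance's current
    allocation, returning a new dict (the input is left untouched)."""
    merged = {rp: dict(res) for rp, res in current_allocation.items()}
    for rp, res in candidate.items():
        rp_alloc = merged.setdefault(rp, {})
        for rc, amount in res.items():
            rp_alloc[rc] = rp_alloc.get(rc, 0) + amount
    return merged

def attach_with_bandwidth(current_allocation, candidates):
    """Steps 3a/3b: pick a candidate if Placement returned any,
    otherwise signal that the current host has no free resource left
    (None -> the attach fails; no automatic move to another host)."""
    if not candidates:
        return None  # step 3b
    return merge_allocation(current_allocation, candidates[0])
```

For example, attaching a port that requests 1000 kbit/s egress to an instance whose NIC provider already holds a 500 kbit/s allocation would grow that provider's allocation to 1500 kbit/s, or fail cleanly when Placement returns no candidates.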
This seems a hard problem to me. Do you have any idea how can we remove / ease this limitation without boiling the ocean? For example: Does it make sense to implement a bandwidth weigher in the scheduler so instances can be spread by free bandwidth during creation? Cheers, gibi [0] https://etherpad.opendev.org/p/nova-victoria-ptg From opensrloo at gmail.com Tue May 19 14:09:11 2020 From: opensrloo at gmail.com (Ruby Loo) Date: Tue, 19 May 2020 10:09:11 -0400 Subject: [ironic] trusted delgation cores? In-Reply-To: References: Message-ID: +2 for Iury & trusted delegation! Congratulations Iury! --ruby On Wed, May 13, 2020 at 11:33 AM Julia Kreger wrote: > Greetings awesome people and AIs! > > Earlier today, I noticed Iury (iurygregory) only had +1 rights on the > ironic-prometheus-exporter. I noted this in IRC and went to go see > about adding him to the group, and realized we didn't have a separate > group already defined for ironic-prometheus-exporter, which left a > question, do we create a new group, or just grant ironic-core > membership. > > And the question of starting to engage in trusted delegation of core > rights came up in IRC[0]. I think it makes a lot of sense, but wanted > to see what everyone thought? > > Specifically in Iury's case: I feel he has proven himself, in ironic > and non-ironic cases, and I think it makes sense to grant him core > rights under the premise that he is unlikely to approve non-trivial > changes to merge that he is not confident in. > > Thoughts, concerns, congratulations? For both questions? > > -Julia > > [0]: > http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2020-05-13.log.html#t2020-05-13T13:35:45 > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at est.tech Tue May 19 14:29:43 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 19 May 2020 16:29:43 +0200 Subject: [nova][ptg] virtual PTG In-Reply-To: References: <6VQA9Q.TH47VQEIJBW43@est.tech> Message-ID: Hi, The PTG is two weeks from now. So I would like to encourage you to look at the PTG etherpad [0]. If you have topics for the PTG then please start discussing them now on the ML or in a spec. (See the threads [1][2][3][4][5] I have already started) Such preparation is needed as we will only have limited time to conclude the topics during the PTG. Cheers, gibi [0] https://etherpad.opendev.org/p/nova-victoria-ptg [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014916.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014917.html [3] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014919.html [4] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014921.html [5] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014938.html On Wed, Apr 29, 2020 at 11:38, Balázs Gibizer wrote: > Hi, > > Based on the doodle I've booked the following slots for Nova, in the > Rocky room[1]. > > * June 3 Wednesday 13:00 UTC - 15:00 UTC > * June 4 Thursday 13:00 UTC - 15:00 UTC > * June 5 Friday 13:00 UTC - 15:00 UTC > > I have synced with Slaweq and we agreed to have the Neutron - Nova > cross project discussion on Friday form 13:00 UTC. > > If it turns out that we need more time then we can arrange extra > slots per topic on the week after. > > Cheers, > gibi > > [1] https://ethercalc.openstack.org/126u8ek25noy > > > On Fri, Apr 24, 2020 at 16:28, Balázs Gibizer > wrote: >> >> >> On Wed, Apr 15, 2020 at 10:26, Balázs Gibizer >>  wrote: >>> Hi, >>> >>> I need to book slots for nova discussions on the official >>> schedule[2] of the virtual PTG[1]. I need two things to do that: >>> >>> 1) What topics we have that needs real-time discussion. 
Please add >>> those to the etherpad [3] >>> 2) Who wants to join such real-time discussion and what time slots >>> works for you. Please fill the doodle[4] >>> >>> Based on the current etherpad content we need 2-3 slots for nova >>> discussion and a half slot for cross project (current neutron) >>> discussion. >> >> We refined our schedule during the Nova meeting [5]. Based on that >> my current plan is to book one 2 hours slot for 3 consecutive days >> (Wed-Fri) during the PTG week. I talked to Slaweq about a >> neutron-nova cross project session. We agreed to book a one hour >> slot for that. >> >> If you haven't filled the doodle[4] then please do so until early >> next week. >> >> Thanks >> gibi >> >>> >>> Please try to provide your topics and options before the Thursday's >>> nova meeting. >>> >>> Cheers, >>> gibi >>> >>> >>> [1] >>> http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014126.html >>> [2] https://ethercalc.openstack.org/126u8ek25noy >>> [3] https://etherpad.opendev.org/p/nova-victoria-ptg >>> [4] https://doodle.com/poll/ermn3vxy9v53aayy >>> >> [5] >> http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-04-23-16.00.log.html#l-119 >> >> >> > > > From mark at stackhpc.com Tue May 19 14:30:25 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 19 May 2020 15:30:25 +0100 Subject: [neutron][release] Missing tarballs for networking-mlnx, networking-l2gw and vmware-nsx Message-ID: Hi, There are no tarballs published for ussuri networking-mlnx or vmware-nsx. The most recently published tarballs were in February. Is this expected? networking-l2gw also has no ussuri tarballs, although master was updated recently. Thanks, Mark From whayutin at redhat.com Tue May 19 14:34:28 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 19 May 2020 08:34:28 -0600 Subject: [tripleo][ptg] schedule posted Message-ID: Greetings owls, Victoria Schedule has been posted [1]. 
There are a lot of great topics so we're going to limit the time slots to 1/2 hour each. Creating a blueprint and spec for your topic prior to the PTG will help you make your point and win the game of influence [2]. Please feel free to self sign up on the schedule, I will cross check any unassigned topics prior to the event and punt them to the last day. For now we're going to close adding new topics to fit the schedule. There is spare time on the calendar for Q&A and random ad hoc topics [3] if needed. To prepare for your presentations please ensure you are comfortable w/ the video conferencing [4] as much as possible. Looking forward to seeing you all Thanks!! [1] https://etherpad.opendev.org/p/tripleo-ptg-victoria [2] https://review.opendev.org/#/q/project:openstack/tripleo-specs+status:open https://blueprints.launchpad.net/tripleo [3] https://ethercalc.openstack.org/126u8ek25noy [4] https://meetpad.opendev.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaifeng.w at gmail.com Tue May 19 14:54:15 2020 From: kaifeng.w at gmail.com (Kaifeng Wang) Date: Tue, 19 May 2020 22:54:15 +0800 Subject: [ironic] trusted delgation cores? In-Reply-To: References: Message-ID: +2 and thanks Iury for all the efforts! On Wed, May 13, 2020 at 11:36 PM Julia Kreger wrote: > Greetings awesome people and AIs! > > Earlier today, I noticed Iury (iurygregory) only had +1 rights on the > ironic-prometheus-exporter. I noted this in IRC and went to go see > about adding him to the group, and realized we didn't have a separate > group already defined for ironic-prometheus-exporter, which left a > question, do we create a new group, or just grant ironic-core > membership. > > And the question of starting to engage in trusted delegation of core > rights came up in IRC[0]. I think it makes a lot of sense, but wanted > to see what everyone thought? 
> > Specifically in Iury's case: I feel he has proven himself, in ironic > and non-ironic cases, and I think it makes sense to grant him core > rights under the premise that he is unlikely to approve non-trivial > changes to merge that he is not confident in. > > Thoughts, concerns, congratulations? For both questions? > > -Julia > > [0]: > http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2020-05-13.log.html#t2020-05-13T13:35:45 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue May 19 15:31:23 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 19 May 2020 16:31:23 +0100 Subject: [neutron][release] Missing tarballs for networking-mlnx, networking-l2gw and vmware-nsx In-Reply-To: References: Message-ID: On Tue, 19 May 2020 at 15:30, Mark Goddard wrote: > > Hi, > > There are no tarballs published for ussuri networking-mlnx or > vmware-nsx. The most recently published tarballs were in February. Is > this expected? networking-l2gw also has no ussuri tarballs, although > master was updated recently. Thanks to Yatin Karel for pointing out that some projects now publish under x/ namespace. l2gw does not though, so I'm not sure about that one. > > Thanks, > Mark From johnsomor at gmail.com Tue May 19 15:32:52 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 19 May 2020 08:32:52 -0700 Subject: Dropping python2.7 from diskimage-builder In-Reply-To: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> References: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> Message-ID: This will likely cause pain for the Octavia team (given the extended maintenance stable branches), but I would rather do this now, early in the release cycle, than wait. Just my $0.02. 
Michael On Mon, May 18, 2020 at 6:32 AM Monty Taylor wrote: > > Heya, > > I just pushed up: > > https://review.opendev.org/728889 Drop support for python2 > > Which drops support for installing diskimage-builder using python2. It doesn’t drop support for in-image python2, that would be a whole different story. It seems that since the two largest DIB users, OpenStack and Zuul, are both now python3 only, it’s a safe move to make. > > IBM PowerKVM CI is running third-party CI with python2-based tests. We should probably either update those or just drop it? > > Thoughts? > Monty > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From sean.mcginnis at gmx.com Tue May 19 15:44:04 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 19 May 2020 10:44:04 -0500 Subject: [neutron][release] Missing tarballs for networking-mlnx, networking-l2gw and vmware-nsx In-Reply-To: References: Message-ID: >> There are no tarballs published for ussuri networking-mlnx or >> vmware-nsx. The most recently published tarballs were in February. Is >> this expected? networking-l2gw also has no ussuri tarballs, although >> master was updated recently. > Thanks to Yatin Karel for pointing out that some projects now publish > under x/ namespace. l2gw does not though, so I'm not sure about that > one. It looks like networking-l2gw is still under the openstack/ namespace: https://opendev.org/openstack/networking-l2gw Though it looks like it shouldn't be since it is not under official governance: https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml No official release team releases have been done for this repo. 
It looks like there was a 15.0.0 tag pushed by the team last October: https://opendev.org/openstack/networking-l2gw/src/tag/15.0.0 And there is a corresponding 15.0.0 tarball published for that one: https://tarballs.opendev.org/openstack/networking-l2gw/networking-l2gw-15.0.0.tar.gz So it looks like this repo either needs to be added to governance, and added to the releases deliverables, or it needs to move to the x/ namespace and have tagging managed by the team. Sean From aj at suse.com Tue May 19 15:44:31 2020 From: aj at suse.com (Andreas Jaeger) Date: Tue, 19 May 2020 17:44:31 +0200 Subject: [neutron][release] Missing tarballs for networking-mlnx, networking-l2gw and vmware-nsx In-Reply-To: References: Message-ID: <3e376da3-d037-9f3f-ec2b-dcb67a5c3a03@suse.com> On 19.05.20 17:31, Mark Goddard wrote: > On Tue, 19 May 2020 at 15:30, Mark Goddard wrote: >> >> Hi, >> >> There are no tarballs published for ussuri networking-mlnx or >> vmware-nsx. The most recently published tarballs were in February. Is >> this expected? networking-l2gw also has no ussuri tarballs, although >> master was updated recently. > > Thanks to Yatin Karel for pointing out that some projects now publish > under x/ namespace. l2gw does not though, so I'm not sure about that > one. l2gw has not created a stable/ussuri branch yet, Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From mark at stackhpc.com Tue May 19 15:48:18 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 19 May 2020 16:48:18 +0100 Subject: [kolla] Kolla klub meeting Message-ID: Hi, Just a reminder that we will host a project onboarding session for the next Kolla Klub meeting on Thursday 21st May. We'll cover a variety of topics on the many different ways you can contribute to the project. 
Gaël Therond will also fill us in on some of the questions put to him since the last meeting about his case studies. https://docs.google.com/document/d/1EwQs2GXF-EvJZamEx9vQAOSDB5tCjsDCJyHQN5_4_Sw Thanks, Mark From kotobi at dkrz.de Tue May 19 15:50:38 2020 From: kotobi at dkrz.de (Amjad Kotobi) Date: Tue, 19 May 2020 17:50:38 +0200 Subject: [keystone] Message-ID: Hi, I have Keystone running in HA across multiple nodes, and after rebooting one of the nodes the message below started popping up in the logs: "WARNING keystone.server.flask.application [req-fe993224-cc7e-4e5f-83a8-1e925e60b995 24dfa0beaaaa48848b4eee68e449f2de d1fecbdb49ef4871b94b201b7856eed3 - default default] Could not recognize Fernet token: TokenNotFound: Could not recognize Fernet token" And on the command line: "Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-1145c696-6275-4c78-9486-a690b98548ed)" When a command-line request lands on the machine which hasn't rebooted, the result comes back, but on the rest the aforementioned message is always there. OpenStack release Train, deployed manually. How can I track this down? Any ideas? Thanks Amjad -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Tue May 19 17:35:45 2020 From: moshele at mellanox.com (Moshe Levi) Date: Tue, 19 May 2020 17:35:45 +0000 Subject: [neutron][release] Missing tarballs for networking-mlnx, networking-l2gw and vmware-nsx In-Reply-To: References: Message-ID: Hi Mark, We will do the release hopefully tomorrow. > -----Original Message----- > From: Mark Goddard > Sent: Tuesday, May 19, 2020 6:31 PM > To: openstack-discuss > Subject: Re: [neutron][release] Missing tarballs for networking-mlnx, > networking-l2gw and vmware-nsx > > On Tue, 19 May 2020 at 15:30, Mark Goddard wrote: > > > > Hi, > > > > There are no tarballs published for ussuri networking-mlnx or > > vmware-nsx. The most recently published tarballs were in February.
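On the Keystone Fernet errors reported above: a common cause of "Could not recognize Fernet token" in a multi-node HA deployment is that the Fernet key repositories (by default /etc/keystone/fernet-keys) have diverged between nodes, for example after a key rotation that was never synced to the node that came back from a reboot. A node can only validate tokens minted with keys it has. The failure mode can be illustrated with the cryptography library's Fernet implementation, which Keystone builds on (a sketch of the key-mismatch behaviour, not Keystone's actual code):

```python
from cryptography.fernet import Fernet, InvalidToken, MultiFernet

# Two nodes whose key repositories are out of sync.
key_a = Fernet.generate_key()  # key present only on node A
key_b = Fernet.generate_key()  # key present only on node B

# Node A issues a token with its key.
token = Fernet(key_a).encrypt(b"token issued by node A")

# Node B cannot validate a token minted with a key it does not have,
# which surfaces to the client as an HTTP 401.
try:
    MultiFernet([Fernet(key_b)]).decrypt(token)
    print("validated")
except InvalidToken:
    print("rejected: unknown key")

# Once the repositories are synced, every node validates the token.
synced = MultiFernet([Fernet(key_b), Fernet(key_a)])
print(synced.decrypt(token))
```

A first diagnostic step is usually to compare checksums of the files under /etc/keystone/fernet-keys across the nodes and re-run whatever key distribution mechanism the deployment uses after rotating with `keystone-manage fernet_rotate`.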
Is > > this expected? networking-l2gw also has no ussuri tarballs, although > > master was updated recently. > > Thanks to Yatin Karel for pointing out that some projects now publish under x/ > namespace. l2gw does not though, so I'm not sure about that one. > > > > > Thanks, > > Mark From skaplons at redhat.com Tue May 19 19:55:10 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 19 May 2020 21:55:10 +0200 Subject: [nova][neutron][ptg] How to increase the minimum bandwidth guarantee of a running instance In-Reply-To: References: Message-ID: <20200519195510.chfuo5byodcrooaj@skaplons-mac> Hi, Thx for starting this thread. I can share some thoughts from the Neutron point of view. On Tue, May 19, 2020 at 04:08:18PM +0200, Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is intentionally > kept short, let's try to discuss it or even conclude it before the PTG] > > As a next step in the minimum bandwidth QoS support I would like to solve > the use case where a running instance has some ports with minimum bandwidth > but then user wants to change (e.g. increase) the minimum bandwidth used by > the instance. > > I see two generic ways to solve the use case: > > Option A - interface attach > --------------------------- > > Attach a new port with minimum bandwidth to the instance to increase the > instance's overall bandwidth guarantee. > > This only impacts Nova's interface attach code path: 1) The interface attach code path needs to read the port's resource request > 2) Call Placement GET /allocation_candidates?in_tree=<instance> > 3a) If placement returns candidates then select one and modify the current > allocation of the instance accordingly and continue the existing interface > attach code path. > 3b) If placement returns no candidates then there is no free resource left > on the instance's current host to resize the allocation locally.
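The Option A steps just quoted can be sketched in Python. This is an illustration, not Nova's code: the query shape follows the Placement API (GET /allocation_candidates with in_tree and resources parameters; in_tree requires Placement microversion 1.31), NET_BW_EGR_KILOBIT_PER_SEC is the standard egress-bandwidth resource class, and merge_allocation is a hypothetical helper for step 3a:

```python
from urllib.parse import urlencode


def candidates_url(instance_tree_uuid, egress_kbps):
    """Step 2: build the Placement allocation-candidates query."""
    query = urlencode({
        "in_tree": instance_tree_uuid,
        "resources": f"NET_BW_EGR_KILOBIT_PER_SEC:{egress_kbps}",
    })
    return f"/allocation_candidates?{query}"


def merge_allocation(current, candidate):
    """Step 3a: fold a candidate's allocations into the instance's allocation.

    Both arguments map resource-provider UUID -> {resource class -> amount}.
    An empty candidate list from Placement means step 3b: no free resource
    left on the instance's current host.
    """
    merged = {rp: dict(classes) for rp, classes in current.items()}
    for rp, classes in candidate.items():
        bucket = merged.setdefault(rp, {})
        for rc, amount in classes.items():
            bucket[rc] = bucket.get(rc, 0) + amount
    return merged


current = {"host-rp": {"VCPU": 2, "MEMORY_MB": 2048},
           "nic-rp": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000}}
candidate = {"nic-rp": {"NET_BW_EGR_KILOBIT_PER_SEC": 500}}

print(candidates_url("instance-uuid", 500))
print(merge_allocation(current, candidate)["nic-rp"])
# {'NET_BW_EGR_KILOBIT_PER_SEC': 1500}
```

The interesting property is that the existing allocation is only grown in place, which is exactly why this path cannot move the instance when the current host has no capacity left.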
> > > Option B - QoS rule update > -------------------------- > > Allow changing the minimum bandwidth guarantee of a port that is already > bound to the instance. > > Today Neutron rejects such QoS rule update. If we want to support such > update then: > * either Neutron should call placement allocation_candidates API and the > update the instance's allocation. Similarly what Nova does in Option A. > * or Neutron should tell Nova that the resource request of the port has been > changed and then Nova needs to call Placement and update instance's > allocation. In this case, if You update QoS rule, don't forget that policy with this rule can be used by many ports already. So we will need to find all of them and call placement for each. What if that will be fine for some ports but not for all? > > > The Option A and Option B are not mutually exclusive but still I would like > to see what is the preference of the community. Which direction should we > move forward? There is also 3rd possible option, very similar to Option B which is change of the QoS policy for the port. It's basically almost the same as Option B, but that way You have always only one port to update (unless it's not policy associated with network). So because of that reason, maybe a bit easier to do. > > > Both options have the limitation that if the instance's current host does > not have enough free resources for the requested change then Nova will not > do a full scheduling and move the instance to another host where resource is > available. This seems a hard problem to me. > > Do you have any idea how can we remove / ease this limitation without > boiling the ocean? > > For example: Does it make sense to implement a bandwidth weigher in the > scheduler so instances can be spread by free bandwidth during creation? 
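A bandwidth weigher of the kind asked about above could look roughly like the following. This is a plain-Python sketch of the weighing logic only; Nova's real weighers subclass its BaseWeigher plugin interface, and free_bw_kbps is an assumed per-host attribute, since today bandwidth inventory lives in Placement rather than in the scheduler's host state:

```python
from dataclasses import dataclass


@dataclass
class HostState:
    name: str
    free_bw_kbps: int  # assumed attribute, not tracked by nova today


def weigh_hosts(hosts):
    """Normalize free bandwidth to [0, 1] so hosts with the most free
    bandwidth score highest, spreading instances instead of stacking."""
    top = max(h.free_bw_kbps for h in hosts)
    if top == 0:
        return [0.0] * len(hosts)
    return [h.free_bw_kbps / top for h in hosts]


hosts = [HostState("cmp1", 2_000_000),
         HostState("cmp2", 500_000),
         HostState("cmp3", 0)]
weights = weigh_hosts(hosts)
best = hosts[weights.index(max(weights))]
print(best.name)  # cmp1, the host with the most free bandwidth
```

As the replies below note, the catch is that the scheduler would need the allocation candidates and provider summaries passed into the weighing step to know the free bandwidth at all.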
> > > Cheers, > gibi > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > > > -- Slawek Kaplonski Senior software engineer Red Hat From whayutin at redhat.com Tue May 19 21:11:46 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 19 May 2020 15:11:46 -0600 Subject: [tripleo][ci] remove upstream multinode scenario jobs (queens, rocky) Message-ID: Greetings, I'd like to propose that we start removing multinode ( 2 node ) based scenario jobs from the upstream that still exist in queens and rocky. We will continue to support a basic multinode deployment [1 ] across all the branches. From stable/stein on we only maintain one multinode job, and the scenarios have all been migrated to single node deployments. The loss of coverage will be augmented by the RDO promotion pipeline [2] that should be verifying all the upstream check/gate jobs across the branches. Issues will be caught periodically vs. on a per patch basis. Issues will be reported, debugged and sent to the appropriate teams. I am not proposing we do this overnight or in one giant patch, but over time start to remove the upstream coverage. We also need to ensure that the upstream version is properly covered in RDO software factory. WHY??? Two node deployments in upstream infrastructure are very unreliable and often timeout. Generally devolving into noise and wasted resources and time spent by developers trying to land patches [3]. Comments / Questions?? [1] tripleo-ci-centos-[7,8]-containers-multinode [2] https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-wednesday-weekend [3] https://bugs.launchpad.net/tripleo/+bug/1879565 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From laurentfdumont at gmail.com Tue May 19 22:23:50 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 19 May 2020 18:23:50 -0400 Subject: [Nova][Scheduler] Reducing race-conditions and re-scheduling during creation of multiple high-ressources instances or instances with anti-affinity. Message-ID: Hey everyone, We are seeing a pretty consistent issue with Nova/Scheduler where some instance creations are hitting the "max_attempts" limits of the scheduler. Env : Red Hat Queens Computes : All the same hardware and specs (even weight throughout) Nova : Three nova-schedulers This can be due to two different factors (from what we've seen) : - Anti-affinity rules are getting triggered during the creation (two claims are done within a few milliseconds on the same compute) which counts as a retry (we've seen this when spawning 40+ VMs in a single server group with maybe 50-55 computes - or even less, 14 instances on 20ish computes). - We've seen another case where MEMORY_MB becomes an issue (we are spinning new instances in the same host-aggregate where VMs are already running. Only one VM can run per compute but there are no anti-affinity groups to force that between the two deployments. The resource requirements prevent anything else from getting spun on those). - The logs look like the following : - Unable to submit allocation for instance 659ef90e-33b8-42a9-9c8e-fac87278240d (409 {"errors": [{"status": 409, "request_id": "req-429c2734-2f2d-4d2d-82d1-fa4ebe12c991", "detail": "There was a conflict when trying to complete your request.\n\n Unable to allocate inventory: Unable to create allocation for 'MEMORY_MB' on resource provider '35b78f3b-8e59-4f2f-8cad-eaf116b7c1c7'. The requested amount would exceed the capacity. ", "title": "Conflict"}]}) / Setting instance to ERROR state.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance f6d06cca-e9b5-4199-8220-e3ff2e5c2a41. - I do believe we are hitting this issue as well : https://bugs.launchpad.net/nova/+bug/1837955 - In all the cases where the Stacks creation failed, one instance was left in the Build state for 120 minutes and then finally failed. From what we can gather, there are a couple of parameters that can be tweaked. 1. host_subset_size (Return X number of hosts instead of 1?) 2. randomize_allocation_candidates (Not 100% on this one) 3. shuffle_best_same_weighed_hosts (Return a random X number of computes if they are all equal, instead of the same list for all scheduling requests) 4. max_attempts (how many times the Scheduler will try to fit the instance somewhere) We've already raised "max_attempts" to 5 from the default of 3 and will raise it further. That said, what are the recommendations for the rest of the settings? We are not exactly concerned with stacking vs spreading (but that's always nice) of the instances but rather making sure deployments fail because of real reasons and not just because Nova/Scheduler keeps stepping on its own toes. Thanks! From smooney at redhat.com Tue May 19 22:48:32 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 19 May 2020 23:48:32 +0100 Subject: [nova][neutron][ptg] How to increase the minimum bandwidth guarantee of a running instance In-Reply-To: <20200519195510.chfuo5byodcrooaj@skaplons-mac> References: <20200519195510.chfuo5byodcrooaj@skaplons-mac> Message-ID: On Tue, 2020-05-19 at 21:55 +0200, Slawek Kaplonski wrote: > Hi, > > Thx for starting this thread. > I can share some thoughts from the Neutron point of view. > > On Tue, May 19, 2020 at 04:08:18PM +0200, Balázs Gibizer wrote: > > Hi, > > > > [This is a topic from the PTG etherpad [0].
As the PTG time is intentionally > > kept short, let's try to discuss it or even conclude it before the PTG] > > > > As a next step in the minimum bandwidth QoS support I would like to solve > > the use case where a running instance has some ports with minimum bandwidth > > but then user wants to change (e.g. increase) the minimum bandwidth used by > > the instance. > > > > I see two generic ways to solve the use case: > > > > Option A - interface attach > > --------------------------- > > > > Attach a new port with minimum bandwidth to the instance to increase the > > instance's overall bandwidth guarantee. > > > > This only impacts Nova's interface attach code path: > > 1) The interface attach code path needs to read the port's resource request > > 2) Call Placement GET /allocation_candidates?in_tree=<instance> > > 3a) If placement returns candidates then select one and modify the current > > allocation of the instance accordingly and continue the existing interface > > attach code path. > > 3b) If placement returns no candidates then there is no free resource left > > on the instance's current host to resize the allocation locally. So currently we don't support attaching ports with resource requests. If we were to do that I would prefer to make it more generic, e.g. support attaching SR-IOV devices as well. I don't think we should ever support this for the use case of changing QoS policies or bandwidth allocations, but I think this is a good feature in its own right. > > > > > > Option B - QoS rule update > > -------------------------- > > > > Allow changing the minimum bandwidth guarantee of a port that is already > > bound to the instance. > > > > Today Neutron rejects such QoS rule update. If we want to support such > > update then: > > * either Neutron should call placement allocation_candidates API and the > > update the instance's allocation. Similarly what Nova does in Option A.
> > * or Neutron should tell Nova that the resource request of the port has been > > changed and then Nova needs to call Placement and update instance's > > allocation. > > In this case, if You update QoS rule, don't forget that policy with this rule > can be used by many ports already. So we will need to find all of them and > call placement for each. > What if that will be fine for some ports but not for all? I think if we went with a QoS rule update we would not actually modify the rule itself, as that would break too many things, and instead change the QoS rule that is applied to the port. E.g. if you have a 1Gbps rule and a 10Gbps rule then we could support swapping between the rules, but we should not support changing the 1Gbps rule into a 2Gbps rule. Neutron should ideally do the placement check and allocation update as part of the QoS rule update API action and raise an exception if it could not. > > > > > > > > The Option A and Option B are not mutually exclusive but still I would like > > to see what is the preference of the community. Which direction should we > > move forward? > > There is also 3rd possible option, very similar to Option B which is change of > the QoS policy for the port. It's basically almost the same as Option B, but > that way You have always only one port to update (unless it's not policy > associated with network). So because of that reason, maybe a bit easier to do. Yes, that is what I was suggesting above, and it's one of the options we discussed when first designing the minimum bandwidth policy. This I think is the optimal solution, and I don't think we should do Option A or B, although A could be done as a separate feature, just not as the way we recommend to update QoS policies. > > > > > > > > Both options have the limitation that if the instance's current host does > > not have enough free resources for the requested change then Nova will not > > do a full scheduling and move the instance to another host where resource is > > available. > > This seems a hard problem to me. I honestly don't think it is; we considered this during the design of the feature with the intent of one day supporting it. Option C was how I always assumed it would work. Supporting attach and detach for ports or other things with resource requests is a separate topic, as it applies to GPU hotplug, SR-IOV ports and Cyborg, so I would ignore that for now and focus on what is basically a QoS resize action where we are swapping between predefined QoS policies. > > > > Do you have any idea how can we remove / ease this limitation without > > boiling the ocean? > > > > For example: Does it make sense to implement a bandwidth weigher in the > > scheduler so instances can be spread by free bandwidth during creation? We discussed this in the past briefly. I always believed that was a good idea, but it would require the allocation candidates, and the provider summaries, to be passed to the weigher. We have other use cases that could benefit from that too, but I think in the past that was seen as too much work when we did not even have the basic support working yet. Now I think it would be a reasonable next step, and as I said we will need the ability to weigh based on allocation candidates for other features in the future too, so this might be a nice time to introduce that. > > > > > > Cheers, > > gibi > > > > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > > > > > > > > From melwittt at gmail.com Tue May 19 23:10:38 2020 From: melwittt at gmail.com (melanie witt) Date: Tue, 19 May 2020 16:10:38 -0700 Subject: [Nova][Scheduler] Reducing race-conditions and re-scheduling during creation of multiple high-ressources instances or instances with anti-affinity. In-Reply-To: References: Message-ID: <2a15a743-06d6-93be-4489-ebe5ff3a053c@gmail.com> On 5/19/20 15:23, Laurent Dumont wrote: > From what we can gather, there are a couple of parameters that be be > tweaked. > > 1. host_subset_size (Return X number of host instead of 1?) > 2.
randomize_allocation_candidates (Not 100% on this one) > 3. shuffle_best_same_weighed_hosts (Return a random of X number of > computes if they are all equal (instance of the same list for all > scheduling requests)) > 4. max_attempts (how many times the Scheduler will try to fit the > instance somewhere) > > We've already raised "max_attempts" to 5 from the default of 3 and will > raise it further. That said, what are the recommendations for the rest > of the settings? We are not exactly concerned with stacking vs spreading > (but that's always nice) of the instances but rather making sure > deployments fail because of real reasons and not just because > Nova/Scheduler keeps stepping on it's own toes. This is something I've written in the past related to the anti-affinity piece of what you're describing, that might be of help: https://bugzilla.redhat.com/show_bug.cgi?id=1780380#c4 Option (2) in your list only helps if you have > 1000 hosts in your deployment and you want to make sure resource provider candidates beyond the same first 1000 are regularly made available for scheduling (by randomizing before returning the top 1000 weighted hosts). The placement API will limit the maximum number of returned allocation candidates to 1000 for performance reasons. Option (3) in your list only helps if you have lots of hosts being weighed equally and you need some randomization per exact weight to help prevent collisions. This is usually applicable to requests for certain NUMA topology and you get many hosts weighted equally. Hope this helps, -melanie From melwittt at gmail.com Tue May 19 23:17:31 2020 From: melwittt at gmail.com (melanie witt) Date: Tue, 19 May 2020 16:17:31 -0700 Subject: [Nova][Scheduler] Reducing race-conditions and re-scheduling during creation of multiple high-ressources instances or instances with anti-affinity. 
In-Reply-To: <2a15a743-06d6-93be-4489-ebe5ff3a053c@gmail.com> References: <2a15a743-06d6-93be-4489-ebe5ff3a053c@gmail.com> Message-ID: On 5/19/20 16:10, melanie witt wrote: > On 5/19/20 15:23, Laurent Dumont wrote: >>  From what we can gather, there are a couple of parameters that be be >> tweaked. >> >>  1. host_subset_size (Return X number of host instead of 1?) >>  2. randomize_allocation_candidates (Not 100% on this one) >>  3. shuffle_best_same_weighed_hosts (Return a random of X number of >>     computes if they are all equal (instance of the same list for all >>     scheduling requests)) >>  4. max_attempts (how many times the Scheduler will try to fit the >>     instance somewhere) >> >> We've already raised "max_attempts" to 5 from the default of 3 and >> will raise it further. That said, what are the recommendations for the >> rest of the settings? We are not exactly concerned with stacking vs >> spreading (but that's always nice) of the instances but rather making >> sure deployments fail because of real reasons and not just because >> Nova/Scheduler keeps stepping on it's own toes. > > This is something I've written in the past related to the anti-affinity > piece of what you're describing, that might be of help: > > https://bugzilla.redhat.com/show_bug.cgi?id=1780380#c4 > > Option (2) in your list only helps if you have > 1000 hosts in your > deployment and you want to make sure resource provider candidates beyond > the same first 1000 are regularly made available for scheduling (by > randomizing before returning the top 1000 weighted hosts). The placement > API will limit the maximum number of returned allocation candidates to > 1000 for performance reasons. 
And for reference, here is where the limit of 1000 results comes from; it is configurable: https://docs.openstack.org/nova/queens/configuration/config.html#scheduler.max_placement_results > Option (3) in your list only helps if you have lots of hosts being > weighed equally and you need some randomization per exact weight to help > prevent collisions. This is usually applicable to requests for certain > NUMA topology and you get many hosts weighted equally. > > Hope this helps, > -melanie > From smooney at redhat.com Tue May 19 23:33:10 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 20 May 2020 00:33:10 +0100 Subject: [Nova][Scheduler] Reducing race-conditions and re-scheduling during creation of multiple high-ressources instances or instances with anti-affinity. In-Reply-To: References: Message-ID: On Tue, 2020-05-19 at 18:23 -0400, Laurent Dumont wrote: > Hey everyone, > > We are seeing a pretty consistent issue with Nova/Scheduler where some > instances creation are hitting the "max_attempts" limits of the scheduler. Well, the answer you are not going to like is that nova is working as expected, and we expect this to happen when you use multi create. Placement helps reduce the issue, but there are some fundamental issues with how we do retries that make this hard to fix.
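For reference, the options discussed in this thread all live in nova.conf. A fragment along these lines collects them in one place; the section names follow the Queens configuration reference, and the values are purely illustrative rather than recommendations (see the sizing advice later in this message):

```ini
[filter_scheduler]
# Pick randomly among the best N weighed hosts instead of always the single best.
host_subset_size = 15
# Break ties randomly between hosts that weigh exactly the same.
shuffle_best_same_weighed_hosts = true

[scheduler]
# Reschedule attempts before a build fails for good.
max_attempts = 5
# How many allocation candidates Placement may return (default 1000).
max_placement_results = 1000

[placement]
# Only useful when max_placement_results truncates the candidate list,
# to avoid packing hosts in natural DB order.
randomize_allocation_candidates = false
```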
I'm not going to go into the details right now as it's not helpful, but we have had queries about this from customers in the past, so fortunately I do have some recommendations I can share: https://bugzilla.redhat.com/show_bug.cgi?id=1759545#c8 Well, now that I have made the comment public I can :) > > Env : Red Hat Queens > Computes : All the same hardware and specs (even weight throughout) > Nova : Three nova-schedulers > > This can be due to two different factors (from what we've seen) : > > - Anti-affinity rules are getting triggered during the creation (two > claims are done within a few milliseconds on the same compute) which counts > as a retry (we've seen this when spawning 40+ VMs in a single server group > with maybe 50-55 computes - or even less 14 instances on 20ish computes). Yep, the only way to completely avoid this issue on queens (and, depending on what feature you are using, on master) is to boot the VMs serially, waiting for each VM to spawn. > - We've seen another case where MEMORY_MB becomes an issue (we are > spinning new instances in the same host-aggregate where VMs are already > running. Only one VM can run per compute but there are no anti-affinity > groups to force that between the two deployments. The ressource > requirements prevent anything else from getting spun on those). > - The logs look like the following : - Unable to submit allocation for instance > 659ef90e-33b8-42a9-9c8e-fac87278240d (409 {"errors": [{"status": 409, > "request_id": "req-429c2734-2f2d-4d2d-82d1-fa4ebe12c991", > "detail": "There > was a conflict when trying to complete your request.\n\n Unable > to allocate > inventory: Unable to create allocation for 'MEMORY_MB' on > resource provider > '35b78f3b-8e59-4f2f-8cad-eaf116b7c1c7'. The requested amount would exceed > the capacity. ", "title": "Conflict"}]}) / Setting instance to ERROR > state.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted > all hosts available for retrying build failures for instance > f6d06cca-e9b5-4199-8220-e3ff2e5c2a41. In this case you are racing with other instances for that host. Basically, when doing a multi create, if any VM fails to boot it will go to the next host in the alternate host list and try to create an allocation against the first host in that list. However, when the alternate host list was created none of the VMs had been spawned yet. By the time the retry arrives at the conductor, one of the other VMs could have been scheduled to that host, either as a first choice or because that other VM retried first and won the race. When this happens we then try the next host in the list, where we can race again. Since the retries happen at the cell conductor level without going back to the scheduler again, we are not going to check the current status of the host using the anti-affinity filter or anti-affinity weigher during the retry, so while it was valid initially it can be invalid when we try to use the alternate host. The only way to fix that is to have retries not use alternate hosts and instead have each retry rerun the full scheduling process so that it can make a decision based on the current state of the hosts, not the old view. > - I do believe we are hitting this issue as well : > https://bugs.launchpad.net/nova/+bug/1837955 > - In all the cases where the Stacks creation failed, one instance was > left in the Build state for 120 minutes and then finally failed. > > From what we can gather, there are a couple of parameters that be be > tweaked. > > 1. host_subset_size (Return X number of host instead of 1?) > 2. randomize_allocation_candidates (Not 100% on this one) > 3. shuffle_best_same_weighed_hosts (Return a random of X number of > computes if they are all equal (instance of the same list for all > scheduling requests)) > 4. max_attempts (how many times the Scheduler will try to fit the > instance somewhere) > > We've already raised "max_attempts" to 5 from the default of 3 and will > raise it further. That said, what are the recommendations for the rest of > the settings? We are not exactly concerned with stacking vs spreading (but > that's always nice) of the instances but rather making sure deployments > fail because of real reasons and not just because Nova/Scheduler keeps > stepping on it's own toes. https://bugzilla.redhat.com/show_bug.cgi?id=1759545#c8 has some suggestions, but tl;dr: it should be safe to set max_attempts=10 if you set subset_size=15 and shuffle_best_same_weighed_hosts=true. That said, I really would not put max_attempts over 10; max_attempts=5 should be more than enough. subset_size=15 is a little bit arbitrary; the best value will depend on the typical size of your deployment and the size of your cloud. randomize_allocation_candidates helps if and only if you have limited the number of allocation candidates returned by placement to a subset of your cloud hosts. E.g. if you set the placement allocation candidate limit to 10 for a cloud with 100 hosts, then you should set randomize_allocation_candidates=true so that you do not get a bias that packs hosts based on the natural DB order. The default limit for allocation candidates is 1000, so unless you have more than 1000 hosts or have changed that limit you do not need to set this. > > Thanks! From yumeng_bao at yahoo.com Wed May 20 03:26:32 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Wed, 20 May 2020 11:26:32 +0800 Subject: [Cyborg][PTG]Virtual PTG Planning References: <3D20A938-2462-429F-B79F-88102BF02863.ref@yahoo.com> Message-ID: <3D20A938-2462-429F-B79F-88102BF02863@yahoo.com> Hi team, I have booked the following slots for Cyborg Virtual PTG [1] based on the doodle poll results.
* June 2 Tuesday 6:00 UTC - 8:00 UTC, Room Cactus * June 3 Wednesday 6:00 UTC - 8:00 UTC, Room Cactus * June 4 Thursday 6:00 UTC - 8:00 UTC, Room Cactus * June 5 Friday 6:00 UTC - 8:00 UTC, Room Bexar Moreover, I have synced with Slaweq and gibi, and we agreed to have a Nova-Neutron-Cyborg cross project discussion on Friday from 14:00 UTC to 15:00 UTC in Room Juno (Thanks to Nova for sharing this time slot!) There are two weeks to go before the virtual PTG. So I would like to encourage you to LOOK AT the etherpad[2] and ADD YOUR NAME to the Attendance section if you are planning to attend! If you have topics for the PTG, please try to start discussing them now either in the etherpad[2] or in a spec or just on the ML. (Such preparation is especially necessary for those cross project topics to get agreements/directions internally first; we have very limited time to conclude the topics during the PTG.) Let's continue discussing this at tomorrow's weekly meeting. [1] https://ethercalc.openstack.org/126u8ek25noy [2] https://etherpad.opendev.org/p/cyborg-victoria-goals Regards, Yumeng From dev.faz at gmail.com Wed May 20 07:40:24 2020 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Wed, 20 May 2020 09:40:24 +0200 Subject: [operators] multi release upgrades? Message-ID: Hi, has anybody already done a multi-release upgrade, e.g. queens->train, and is able to share their know-how? Thanks, Fabian From ccamacho at redhat.com Wed May 20 07:47:42 2020 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Wed, 20 May 2020 09:47:42 +0200 Subject: [operators] multi release upgrades? In-Reply-To: References: Message-ID: Hi, There are some upstream docs to achieve that[1]. So, depending on the OpenStack flavor you are using the steps might differ, but the overall approach is pretty much described in that link.
[1]: https://specs.openstack.org/openstack/tripleo-specs/specs/queens/fast-forward-upgrades.html Cheers, Carlos. On Wed, May 20, 2020 at 9:43 AM Fabian Zimmermann wrote: > > Hi, > > has anybody already done a multi-release upgrade, e.g. queens->train, and is able to share their know-how? > > Thanks, > > Fabian From tobias.urdin at binero.com Wed May 20 09:18:29 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Wed, 20 May 2020 09:18:29 +0000 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> , <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> Message-ID: <1589966309759.42370@binero.com> Hello, I'm thinking that maybe we can compromise on certain points here. I would say it's pretty certain we would not break any Puppet 5 code in itself in the Victoria cycle, since we are already behind most of the new features anyway.
What if: * We officially state Puppet 5 is unsupported * Remove Puppet 5 as supported in metadata.json * Only run Puppet 6 integration testing for CentOS and Ubuntu But we: * Keep a promise to not break Puppet 5 usage in Victoria * Keep Puppet 5 unit/syntax testing (as well as Puppet 6 of course) * Debian can run integration testing with Puppet 5 if you fix those up The benefit here is that we would not expose new consumers to Puppet 5, but there are also drawbacks in that: * You cannot use Puppet 5 and do a "puppet module install" since metadata.json would cause Puppet 5 to not be supported (a note here is that we don't even test that this is possible with the current state of the modules, i.e. checking for conflicting or faulty dependencies in the metadata.json files) Since the only issue here is that downstream Debian wants to use Puppet 5, I think this is a fair compromise, and since you package the Puppet modules in Debian I assume you don't need any support being stated in metadata.json for Puppet 5 explicitly. What do you think? Best regards Tobias ________________________________________ From: Thomas Goirand Sent: Monday, May 11, 2020 5:37 PM To: openstack-discuss at lists.openstack.org Subject: Re: [puppet] Puppet 5 is officially unsupported in Victoria release On 5/11/20 2:03 PM, Takashi Kajinami wrote: > Hi, > > > IIUC the most important reason behind puppet 5 removal is that puppet 5 > is EOLed soon, this month. > https://puppet.com/docs/puppet/latest/about_agent.html > > As you know puppet-openstack has some external dependencies, this can > cause the problem with our support for puppet 5. > For example if any dependencies remove their compatibility with puppet 5, > we should pin all of them to keep puppet 5 tests running. > This is the biggest concern I know about keeping puppet 5 support.
> > While it makes sense to use puppet 5 for existing stable branches from a > stable > management perspective, I don't think it's actually reasonable to extend > support > for EOLed stuff in master development with possibly adding pins to old > modules. > IMO we can delay the actual removal a bit until puppet 6 gets ready in > Debian, > but I'd like to hear some actual plans to have puppet 6 available in Debian > so that we can expect short gap about puppet 5 eol timing, between > puppet-openstack > and puppet itself. > > Thank you, > Takashi Thank you; a bit more time is the only thing I was asking for! About the plan for packaging Puppet 6 in Debian: I don't know yet, as someone will have to do the work, and that's probably going to be me, since nobody is volunteering... :( Now, about dependencies: if supporting Puppet 5 gets in the way of using a newer dependency, then I suppose we can try to manage this when it happens. Worst case: forget about Puppet 5 if we get into such a bad situation. Until we're there, let's hope it doesn't happen too soon. I can tell you when I know more about the amount of work that there is to do. At the moment, it's still a bit blurry to me. Cheers, Thomas Goirand (zigo) From tobias.urdin at binero.com Wed May 20 09:48:17 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Wed, 20 May 2020 09:48:17 +0000 Subject: [operators] multi release upgrades? In-Reply-To: References: , Message-ID: <1589968097342.69605@binero.com> Hello, I cannot speak for Queens to Train specifically, but we jumped from Rocky to Train without any major hiccups. We did a lot of testing before; just be sure to verify that the projects support multiple jumps. I'm unsure whether the minimum version for Nova to jump to Train was Queens or Rocky, so you might want to start by looking at that.
Best regards Tobias ________________________________________ From: Carlos Camacho Gonzalez Sent: Wednesday, May 20, 2020 9:47 AM To: Fabian Zimmermann Cc: OpenStack Discuss Subject: Re: [operators] multi release upgrades? Hi, There are some upstream docs to achieve that[1]. So, depending on the OpenStack flavor you are using, the steps might differ, but the overall approach is pretty much described in that link. [1]: https://specs.openstack.org/openstack/tripleo-specs/specs/queens/fast-forward-upgrades.html Cheers, Carlos. On Wed, May 20, 2020 at 9:43 AM Fabian Zimmermann wrote: > > Hi, > > Does anybody did already some multi release upgrades f.e. queens->train and is able to share his knowhow? > > Thanks, > > Fabian From thierry at openstack.org Wed May 20 09:54:01 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 20 May 2020 11:54:01 +0200 Subject: [largescale-sig] Next meeting: May 20, 8utc In-Reply-To: <3aae1db3-625e-4c78-e408-feed94a3a506@openstack.org> References: <3aae1db3-625e-4c78-e408-feed94a3a506@openstack.org> Message-ID: <382d32d0-b9bd-d631-1f76-6bcd85d424a9@openstack.org> Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-05-20-08.00.html TODOs: - masahito to post initial oslo.metrics code to openstack/oslo.metrics Next meeting: Jun 10, 8:00UTC on #openstack-meeting-3 -- Thierry Carrez (ttx) From katonalala at gmail.com Wed May 20 12:33:54 2020 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 20 May 2020 14:33:54 +0200 Subject: [neutron][release] Missing tarballs for networking-mlnx, networking-l2gw and vmware-nsx In-Reply-To: References: Message-ID: Hi, Back to the question of moving l2gw under 'x' from openstack. I think it is a valid option, which we discussed with the networking team. From the neutron perspective, as networking-l2gw (and networking-l2gw-tempest-plugin) is not a neutron stadium project, it is not monitored by them.
As I see it, like tap-as-a-service and other small networking (but not stadium) projects, it can live under 'x'. Regards Lajos Sean McGinnis wrote (on Tue, May 19, 2020, at 17:51): > > >> There are no tarballs published for ussuri networking-mlnx or > >> vmware-nsx. The most recently published tarballs were in February. Is > >> this expected? networking-l2gw also has no ussuri tarballs, although > >> master was updated recently. > > Thanks to Yatin Karel for pointing out that some projects now publish > > under x/ namespace. l2gw does not though, so I'm not sure about that > > one. > > It looks like networking-l2gw is still under the openstack/ namespace: > > https://opendev.org/openstack/networking-l2gw > > Though it looks like it shouldn't be since it is not under official > governance: > > > https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml > > No official release team releases have been done for this repo. It looks > like there was a 15.0.0 tag pushed by the team last October: > > https://opendev.org/openstack/networking-l2gw/src/tag/15.0.0 > > And there is a corresponding 15.0.0 tarball published for that one: > > > https://tarballs.opendev.org/openstack/networking-l2gw/networking-l2gw-15.0.0.tar.gz > > So it looks like this repo either needs to be added to governance, and > added to the releases deliverables, or it needs to move to the x/ > namespace and have tagging managed by the team. > > Sean > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kotobi at dkrz.de Wed May 20 12:41:26 2020 From: kotobi at dkrz.de (Amjad Kotobi) Date: Wed, 20 May 2020 14:41:26 +0200 Subject: [keystone][ldap] Message-ID: <27629E23-B721-4A87-B2D4-B29710D6E835@dkrz.de> Hi all, I’m integrating keystone with LDAP, and I have “service accounts”, e.g. Nova, keystone, etc., which are in the database.
As soon as I connect it to LDAP, all authentication starts failing. How can I have both “service accounts” and “LDAP users” connected to Keystone?

Here is my keystone.conf

###################
[ldap]
url = ldap://XXXXX
user = uid=XXX,cn=sysaccounts,cn=etc,dc=XXX,dc=de
password = dkrzprox
user_tree_dn = cn=users,cn=accounts,dc=XXX,dc=de
user_objectclass = posixAccount
user_id_attribute = uid
user_name_attribute = uid
user_allow_create = false
user_allow_update = false
user_allow_delete = false
group_tree_dn = cn=groups,cn=accounts,dc=XXX,dc=de
group_objectclass = groupOfNames
group_id_attribute = cn
group_name_attribute = cn
group_member_attribute = member
group_desc_attribute = description
group_allow_create = false
group_allow_update = false
group_allow_delete = false
use_pool = true
use_auth_pool = true
debug_level = 4095
query_scope = sub

[identity]
driver = ldap
#####################

OS: Centos7
OpenStack-Release: Train

Any idea or example of options would be great!

Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.chanu at univ-lyon1.fr Wed May 20 12:51:03 2020 From: romain.chanu at univ-lyon1.fr (CHANU ROMAIN) Date: Wed, 20 May 2020 12:51:03 +0000 Subject: [keystone][ldap] In-Reply-To: <27629E23-B721-4A87-B2D4-B29710D6E835@dkrz.de> References: <27629E23-B721-4A87-B2D4-B29710D6E835@dkrz.de> Message-ID: <1589979063804.1716@univ-lyon1.fr> Hello, You can use multi-domain authentication: one domain using LDAP and another one using the database. https://docs.openstack.org/keystone/latest/admin/configuration.html Best regards, Romain ________________________________ From: Amjad Kotobi Sent: Wednesday, May 20, 2020 2:41 PM To: openstack-discuss at lists.openstack.org Subject: [keystone][ldap] Hi all, I'm integrating keystone with LDAP, and having "service account" e.g. Nova, keystone etc.. which are in database.
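Romain's multi-domain suggestion boils down to keeping the service accounts in an SQL-backed domain and putting LDAP users in their own domain via keystone's domain-specific configuration files. A sketch; the domain name "Users" and the file paths follow keystone's naming convention for per-domain configs, but are assumptions here, and the [ldap] options would be the ones from the original mail:

```ini
# /etc/keystone/keystone.conf -- the default domain stays on the SQL
# driver, so service accounts (nova, keystone, ...) keep working
[identity]
driver = sql
domain_specific_drivers_enabled = true
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.Users.conf -- LDAP applies only to
# the "Users" domain
[identity]
driver = ldap

[ldap]
url = ldap://XXXXX
# remaining user_/group_ options as in the original keystone.conf
```

Users in the LDAP-backed domain would then authenticate against that domain (e.g. --os-user-domain-name Users), while service accounts continue to use the default domain.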
As soon as connecting it to ldap all authentication getting failed, how can I have both "service account" and "LDAP users" connected to Keystone? Here is my keystone.conf ################### [ldap] url = ldap://XXXXX user = uid=XXX,cn=sysaccounts,cn=etc,dc=XXX,dc=de password = dkrzprox user_tree_dn = cn=users,cn=accounts,dc=XXX,dc=de user_objectclass = posixAccount user_id_attribute = uid user_name_attribute = uid user_allow_create = false user_allow_update = false user_allow_delete = false group_tree_dn = cn=groups,cn=accounts,dc=XXX,dc=de group_objectclass = groupOfNames group_id_attribute = cn group_name_attribute = cn group_member_attribute = member group_desc_attribute = description group_allow_create = false group_allow_update = false group_allow_delete = false use_pool = true use_auth_pool = true debug_level = 4095 query_scope = sub [identity] driver = ldap ##################### OS: Centos7 OpenStack-Release: Train Any idea or example of options gonna be great! Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsharma1818 at outlook.com Wed May 20 13:32:57 2020 From: rsharma1818 at outlook.com (Rahul Sharma) Date: Wed, 20 May 2020 13:32:57 +0000 Subject: [Neutron] How to change the MAC address of Gateway interface of the router In-Reply-To: <9c2453a3f9531dccf8d6da219fa672428eef8668.camel@redhat.com> References: , <9c2453a3f9531dccf8d6da219fa672428eef8668.camel@redhat.com> Message-ID: Thanks Sean.. Will definitely try this ________________________________ From: Sean Mooney Sent: Monday, May 18, 2020 4:55 AM To: Rahul Sharma ; openstack-discuss at lists.openstack.org Subject: Re: [Neutron] How to change the MAC address of Gateway interface of the router On Sat, 2020-05-16 at 17:05 +0000, Rahul Sharma wrote: > Hi, > > I have setup a multi-host openstack cloud on AWS consisting of 3 servers i.e. Controller, Compute & Network > > Everything is working as expected. 
My requirement is that the compute instances should be able to communicate with the > internet and vice-versa. > > However, AWS due to its security policies will drop all traffic that is sourced from the VMs because the VM traffic > will have the MAC address of the gateway interface of the router when it hits the AWS switch. This MAC address is not > know to AWS hence it drops this traffic. AWS will allow only that traffic that contains the registered MAC address as > its source address > > So I need to change the MAC address of the gateway interface of the L3 router on the network node. I tried googling > but could not find any solution. > > Is there any solution/command to do this ? you might be able to do a neutorn port update to update the neutron port mac of the router your other options is to not add an interface directly to br-ex and instead assign the wan netwroks gateway ip to the br-ex directly and nat the traffic https://www.rdoproject.org/networking/networking-in-too-much-detail/#nat-to-host-addres > > Thanks, > Kaushik -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Wed May 20 13:50:16 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 20 May 2020 15:50:16 +0200 Subject: [nova][neutron][ptg] How to increase the minimum bandwidth guarantee of a running instance In-Reply-To: References: <20200519195510.chfuo5byodcrooaj@skaplons-mac> Message-ID: On Tue, May 19, 2020 at 23:48, Sean Mooney wrote: > On Tue, 2020-05-19 at 21:55 +0200, Slawek Kaplonski wrote: >> Hi, >> >> Thx for starting this thread. >> I can share some thoughts from the Neutron point of view. >> >> On Tue, May 19, 2020 at 04:08:18PM +0200, Balázs >> Gibizer wrote: >> > Hi, >> > >> > [This is a topic from the PTG etherpad [0]. 
As the PTG time is >> intentionally >> > kept short, let's try to discuss it or even conclude it before >> the PTG] >> > >> > As a next step in the minimum bandwidth QoS support I would like >> to solve >> > the use case where a running instance has some ports with minimum >> bandwidth >> > but then user wants to change (e.g. increase) the minimum >> bandwidth used by >> > the instance. >> > >> > I see two generic ways to solve the use case: >> > >> > Option A - interface attach >> > --------------------------- >> > >> > Attach a new port with minimum bandwidth to the instance to >> increase the >> > instance's overall bandwidth guarantee. >> > >> > This only impacts Nova's interface attach code path: >> > 1) The interface attach code path needs to read the port's >> resource request >> > 2) Call Placement GET /allocation_candidates?in_tree=> of the >> > instance> >> > 3a) If placement returns candidates then select one and modify >> the current >> > allocation of the instance accordingly and continue the existing >> interface >> > attach code path. >> > 3b) If placement returns no candidates then there is no free >> resource left >> > on the instance's current host to resize the allocation locally. > so currently we dont support attaching port with resouce request. > if we were to do that i would prefer to make it more generic e.g. > support attich sriov devices as well. > > i dont think we should ever support this for the usecase of changing > qos policies or bandwith allocations > but i think this is a good feature in its own right. >> > >> > >> > Option B - QoS rule update >> > -------------------------- >> > >> > Allow changing the minimum bandwidth guarantee of a port that is >> already >> > bound to the instance. >> > >> > Today Neutron rejects such QoS rule update. If we want to support >> such >> > update then: >> > * either Neutron should call placement allocation_candidates API >> and the >> > update the instance's allocation. 
Similarly what Nova does in >> Option A. >> > * or Neutron should tell Nova that the resource request of the >> port has been >> > changed and then Nova needs to call Placement and update >> instance's >> > allocation. >> >> In this case, if You update QoS rule, don't forget that policy with >> this rule >> can be used by many ports already. So we will need to find all of >> them and >> call placement for each. >> What if that will be fine for some ports but not for all? > i think if we went with a qos rule update we would not actully modify > the rule itself > that would break to many thing and instead change change the qos rule > that is applied to the port. > > e.g. if you have a 1GBps rule and and 10GBps then we could support > swaping between the rules > but we should not support chnaging the 1GBps rule to a 2GBps rule. > > neutron should ideally do the placement check and allocation update > as part of the qos rule update > api action and raise an exception if it could not. >> >> > >> > >> > The Option A and Option B are not mutually exclusive but still I >> would like >> > to see what is the preference of the community. Which direction >> should we >> > move forward? >> >> There is also 3rd possible option, very similar to Option B which >> is change of >> the QoS policy for the port. It's basically almost the same as >> Option B, but >> that way You have always only one port to update (unless it's not >> policy >> associated with network). So because of that reason, maybe a bit >> easier to do. > > yes that is what i was suggesting above and its one of the option we > discused when first > desigining the minium bandwith policy. this i think is the optimal > solution and i dont think we should do > option a or b although A could be done as a sperate feature just not > as a way we recommend to update qos policies. My mistake. I don't want to allow changing a rule I want to allow changing which rule is assigned to a bound port. 
As Sean described this direction might require neutron to call GET /allocation_candidates and then update the instance allocation as a result in placement. However it would create a situation where the instance's allocation is managed both from nova and neutron. > >> >> > >> > >> > Both options have the limitation that if the instance's current >> host does >> > not have enough free resources for the requested change then Nova >> will not >> > do a full scheduling and move the instance to another host where >> resource is >> > available. This seems a hard problem to me. > i honestly dont think it is we condiered this during the design of > the feature with the > intent of one day supporting it. option c was how i always assumed it > would work. > support attach and detach for port or other things with reqsouce > requests is a seperate topic > as it applies to gpu hotplug, sriov port and cyborg so i would ignore > that for now and focuse > on what is basicaly a qos resize action where we are swaping between > predefiend qos policies. >> > >> > Do you have any idea how can we remove / ease this limitation >> without >> > boiling the ocean? >> > >> > For example: Does it make sense to implement a bandwidth weigher >> in the >> > scheduler so instances can be spread by free bandwidth during >> creation? > we discussed this in the passed breifly. i always belived that was a > good idea but it would require the allocation > candiates to be passed to the weigher and the provider summaries. we > have other usecases that could benifit form that > too but i think in the past that was see as to much work when we did > not even have the basic support working yet. > now i think it would be a resonable next step and as i said we will > need the ability to weigh based on allcoation > candiates in the future of for other feature too so this might be a > nice time to intoduce that. 
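At the data level, the allocation update discussed above (folding a port's bandwidth resource request into the instance's existing placement allocations) is a per-resource-class addition. A toy sketch with placement-style dicts; this is not nova's or neutron's actual code, though the resource-class name is placement's standard egress-bandwidth class:

```python
def merge_port_request(allocations, rp_uuid, port_request):
    """Add a port's resource request to an instance's allocation.

    `allocations` mirrors the placement API shape:
    {rp_uuid: {"resources": {resource_class: amount}}}
    Returns a new dict; the input is not mutated.
    """
    merged = {rp: {"resources": dict(alloc["resources"])}
              for rp, alloc in allocations.items()}
    resources = merged.setdefault(rp_uuid, {"resources": {}})["resources"]
    for rc, amount in port_request.items():
        resources[rc] = resources.get(rc, 0) + amount
    return merged

current = {"net-rp": {"resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000}}}
updated = merge_port_request(
    current, "net-rp", {"NET_BW_EGR_KILOBIT_PER_SEC": 500})
print(updated["net-rp"]["resources"]["NET_BW_EGR_KILOBIT_PER_SEC"])  # 1500
```

The hard part both options share is not this merge but deciding, via GET /allocation_candidates, whether the current host can absorb the increase at all.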
>> > >> > >> > Cheers, >> > gibi >> > >> > >> > [0] https://etherpad.opendev.org/p/nova-victoria-ptg >> > >> > >> > >> >> > From balazs.gibizer at est.tech Wed May 20 14:03:56 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 20 May 2020 16:03:56 +0200 Subject: [nova][neutron][ptg] Future of the routed network support Message-ID: Hi, [This is a topic from the PTG etherpad [0]. As the PTG time is intentionally kept short, let's try to discuss it or even conclude it before the PTG] There is only basic scheduling support in nova for the neutron routed networks feature (server create with port.ip_allocation=deferred seems to work). There was multiple attempts in the past to complete the support (e.g.server create with port.ip_allocation=immediate or server move operations). The latest attempt started by Matt couple of cycles ago, and in the last cycle I tried to push that forward[1]. When I added this topic to the PTG etherpad I thought I will have time in the Victoria cycle to continue [1] but internal priorities has changed. So finishing this feature needs some developers. If there are volunteers for Victoria then please let me know and then we can keep this as a topic for the PTG but otherwise I will remove it from the schedule. Cheers, gibi [0] https://etherpad.opendev.org/p/nova-victoria-ptg [1] https://review.opendev.org/#/q/topic:routed-networks-scheduling From laurentfdumont at gmail.com Wed May 20 15:32:03 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Wed, 20 May 2020 11:32:03 -0400 Subject: [Nova][Scheduler] Reducing race-conditions and re-scheduling during creation of multiple high-ressources instances or instances with anti-affinity. In-Reply-To: References: Message-ID: Hey Melanie, Sean, Thank you! That should cover most of our uses cases. Is there any downside to a "subset_size" that would be larger than the actual number of computes? We have some env with 4 computes, and others with 100+. 
Laurent On Tue, May 19, 2020 at 7:33 PM Sean Mooney wrote: > On Tue, 2020-05-19 at 18:23 -0400, Laurent Dumont wrote: > > Hey everyone, > > > > We are seeing a pretty consistent issue with Nova/Scheduler where some > > instances creation are hitting the "max_attempts" limits of the > scheduler. > Well the answer you are not going to like is nova is working as expected > and > we expect this to happen when you use multi > create. placment help reduce the issue but > there are some fundemtal issue with how we do retries that make this hard > to > fix. > > im not going to go into the detail right now as its not helpful but > we have had quryies about this form customer in the past so fortunetly i > do have some > recomendation i can share > https://bugzilla.redhat.com/show_bug.cgi?id=1759545#c8 > well not that i have made the comment public i can :) > > > > > Env : Red Hat Queens > > Computes : All the same hardware and specs (even weight throughout) > > Nova : Three nova-schedulers > > > > This can be due to two different factors (from what we've seen) : > > > > - Anti-affinity rules are getting triggered during the creation (two > > claims are done within a few milliseconds on the same compute) which > counts > > as a retry (we've seen this when spawning 40+ VMs in a single server > group > > with maybe 50-55 computes - or even less 14 instances on 20ish > computes). > yep the only way to completely avoid this issue on queens and depending on > what fature you are using on master > is to boot the vms serially waiting for each vm to sapwn. > > > - We've seen another case where MEMORY_MB becomes an issue (we are > > spinning new instances in the same host-aggregate where VMs are > already > > running. Only one VM can run per compute but there are no > anti-affinity > > groups to force that between the two deployments. The ressource > > requirements prevent anything else from getting spun on those). 
> > - The logs look like the following : > > - Unable to submit allocation for instance > > 659ef90e-33b8-42a9-9c8e-fac87278240d (409 {"errors": [{"status": > 409, > > "request_id": "req-429c2734-2f2d-4d2d-82d1-fa4ebe12c991", > > "detail": "There > > was a conflict when trying to complete your request.\n\n Unable > > to allocate > > inventory: Unable to create allocation for 'MEMORY_MB' on > > resource provider > > '35b78f3b-8e59-4f2f-8cad-eaf116b7c1c7'. The requested amount would > exceed > > the capacity. ", "title": "Conflict"}]}) / Setting instance to > ERROR > > state.: MaxRetriesExceeded: Exceeded maximum number of retries. > Exhausted > > all hosts available for retrying build failures for instance > > f6d06cca-e9b5-4199-8220-e3ff2e5c2a41. > in this case you are racing with ohter instance for the that host. > basically when doing a multi create if any vm fails to boot it will go to > the next > host in the alternate host list and try to create an allcoation againt > ther first host in that list. > > however when the alternate host list was created none of the vms had been > sapwned yet. > by the time the rety arrive at the conductor one of the other vms could > have been schduled to that host either > as a first chose or because that other vm retried first and won the race. > > when this happens we then try the next host in the list wehre we can race > again. > > since the retries happen at the cell conductor level without going back to > the schduler again we are not going to check > the current status of the host using the anti affintiy filter or anti > affintiy weigher during the retry so while it was > vaild intially i can be invalid when we try to use the alternate host. > > the only way to fix that is to have retrys not use alternate hosts and > instead have each retry return the full scudling > process so that it can make a desicion based on the current state of the > server not the old view. 
> > - I do believe we are hitting this issue as well : > > https://bugs.launchpad.net/nova/+bug/1837955 > > - In all the cases where the Stacks creation failed, one instance > was > > left in the Build state for 120 minutes and then finally failed. > > > > From what we can gather, there are a couple of parameters that be be > > tweaked. > > > > 1. host_subset_size (Return X number of host instead of 1?) > > 2. randomize_allocation_candidates (Not 100% on this one) > > 3. shuffle_best_same_weighed_hosts (Return a random of X number of > > computes if they are all equal (instance of the same list for all > > scheduling requests)) > > 4. max_attempts (how many times the Scheduler will try to fit the > > instance somewhere) > > > > We've already raised "max_attempts" to 5 from the default of 3 and will > > raise it further. That said, what are the recommendations for the rest of > > the settings? We are not exactly concerned with stacking vs spreading > (but > > that's always nice) of the instances but rather making sure deployments > > fail because of real reasons and not just because Nova/Scheduler keeps > > stepping on it's own toes. > https://bugzilla.redhat.com/show_bug.cgi?id=1759545#c8 > has some suggestions > but tl;dr it should be safe to set max_attempts=10 if you set > subset_size=15 shuffle_best_same_weighed_hosts=true > that said i really would not put max_attempts over 10, max_attempts 5 > should be more then enough. > subset_size=15 is a little bit arbiraty. the best value will depend on the > type ical size of your deplopyment and the > size of your cloud. randomize_allocation_candidates help if and only if > you have limite the number of allocation > candiates retruned by placment to subset of your cloud hosts. > > e.g. if you set the placemment allcation candiate limit to 10 on for a > cloud with 100 host then you should set > randomize_allocation_candidates=true so that you do not get a bias that > will pack host baded on the natural db order. 
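Sean's rule-of-thumb settings above map onto nova.conf roughly as follows. The section and option names follow recent nova releases (host_subset_size and shuffle_best_same_weighed_hosts under [filter_scheduler], max_attempts and max_placement_results under [scheduler], randomize_allocation_candidates under [placement]); verify them against your release's configuration reference, and treat the values as his suggestions rather than defaults:

```ini
[scheduler]
# reschedule attempts; Sean suggests not going above 10
max_attempts = 10
# cap on allocation candidates returned by placement (default 1000)
max_placement_results = 1000

[filter_scheduler]
# pick randomly among the best N weighed hosts
host_subset_size = 15
shuffle_best_same_weighed_hosts = true

[placement]
# only useful when max_placement_results is below the host count
randomize_allocation_candidates = true
```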
> the default limit for alloction candiates is 1000 so unless you have more > then 1000 hosts or have changed that limit you > do not need to set this. > > > > > Thanks! > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Wed May 20 15:40:07 2020 From: aj at suse.com (Andreas Jaeger) Date: Wed, 20 May 2020 17:40:07 +0200 Subject: [docs][all] Important changes in recent openstackdocstheme updates Message-ID: <67de416d-8881-66f5-29d9-29069290e354@suse.com> A couple of changes recently merged into openstackdocstheme to fix problems reported. These had some surprises in it and we'd like to inform you about the changes: * Config options are now prefixed with openstackdocs_, the old names will be removed in a future release * The 'project' config option is now only respected (and displayed in the left menu) if 'openstackdocs_auto_name = False' is set. By default, the theme uses the package name (from setup.cfg) * The HTML files show the version number by default (with exception of releasenotes and api docs) calculated from git. If you want to use your own version number or disable it, set 'openstackdocs_auto_version = False' and manually configure the 'version' and 'release' options. * Previously, the theme always used 'pygments_style = "native"' and overrode the setting of 'sphinx' that many repos have. Now the setting is respected. For a few repos this lead to unreadable code snippets. If you see this or want to go back to the previous theme, configure 'pygments_style = "native"'. * Many projects have written PDF documents. openstackdocstheme can now optionally link to them. Set 'openstackdocs_pdf_link' to True to show the icon with path. Note that the PDF file is placed on docs.openstack.org in the top of the html files while in check/gate it's in a separate PDF folder. Thus, the site preview will show in check/gate a broken link - but it works fine, check [2]. 
* Both reno (since version 3.1.0) and openstackdocstheme are now declared parallel safe; the CI jobs automatically build releasenotes in parallel [1]. You can modify your local tox job to do this by adding the '-j auto' parameter to your 'sphinx-build' invocation. We're releasing openstackdocstheme version 2.2.1 soon with two further fixes: * PDF documents will now show the version number like HTML documents do, no need to configure versions in conf.py for this anymore [3]. * small bug fix (if you set auto_name = False in doc/source/conf.py; this has hit 5 repos so far) [4]. Everything is documented in the documentation of openstackdocstheme [2]. If there are any questions, best ask in #openstack-oslo. Andreas has started pushing changes to update projects with topic:reno-openstackdocstheme. Hope that's all for Victoria on the openstackdocstheme, Stephen and Andreas [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014902.html [2] https://docs.openstack.org/openstackdocstheme/ [3] https://review.opendev.org/729554 [4] https://review.opendev.org/729031 -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr.
5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From colleen at gazlene.net Wed May 20 16:40:25 2020 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 20 May 2020 09:40:25 -0700 Subject: [keystone] In-Reply-To: References: Message-ID: Hello, On Tue, May 19, 2020, at 08:50, Amjad Kotobi wrote: > Hi, > > I have keystone running in HA multiple nodes, and after reboot one of > the node it started in logs to pop up below messages > > “""WARNING keystone.server.flask.application > [req-fe993224-cc7e-4e5f-83a8-1e925e60b995 > 24dfa0beaaaa48848b4eee68e449f2de d1fecbdb49ef4871b94b201b7856eed3 - > default default] Could not recognize Fernet token: TokenNotFound: Could > not recognize Fernet token””” > > And in command line > > “”"Unauthorized: The request you have made requires authentication. > (HTTP 401) (Request-ID: req-1145c696-6275-4c78-9486-a690b98548ed)””" > > During command lines when request lands on machine which hasn’t > rebooted or so, the result comes but on the rest fore-mentioned message > always in place. > > Openstack release Train, and deployed manually. > > How can I tackle it down? Any ideas? It seems like the key repositories are out of sync among your control plane nodes. Check the keystone fernet FAQ for help distributing and rotating fernet keys: https://docs.openstack.org/keystone/latest/admin/fernet-token-faq.html Colleen > > Thanks > Amjad From colleen at gazlene.net Wed May 20 16:42:40 2020 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 20 May 2020 09:42:40 -0700 Subject: [keystone][ldap] In-Reply-To: <1589979063804.1716@univ-lyon1.fr> References: <27629E23-B721-4A87-B2D4-B29710D6E835@dkrz.de> <1589979063804.1716@univ-lyon1.fr> Message-ID: <1bba0676-1609-43dd-9467-7e11b9f1b86e@www.fastmail.com> Hello, On Wed, May 20, 2020, at 05:51, CHANU ROMAIN wrote: > Hello, > > > > You can use multi domain authentification. 
> > > > One using LDAP and an other one using database > > > > https://docs.openstack.org/keystone/latest/admin/configuration.html Romain is right, use domain-specific configuration to configure a different identity backend for non-service users. The specific section of that page that addresses this is here: https://docs.openstack.org/keystone/latest/admin/configuration.html#domain-specific-configuration Colleen > > > > Best regards, > > Romain > > *From:* Amjad Kotobi > *Sent:* Wednesday, May 20, 2020 2:41 PM > *To:* openstack-discuss at lists.openstack.org > *Subject:* [keystone][ldap] > Hi all, > > I’m integrating keystone with LDAP, and having “service account” e.g. > Nova, keystone etc.. which are in database. > As soon as connecting it to ldap all authentication getting failed, how > can I have both “service account” and “LDAP users” connected to > Keystone? > > Here is my keystone.conf > > > ################### > [ldap] > > url = ldap://XXXXX > > user = uid=XXX,cn=sysaccounts,cn=etc,dc=XXX,dc=de > password = dkrzprox > user_tree_dn = cn=users,cn=accounts,dc=XXX,dc=de > user_objectclass = posixAccount > user_id_attribute = uid > user_name_attribute = uid > user_allow_create = false > user_allow_update = false > user_allow_delete = false > group_tree_dn = cn=groups,cn=accounts,dc=XXX,dc=de > group_objectclass = groupOfNames > group_id_attribute = cn > group_name_attribute = cn > group_member_attribute = member > group_desc_attribute = description > group_allow_create = false > group_allow_update = false > group_allow_delete = false > use_pool = true > use_auth_pool = true > debug_level = 4095 > query_scope = sub > > [identity] > > driver = ldap > > ##################### > > OS: Centos7 > OpenStack-Release: Train > > Any idea or example of options gonna be great! 
> > > Thank you > > > From neil at tigera.io Wed May 20 17:04:06 2020 From: neil at tigera.io (Neil Jerram) Date: Wed, 20 May 2020 18:04:06 +0100 Subject: [keystone][devstack][openstackclient] Problem with Keystone setup in stable/ussuri devstack install Message-ID: + ./stack.sh:main:1091 : create_keystone_accounts + lib/keystone:create_keystone_accounts:314 : local admin_project ++ lib/keystone:create_keystone_accounts:315 : oscwrap project show admin -f value -c id WARNING: Failed to import plugin identity. ++ functions-common:oscwrap:2370 : return 2 + lib/keystone:create_keystone_accounts:315 : admin_project='openstack: '\''project show admin -f value -c id'\'' is not an openstack command. See '\''openstack --help'\''. I believe this is a completely mainline Keystone setup by devstack. The top-level stack.sh code is echo_summary "Starting Keystone" if [ "$KEYSTONE_AUTH_HOST" == "$SERVICE_HOST" ]; then init_keystone start_keystone bootstrap_keystone fi create_keystone_accounts ... bootstrap_keystone succeeded but create_keystone_accounts failed as shown above, trying to execute openstack project show "admin" -f value -c id IIUC, the rootmost problem here is "WARNING: Failed to import plugin identity.", indicating that python-openstackclient is failing to import its openstackclient.identity.client module. But I don't know any more about why that would be. Any ideas? Many thanks, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed May 20 17:55:17 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 20 May 2020 18:55:17 +0100 Subject: [Nova][Scheduler] Reducing race-conditions and re-scheduling during creation of multiple high-ressources instances or instances with anti-affinity. In-Reply-To: References: Message-ID: <8cd05e52e1a78716d25fc6df8773d4109ebf4f93.camel@redhat.com> On Wed, 2020-05-20 at 11:32 -0400, Laurent Dumont wrote: > Hey Melanie, Sean, > > Thank you! 
That should cover most of our use cases. Is there any downside
> to a "subset_size" that would be larger than the actual number of computes?
> We have some envs with 4 computes, and others with 100+.

it will basically make the weigher irrelevant. when you use subset_size we select randomly from
the first "subset_size" hosts in the list of hosts returned,
so if subset_size is equal to or larger than the total number of hosts it will just be a random selection
from the hosts that pass the filter/placement query.

so you want subset_size to be proportionally small (an order of magnitude or two smaller) compared to the number of
available hosts, and proportionally equivalent (within 1 order of magnitude or so) to your typical concurrent
multi-create request.

you want it to be small relative to the cloud so that the weigher remains statistically relevant, and similar to the size
of the multi-create to keep the probability of the same host being selected twice low.

>
> Laurent
>
> On Tue, May 19, 2020 at 7:33 PM Sean Mooney wrote:
>
> > On Tue, 2020-05-19 at 18:23 -0400, Laurent Dumont wrote:
> > > Hey everyone,
> > >
> > > We are seeing a pretty consistent issue with Nova/Scheduler where some
> > > instance creations are hitting the "max_attempts" limits of the
> > > scheduler.
> >
> > Well, the answer you are not going to like is that nova is working as expected and
> > we expect this to happen when you use multi-create. placement helps reduce the issue but
> > there are some fundamental issues with how we do retries that make this hard to
> > fix.
> >
> > im not going to go into the details right now as it's not helpful, but
> > we have had queries about this from customers in the past so fortunately i do have some
> > recommendations i can share:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1759545#c8
> > well, now that i have made the comment public i can :)
> >
> >
> > > Env : Red Hat Queens
> > > Computes : All the same hardware and specs (even weight throughout)
> > > Nova : Three nova-schedulers
> > >
> > > This can be due to two different factors (from what we've seen) :
> > >
> > > - Anti-affinity rules are getting triggered during the creation (two
> > > claims are done within a few milliseconds on the same compute) which counts
> > > as a retry (we've seen this when spawning 40+ VMs in a single server group
> > > with maybe 50-55 computes - or even less, 14 instances on 20ish computes).
> >
> > yep, the only way to completely avoid this issue on queens (and, depending on
> > what features you are using, on master)
> > is to boot the vms serially, waiting for each vm to spawn.
> >
> > > - We've seen another case where MEMORY_MB becomes an issue (we are
> > > spinning new instances in the same host-aggregate where VMs are already
> > > running. Only one VM can run per compute but there are no anti-affinity
> > > groups to force that between the two deployments. The resource
> > > requirements prevent anything else from getting spun on those).
> > > - The logs look like the following :
> > > - Unable to submit allocation for instance
> > > 659ef90e-33b8-42a9-9c8e-fac87278240d (409 {"errors": [{"status": 409,
> > > "request_id": "req-429c2734-2f2d-4d2d-82d1-fa4ebe12c991",
> > > "detail": "There was a conflict when trying to complete your request.\n\n Unable to allocate
> > > inventory: Unable to create allocation for 'MEMORY_MB' on resource provider
> > > '35b78f3b-8e59-4f2f-8cad-eaf116b7c1c7'.
The requested amount would exceed
> > > the capacity. ", "title": "Conflict"}]}) / Setting instance to ERROR
> > > state.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted
> > > all hosts available for retrying build failures for instance
> > > f6d06cca-e9b5-4199-8220-e3ff2e5c2a41.
> >
> > in this case you are racing with other instances for that host.
> > basically, when doing a multi-create, if any vm fails to boot it will go to the next
> > host in the alternate host list and try to create an allocation against the first host in that list.
> >
> > however, when the alternate host list was created none of the vms had been spawned yet.
> > by the time the retry arrives at the conductor, one of the other vms could have been scheduled to that host, either
> > as a first choice or because that other vm retried first and won the race.
> >
> > when this happens we then try the next host in the list, where we can race again.
> >
> > since the retries happen at the cell conductor level without going back to the scheduler again, we are not going to check
> > the current status of the host using the anti-affinity filter or anti-affinity weigher during the retry, so while it was
> > valid initially it can be invalid by the time we try to use the alternate host.
> >
> > the only way to fix that is to have retries not use alternate hosts and instead have each retry go through the full scheduling
> > process, so that it can make a decision based on the current state of the cloud, not the old view.
> >
> > > - I do believe we are hitting this issue as well :
> > > https://bugs.launchpad.net/nova/+bug/1837955
> > > - In all the cases where the Stacks creation failed, one instance was
> > > left in the Build state for 120 minutes and then finally failed.
> > >
> > > From what we can gather, there are a couple of parameters that can be
> > > tweaked.
> > >
> > > 1. host_subset_size (Return X number of hosts instead of 1?)
> > > 2. randomize_allocation_candidates (Not 100% on this one)
> > > 3. shuffle_best_same_weighed_hosts (Return a random one of X number of
> > > computes if they are all equal (instead of the same list for all
> > > scheduling requests))
> > > 4. max_attempts (how many times the Scheduler will try to fit the
> > > instance somewhere)
> > >
> > > We've already raised "max_attempts" to 5 from the default of 3 and will
> > > raise it further. That said, what are the recommendations for the rest of
> > > the settings? We are not exactly concerned with stacking vs spreading (but
> > > that's always nice) of the instances but rather making sure deployments
> > > fail because of real reasons and not just because Nova/Scheduler keeps
> > > stepping on its own toes.
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1759545#c8
> > has some suggestions,
> > but tl;dr it should be safe to set max_attempts=10 if you set
> > subset_size=15 and shuffle_best_same_weighed_hosts=true.
> > that said, i really would not put max_attempts over 10; max_attempts=5 should be more than enough.
> > subset_size=15 is a little bit arbitrary. the best value will depend on the typical size of your deployment and the
> > size of your cloud. randomize_allocation_candidates helps if and only if you have limited the number of allocation
> > candidates returned by placement to a subset of your cloud hosts.
> >
> > e.g. if you set the placement allocation candidate limit to 10 for a cloud with 100 hosts then you should set
> > randomize_allocation_candidates=true so that you do not get a bias that will pack hosts based on the natural db order.
> > the default limit for allocation candidates is 1000, so unless you have more than 1000 hosts or have changed that limit you
> > do not need to set this.
> >
> > > Thanks!
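Sean's description of how subset_size interacts with the weigher can be sketched as a toy simulation. This is illustrative only; it is not nova's actual FilterScheduler code, just the selection behaviour described above:

```python
import random

def pick_host(weighed_hosts, subset_size):
    """Pick randomly among the top `subset_size` entries of the
    weigher-sorted host list (best host first), mimicking the
    host_subset_size behaviour described above."""
    return random.choice(weighed_hosts[:subset_size])

hosts = ["compute0", "compute1", "compute2", "compute3"]  # sorted by weight

# subset_size=1 always honours the weigher and returns the best host.
assert all(pick_host(hosts, 1) == "compute0" for _ in range(100))

# A subset_size >= the number of hosts degenerates into a uniform random
# choice, so the weigher ordering no longer influences placement at all.
random.seed(42)
picks = {pick_host(hosts, 10) for _ in range(200)}
print(sorted(picks) == hosts)
```

This is why a subset_size an order of magnitude or two below the host count keeps the weigher statistically relevant, while a subset_size near the multi-create size keeps two instances from landing on the same host.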
> > > > From kendall at openstack.org Tue May 19 16:59:52 2020 From: kendall at openstack.org (Kendall Waters) Date: Tue, 19 May 2020 11:59:52 -0500 Subject: Virtual PTG Schedule Live & Registration Reminder Message-ID: <620C17A8-DAD9-4B9F-A317-59127A12D358@openstack.org> Hey everyone, The June 2020 Project Teams Gathering is right around the corner! The official schedule has now been posted on the PTG website [1], the PTGbot has been updated[2], and we have also attached it to this email. Friendly reminder, if you have not already registered, please do so [3]. It is important that we get everyone to register for the event as this is how we will contact you about tooling information and other event details. Please let us know if you have any questions. Cheers, The Kendalls (diablo_rojo & wendallkaters) [1] www.openstack.org/ptg [2] http://ptg.openstack.org/ptg.html [3] https://virtualptgjune2020.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PTG2020_Schedule (2).pdf Type: application/pdf Size: 601622 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From wilkers.steve at gmail.com Tue May 19 17:09:26 2020 From: wilkers.steve at gmail.com (Steve Wilkerson) Date: Tue, 19 May 2020 12:09:26 -0500 Subject: [openstack-helm] Stepping down as core reviewer Message-ID: Hey everyone, I hope you're all staying safe and well. I switched jobs a few months back that took me in a different direction from my previous employer, and as a result I've not had the time to remain involved with the OpenStack community (especially as my day job no longer involves working with OpenStack). I think it's best that I step down from the openstack-helm core reviewer team, as I can't realistically make the time to do my due diligence in providing thorough, useful code reviews. 
I greatly appreciate the opportunities that working with the OpenStack community at large has provided me over the past 5 years. Cheers, srwilkers -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed May 20 21:45:17 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 20 May 2020 23:45:17 +0200 Subject: [neutron][release] Missing tarballs for networking-mlnx, networking-l2gw and vmware-nsx In-Reply-To: References: Message-ID: <20200520214517.7xtwpga6gertye46@skaplons-mac> Hi, Actually I'm not really sure what are the criteria of what should be in the "openstack/" and what in "x/" namespace. I know that in "openstack/" should be an official OpenStack projects but how to really be sure what is such official project and what no, I don't know. Speaking about those projects mentioned in the thread, networking-mlnx and vmware-nsx are both in the "x/" namespace but networking-l2gw is in the "openstack/" namespace. None of them is Neutron stadium project. There is also many other projects with "networking-" in the name in the "openstack/" namespace. For example: networking-baremetal networking-generic-switch which Ironic team is taking care of. But we have also projects like: networking-onos networking-calico and probably others. Those projects are not in deliverables in https://opendev.org/openstack/releases/src/branch/master (at least not for Train or Ussuri releases). So should we keep them in "openstack/" namespace or move to "x/"? On Wed, May 20, 2020 at 02:33:54PM +0200, Lajos Katona wrote: > Hi, > Back to the question of moving l2gw under 'x' from openstack. > I think it is a valid option which we discussed with networking team. > From neutron perspective as networking-l2gw (and > networking-l2gw-tempest-plugin) is not a neutron stadium project it is not > monitored by them. > As I see like tap-as-a-service and other small networking (but not stadium) > projects it can leave under 'x'. 
> > Regards > Lajos > > Sean McGinnis ezt írta (időpont: 2020. máj. 19., K, > 17:51): > > > > > >> There are no tarballs published for ussuri networking-mlnx or > > >> vmware-nsx. The most recently published tarballs were in February. Is > > >> this expected? networking-l2gw also has no ussuri tarballs, although > > >> master was updated recently. > > > Thanks to Yatin Karel for pointing out that some projects now publish > > > under x/ namespace. l2gw does not though, so I'm not sure about that > > > one. > > > > It looks like networking-l2gw is still under the openstack/ namespace: > > > > https://opendev.org/openstack/networking-l2gw > > > > Though it looks like it shouldn't be since it is not under official > > governance: > > > > > > https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml > > > > No official release team releases have been done for this repo. It looks > > like there was a 15.0.0 tag pushed by the team last October: > > > > https://opendev.org/openstack/networking-l2gw/src/tag/15.0.0 > > > > And there is a corresponding 15.0.0 tarball published for that one: > > > > > > https://tarballs.opendev.org/openstack/networking-l2gw/networking-l2gw-15.0.0.tar.gz > > > > So it looks like this repo either needs to be added to governance, and > > added to the releases deliverables, or it needs to move to the x/ > > namespace and have tagging managed by the team. > > > > Sean > > > > > > -- Slawek Kaplonski Senior software engineer Red Hat From iwienand at redhat.com Wed May 20 22:22:42 2020 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 21 May 2020 08:22:42 +1000 Subject: Dropping python2.7 from diskimage-builder In-Reply-To: References: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> Message-ID: <20200520222242.GA1349440@fedora19.localdomain> On Tue, May 19, 2020 at 08:13:26AM +0100, Mark Goddard wrote: > I would urge caution over dropping Python 2 from branchless projects. 
> We tried it for Tenks, and within weeks had created a branch from the > last release supporting Python 2 for bug fixes. I certainly get that but it has to happen one day. The gate is broken because we do use requirements from OpenStack and they have moved on to be Python 3 only. Since dib is currently on version 2, perhaps we should tag as version 3 and then leave ourselves the option of a branch from that point, if we find we actually need it? [1] https://review.opendev.org/728889 From fungi at yuggoth.org Wed May 20 23:18:37 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 May 2020 23:18:37 +0000 Subject: [neutron][release] Missing tarballs for networking-mlnx, networking-l2gw and vmware-nsx In-Reply-To: <20200520214517.7xtwpga6gertye46@skaplons-mac> References: <20200520214517.7xtwpga6gertye46@skaplons-mac> Message-ID: <20200520231837.k5xhbatvpjq2jzfi@yuggoth.org> On 2020-05-20 23:45:17 +0200 (+0200), Slawek Kaplonski wrote: > Actually I'm not really sure what are the criteria of what should > be in the "openstack/" and what in "x/" namespace. I know that in > "openstack/" should be an official OpenStack projects but how to > really be sure what is such official project and what no, I don't > know. "Officially part of OpenStack" in this case means it's listed in https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml at the time it was published, but also inclusion here has been considered sufficient for carrying the OpenStack name in similar ways: https://opendev.org/openstack/governance/src/branch/master/reference/sigs-repos.yaml > Speaking about those projects mentioned in the thread, > networking-mlnx and vmware-nsx are both in the "x/" namespace but > networking-l2gw is in the "openstack/" namespace. None of them is > Neutron stadium project. There is also many other projects with > "networking-" in the name in the "openstack/" namespace. 
For > example: > > networking-baremetal > networking-generic-switch > > which Ironic team is taking care of. But we have also projects > like: > > networking-onos > networking-calico > > and probably others. Those projects are not in deliverables in > https://opendev.org/openstack/releases/src/branch/master (at least > not for Train or Ussuri releases). So should we keep them in > "openstack/" namespace or move to "x/"? [...] During the great namespace migration, we did not rename(space) any repositories listed in the two files I mentioned above or any which were formerly part of OpenStack as evidenced by inclusion in this file: https://opendev.org/openstack/governance/src/branch/master/reference/legacy.yaml That decision was recorded in this resolution: https://governance.openstack.org/tc/resolutions/20190322-namespace-unofficial-projects.html (Worth noting, the "unknown/" namespace mentioned there was a placeholder, and "x/" is what the OpenDev sysadmins ultimately chose to use instead.) We announced via another resolution that any projects which were once but no longer part of OpenStack had until the end of the Train cycle to pick a new namespace for themselves: https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html I guess the question is what to do about artifacts generated for stable branches of repositories which were deliverables for earlier OpenStack coordinated releases. My take is that we've already got a concept of "partial retirement" where a project is deprecated but still receives stable branch releases for some period of time thereafter. Taking the mandatory retirement resolution into account, I think that means the best fit for a workflow is to fork continued development to another namespace outside "openstack/" namespace (whether that's "x/" or something new), and follow the current partial retirement precedent for the original copy which remains in the "openstack/" namespace. 
I agree though, this is a tough question with obvious complexities for all possible solutions. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From juliaashleykreger at gmail.com Thu May 21 03:07:14 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 20 May 2020 20:07:14 -0700 Subject: Dropping python2.7 from diskimage-builder In-Reply-To: <20200520222242.GA1349440@fedora19.localdomain> References: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> <20200520222242.GA1349440@fedora19.localdomain> Message-ID: On Wed, May 20, 2020 at 3:24 PM Ian Wienand wrote: > > On Tue, May 19, 2020 at 08:13:26AM +0100, Mark Goddard wrote: > > I would urge caution over dropping Python 2 from branchless projects. > > We tried it for Tenks, and within weeks had created a branch from the > > last release supporting Python 2 for bug fixes. > > I certainly get that but it has to happen one day. The gate is broken > because we do use requirements from OpenStack and they have moved on > to be Python 3 only. > > Since dib is currently on version 2, perhaps we should tag as version > 3 and then leave ourselves the option of a branch from that point, if > we find we actually need it? I think this is the most reasonable action to take. 
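For reference, the usual mechanism for such a drop in a branchless project (a sketch, not necessarily what diskimage-builder merged) is a `python_requires` marker in setup.cfg, so pip running under Python 2 keeps resolving the last 2.x release rather than the new Python-3-only version:

```ini
# setup.cfg (illustrative sketch)
[options]
python_requires = >=3.5
```

Combined with tagging the first Python-3-only release as 3.0.0, this leaves the door open to a 2.x maintenance branch later if one turns out to be needed.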
> > [1] https://review.opendev.org/728889 > > From iwienand at redhat.com Thu May 21 04:22:48 2020 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 21 May 2020 14:22:48 +1000 Subject: Dropping python2.7 from diskimage-builder In-Reply-To: <20200520222242.GA1349440@fedora19.localdomain> References: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> <20200520222242.GA1349440@fedora19.localdomain> Message-ID: <20200521042248.GA1361000@fedora19.localdomain> On Thu, May 21, 2020 at 08:22:42AM +1000, Ian Wienand wrote: > Since dib is currently on version 2, perhaps we should tag as version > 3 and then leave ourselves the option of a branch from that point, if > we find we actually need it? After a bit of discussion, I think [1] is ready; it keeps minimal Python 3.5 support to keep inline with Zuul/nodepool by not using constraints to test under 3.5, but then includes py36 onwards jobs as usual. Looking at bifrost, they seem to install from Zuul checkout, so just adding a branch-override tag for the last 2.0 (2.37 I guess) to the required-projects on stable branches should do it. johnsom confirmed it would be OK for octavia. Since [1] has a few dependencies anyway, I think it's probably best to force-merge [2,3,4,5] around the gate failures (they'd had review from clarkb and otherwise passing) which fixes some bits about installing the focal kernel and will mean that the last 2.X does not have any known issues building images up until the focal release. -i [1] https://review.opendev.org/728889 [2] https://review.opendev.org/727049 [3] https://review.opendev.org/727050 [4] https://review.opendev.org/726996 [5] https://review.opendev.org/725752 From mark at stackhpc.com Thu May 21 07:52:20 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 21 May 2020 08:52:20 +0100 Subject: [kolla] deprecating kolla-cli Message-ID: Hi, We previously planned to retire kolla-cli due to lack of interest, but at the last minute there was an offer to help support it. 
That support is no longer available, and we are not aware of the project having gained traction since our original decision. We are therefore adding a deprecation notice [1] to the Ussuri release notes, with a plan to remove it from governance in Victoria if there is no response to the call for help. [1] https://review.opendev.org/729855 Thanks, Mark From mark at stackhpc.com Thu May 21 08:26:58 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 21 May 2020 09:26:58 +0100 Subject: [neutron][release] Missing tarballs for networking-mlnx, networking-l2gw and vmware-nsx In-Reply-To: References: Message-ID: On Tue, 19 May 2020 at 15:30, Mark Goddard wrote: > > Hi, > > There are no tarballs published for ussuri networking-mlnx or > vmware-nsx. The most recently published tarballs were in February. Is > this expected? networking-l2gw also has no ussuri tarballs, although > master was updated recently. It appears that networking-ansible made a 4.0.0 release from their stable/ussuri branch, but this has not been published to https://tarballs.opendev.org/x/networking-ansible/. The same is true for the 16.0.0 release of networking-mlnx: https://tarballs.opendev.org/x/networking-mlnx/. > > Thanks, > Mark From doka.ua at gmx.com Thu May 21 11:24:27 2020 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Thu, 21 May 2020 14:24:27 +0300 Subject: [nova][cinder][linstor] VM creation failed with invalid libvirt xml Message-ID: <7a7225ac-a37e-0661-6f6a-5eb24169490f@gmx.com> Hi colleagues, does anybody here use Linstor as backend for Openstack? 
I can not launch VM (synopsis below) while everything looks fine until the last step :-) Few words about Openstack's storage configuration: - controllers: cinder-api, cinder-scheduler, cinder-volume (modified with latest driver from Linstor's git), linstor-client, python-linstor - compute nodes: linstor-controller, linstor-satellite, linstor-client, python-linstor; node types - Combined - cinder.conf[1] - volume-type registered with corresponding properties[2] I'm able to manipulate volumes - create/delete[3] and every existing volume has the corresponding drbd/linstor repr[4] and LVM LV[5] on the compute nodes, so, from storage point of view, it looks as correct and ready to use. The problem is when I try to create VM using $ openstack server create --flavor [...] --network [...] --volume 15c4fa98-76a5-44d6-9701-c89a8a848b82 fearst Nova fails[6] because of invalid VM's libvirt XML definition (note "source dev='None'" below): /usr/bin/qemu-system-x86_64 15c4fa98-76a5-44d6-9701-c89a8a848b82
I do not have another working DRBD/Linstor/Openstack installation and can't even imagine, where to look into the problem - whether it is in Nova, Cinder or in Linstor configuration itself. I will appreciate if anybody can give me some guidance on where to seek for the solution to the problem. In my lab I'm using Openstack Train on Ubuntu 18.04.4: nova-compute 2:20.1.1-0ubuntu1~cloud0 nova-compute-kvm 2:20.1.1-0ubuntu1~cloud0 nova-compute-libvirt 2:20.1.1-0ubuntu1~cloud0 nova-api 2:20.1.1-0ubuntu1~cloud0 nova-common 2:20.1.1-0ubuntu1~cloud0 nova-conductor 2:20.1.1-0ubuntu1~cloud0 nova-novncproxy 2:20.1.1-0ubuntu1~cloud0 nova-scheduler 2:20.1.1-0ubuntu1~cloud0 cinder-api 2:15.1.0-0ubuntu1~cloud0 cinder-common 2:15.1.0-0ubuntu1~cloud0 cinder-scheduler 2:15.1.0-0ubuntu1~cloud0 cinder-volume 2:15.1.0-0ubuntu1~cloud0 python3-cinder 2:15.1.0-0ubuntu1~cloud0 python3-cinderclient 1:5.0.0-0ubuntu2~cloud0 python3-nova 2:20.1.1-0ubuntu1~cloud0 python3-novaclient 2:15.1.0-0ubuntu2~cloud0 Thank you! ==================================== [1] cinder.conf [DEFAULT] enabled_backends = linstor default_volume_type = linstor [linstor] storage_availability_zone = nova volume_backend_name = linstor volume_driver = cinder.volume.drivers.linstordrv.LinstorDrbdDriver linstor_autoplace_count = 2 linstor_default_volume_group_name=DfltRscGrp linstor_default_uri=linstor://runner linstor_default_storage_pool_name=drbdpool linstor_default_resource_size=1 linstor_volume_downsize_factor=4096 where 'runner' is virtual ip of linstor-controller, protected by pacemaker. 
==================================== [2] openstack volume type show linstor +--------------------+--------------------------------------+ | Field | Value | +--------------------+--------------------------------------+ | access_project_ids | None | | description | None | | id | d2025962-503a-4f37-93bd-b766bb346a42 | | is_public | True | | name | linstor | | properties | volume_backend_name='linstor' | | qos_specs_id | None | +--------------------+--------------------------------------+ ==================================== [3] openstack volume show fearst +--------------------------------+-----------------------------------------------------+ | Field | Value | +--------------------------------+-----------------------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | true | | consistencygroup_id | None | | created_at | 2020-05-21T08:42:11.000000 | | description | None | | encrypted | False | | id | 15c4fa98-76a5-44d6-9701-c89a8a848b82 | | migration_status | None | | multiattach | False | | name | fearst | | os-vol-host-attr:host | ctrl1 at linstor#linstor | | os-vol-mig-status-attr:migstat | None | | os-vol-mig-status-attr:name_id | None | | os-vol-tenant-attr:tenant_id | 7acec404393344cbabde07a22bbe6b3f | | properties | | | replication_status | None | | size | 5 | | snapshot_id | None | | source_volid | None | | status | available | | type | linstor | | updated_at | 2020-05-21T08:42:18.000000 | | user_id | 1048a19e94234451a61c3d9d46907ceb | | volume_image_metadata | {'signature_verified': 'False', | | | 'image_id': '15e992e8-a601-44cd-a2f0-8f34588a4d18',| | | 'image_name': 'cirros', | | | 'checksum': '1d3062cd89af34e419f7100277f38b2b', | | | 'container_format': 'bare', | | | 'disk_format': 'qcow2', | | | 'min_disk': '0', | | | 'min_ram': '0', | | | 'size': '16338944'} | +--------------------------------+-----------------------------------------------------+ ==================================== [4] linstor r l 
╭───────────────────────────────────────────────────────────────────────────────────╮ ┊ ResourceName ┊ Node ┊ Port ┊ Usage ┊ Conns ┊ State ┊ ╞═══════════════════════════════════════════════════════════════════════════════════╡ ┊ CV_15c4fa98-76a5-44d6-9701-c89a8a848b82 ┊ cmp1 ┊ 7000 ┊ Unused ┊ Ok ┊ UpToDate ┊ ┊ CV_15c4fa98-76a5-44d6-9701-c89a8a848b82 ┊ cmp2 ┊ 7000 ┊ Unused ┊ Ok ┊ UpToDate ┊ ╰───────────────────────────────────────────────────────────────────────────────────╯ Note, that cmp3 is arbiter and has no 'by-disk' representation: root at cmp1:~# ls -lR /dev/drbd* brw-rw---- 1 root disk 147, 1000 May 21 11:42 /dev/drbd1000 /dev/drbd/by-disk/sds: lrwxrwxrwx 1 root root 17 May 21 11:42 CV_15c4fa98-76a5-44d6-9701-c89a8a848b82_00000 -> ../../../drbd1000 /dev/drbd/by-res/CV_15c4fa98-76a5-44d6-9701-c89a8a848b82: lrwxrwxrwx 1 root root 17 May 21 11:42 0 -> ../../../drbd1000 root at cmp2:~# ls -lR /dev/drbd* brw-rw---- 1 root disk 147, 1000 May 21 11:42 /dev/drbd1000 /dev/drbd/by-disk/sds: lrwxrwxrwx 1 root root 17 May 21 11:42 CV_15c4fa98-76a5-44d6-9701-c89a8a848b82_00000 -> ../../../drbd1000 /dev/drbd/by-res/CV_15c4fa98-76a5-44d6-9701-c89a8a848b82: lrwxrwxrwx 1 root root 17 May 21 11:42 0 -> ../../../drbd1000 root at cmp3:~# ls -lR /dev/drbd* brw-rw---- 1 root disk 147, 1000 May 21 11:42 /dev/drbd1000 /dev/drbd/by-res/CV_15c4fa98-76a5-44d6-9701-c89a8a848b82: lrwxrwxrwx 1 root root 17 May 21 11:42 0 -> ../../../drbd1000 ==================================== [5] root at cmp1:~# lvdisplay [ ... 
]
  --- Logical volume ---
  LV Path                /dev/sds/CV_15c4fa98-76a5-44d6-9701-c89a8a848b82_00000
  LV Name                CV_15c4fa98-76a5-44d6-9701-c89a8a848b82_00000
  VG Name                sds
  LV UUID                jGzcIQ-yRpH-RKxP-ExsH-JSn6-kTFn-8kpKYO
  LV Write Access        read/write
  LV Creation host, time cmp1, 2020-05-21 11:42:14 +0300
  LV Pool name           thin
  LV Status              available
  # open                 2
  LV Size                5.00 GiB
  Mapped size            0.02%
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

root at cmp2:~# lvdisplay
[ ... ]
  --- Logical volume ---
  LV Path                /dev/sds/CV_15c4fa98-76a5-44d6-9701-c89a8a848b82_00000
  LV Name                CV_15c4fa98-76a5-44d6-9701-c89a8a848b82_00000
  VG Name                sds
  LV UUID                8pVwMo-e3ed-Sy0w-06IZ-M7Mh-YVIL-BtKZEG
  LV Write Access        read/write
  LV Creation host, time cmp2, 2020-05-21 11:42:14 +0300
  LV Pool name           thin
  LV Status              available
  # open                 2
  LV Size                5.00 GiB
  Mapped size            0.02%
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

====================================
[6] Traceback of nova-compute:

3304 ERROR nova.compute.manager [] Traceback (most recent call last):
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2419, in _build_and_run_instance
3304 ERROR nova.compute.manager []     block_device_info=block_device_info)
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3474, in spawn
3304 ERROR nova.compute.manager []     power_on=power_on)
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6261, in _create_domain_and_network
3304 ERROR nova.compute.manager []     destroy_disks_on_failure)
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
3304 ERROR nova.compute.manager []     self.force_reraise()
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
3304 ERROR nova.compute.manager []     six.reraise(self.type_, self.value, self.tb)
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
3304 ERROR nova.compute.manager []     raise value
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6230, in _create_domain_and_network
3304 ERROR nova.compute.manager []     post_xml_callback=post_xml_callback)
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6164, in _create_domain
3304 ERROR nova.compute.manager []     guest.launch(pause=pause)
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 143, in launch
3304 ERROR nova.compute.manager []     self._encoded_xml, errors='ignore')
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
3304 ERROR nova.compute.manager []     self.force_reraise()
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
3304 ERROR nova.compute.manager []     six.reraise(self.type_, self.value, self.tb)
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
3304 ERROR nova.compute.manager []     raise value
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 138, in launch
3304 ERROR nova.compute.manager []     return self._domain.createWithFlags(flags)
3304 ERROR nova.compute.manager []   File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 190, in doit
3304 ERROR nova.compute.manager []     result = proxy_call(self._autowrap, f, *args, **kwargs)
3304 ERROR nova.compute.manager []   File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 148, in proxy_call
3304 ERROR nova.compute.manager []     rv = execute(f, *args, **kwargs)
3304 ERROR nova.compute.manager []   File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 129, in execute
3304 ERROR nova.compute.manager []     six.reraise(c, e, tb)
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
3304 ERROR nova.compute.manager []     raise value
3304 ERROR nova.compute.manager []   File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 83, in tworker
3304 ERROR nova.compute.manager []     rv = meth(*args, **kwargs)
3304 ERROR nova.compute.manager []   File "/usr/lib/python3/dist-packages/libvirt.py", line 1110, in createWithFlags
3304 ERROR nova.compute.manager []     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
3304 ERROR nova.compute.manager [] libvirt.libvirtError: Cannot access storage file 'None': No such file or directory

-- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From root.mch at gmail.com Thu May 21 14:07:45 2020 From: root.mch at gmail.com (İzzettin Erdem) Date: Thu, 21 May 2020 17:07:45 +0300 Subject: [Heat] Heat Stack Create Authorization Failed In-Reply-To: References: Message-ID: Hi David, I inspected keystone and heat but there is not any wrong configuration at these services. I think Sahara causes this, because I can create heat stack manually and Murano service can create too. I shared the latest sahara-engine journal below. http://paste.openstack.org/show/793840/ Thanks, İzzettin David Peacock , 15 May 2020 Cum, 20:22 tarihinde şunu yazdı: > Hi izzettin, > > That seems pretty fundamental; are you sure you're trying to authenticate > this with the correct user? I'd be checking keystone configuration and > logs to see what's going on.
> > Thanks, > David > > On Fri, May 15, 2020 at 10:27 AM İzzettin Erdem > wrote: > >> Hello everyone, >> >> When I launch a cluster on Sahara, it gives a heat stack authorization >> error. I also use Murano service and it is working with heat. I could not >> find the solution and I discuss this error with the Sahara-devel team. They >> are also searching for this. Could you help me, please? >> >> Both the error log of Sahara and Heat are below. >> >> Sahara-engine: >> http://paste.openstack.org/show/793671/ >> >> Heat-engine: >> http://paste.openstack.org/show/793673/ >> >> Thanks. Regards. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yumeng_bao at yahoo.com Thu May 21 14:25:35 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Thu, 21 May 2020 22:25:35 +0800 Subject: [ptg][cyborg][neutron][nova]Smartnic support integration via cyborg neutron and nova? - Cyborg Pre-PTG Meeting References: Message-ID: Hi, As agreed on IRC, tomorrow we will have a cyborg Pre-PTG Zoom meeting. Shaohe Feng from Intel will introduce SmartNIC support from the following aspects: * the cyborg-neutron-nova integration workflow when creating a server with a SmartNIC port that is managed by cyborg * what changes need to be done in nova and neutron to support the SmartNIC feature? Please find the Pre-PTG meeting link below: Zoom URL: https://us04web.zoom.us/j/9575741145?pwd=cVZydHl6blYzTGtNVlU4QU96ZWxiZz09 Meeting ID: 957 574 1145 Password: 9XhFhK The reserved meeting time is one and a half hours in total, divided into three time slots (if that is not enough, we can extend by another half-hour slot): * May 22 Friday 2:30 UTC - 3:00 UTC * May 22 Friday 3:00 UTC - 3:30 UTC * May 22 Friday 3:30 UTC - 4:00 UTC Please be aware: the three slots use the same URL, same Meeting ID and same Password.
This means that when a slot's time runs out and you're kicked out of Zoom, you will need to log in to the same Zoom URL again to join the next time slot. (There is also a possibility that the re-login won't be needed; this depends on the number of people.) For others interested but not quite familiar with this topic, here’s some background information: Cyborg, which serves as a general-purpose management framework for acceleration resources, is trying to support various types of accelerators (FPGA, GPU, SPDK and so on). We continuously get questions from operators asking whether cyborg supports SmartNICs, and this motivated us to support them. We already had a discussion on SmartNICs with the neutron team at the Shanghai Summit[1] in 2019. We are now planning to continue discussing SmartNIC support integration at the coming Virtual PTG and try to move forward in the Victoria release. Cyborg will discuss this topic internally first, and then later with nova and neutron in a PTG cross-project session[2][3]. [1] https://etherpad.opendev.org/p/Shanghai-Neutron-Cyborg-xproj [2] https://etherpad.opendev.org/p/nova-victoria-ptg [3] https://etherpad.opendev.org/p/cyborg-victoria-goals Regards, Yumeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Thu May 21 15:34:14 2020 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 21 May 2020 11:34:14 -0400 Subject: [Heat] Heat Stack Create Authorization Failed In-Reply-To: References: Message-ID: <87676fa3-fad6-5da3-a75d-01ad82b3b4fc@redhat.com> On 21/05/20 10:07 am, İzzettin Erdem wrote: > Hi David, > > I inspected keystone and heat but there is not any wrong configuration > at these services. I think Sahara causes this, because I can create heat > stack manually and Murano service can create too. I shared the latest > sahara-engine journal below. > > http://paste.openstack.org/show/793840/ The resource that's failing is a WaitConditionHandle.
This relies on a user created (by Heat) in a separate domain (known, confusingly, as the stack_user).[1] Heat is failing to get a token for this user using the username and password that it assigned when it created the user. This suggests a configuration problem with Heat, or with the setup of the domain that Heat needs in Keystone.[2] You can reproduce outside of Sahara by creating a stack with an OS::Heat::WaitConditionHandle resource. cheers, Zane. [1] https://hardysteven.blogspot.com/2014/04/heat-auth-model-updates-part-2-stack.html [2] Prerequisite #5 in one of these install guides: https://docs.openstack.org/heat/latest/install/install.html > Thanks, > İzzettin > > David Peacock >, 15 May > 2020 Cum, 20:22 tarihinde şunu yazdı: > > Hi izzettin, > > That seems pretty fundamental; are you sure you're trying to > authenticate this with the correct user?  I'd be checking keystone > configuration and logs to see what's going on. > > Thanks, > David > > On Fri, May 15, 2020 at 10:27 AM İzzettin Erdem > wrote: > > Hello everyone, > > When I launch a cluster on Sahara, it gives a heat stack > authorization error. I also use Murano service and it is working > with heat. I could not find the solution and I discuss this > error with the Sahara-devel team. They are also searching for > this. Could you help me, please? > > Both the error log of Sahara and Heat are below. > > Sahara-engine: > http://paste.openstack.org/show/793671/ > > Heat-engine: > http://paste.openstack.org/show/793673/ > > Thanks. Regards. > From jeremyfreudberg at gmail.com Thu May 21 17:15:02 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Thu, 21 May 2020 13:15:02 -0400 Subject: [Heat] Heat Stack Create Authorization Failed In-Reply-To: <87676fa3-fad6-5da3-a75d-01ad82b3b4fc@redhat.com> References: <87676fa3-fad6-5da3-a75d-01ad82b3b4fc@redhat.com> Message-ID: On Thu, May 21, 2020 at 11:40 AM Zane Bitter wrote: > > The resource that's failing is a WaitConditionHandle. > [...] 
> This suggests a configuration problem with Heat, > [...] I can confirm. Earlier today I was helping İzzettin with this issue and we found that disabling wait conditions on the Sahara side leads to a successful Heat stack, and the creation of the Sahara cluster can continue. From whayutin at redhat.com Thu May 21 17:38:33 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 21 May 2020 11:38:33 -0600 Subject: review of tripleo-ci-centos-8-standalone-on-multinode-ipa Message-ID: Greetings, Please review the following new standalone + ipa deployment [1]. The keystone folks would like this deployment to run by default on all THT changes vs. the current set of files [2]. Looking for any strong objections to running the job on any THT change, and/or more files to target the job against in [2]. If you need specific information about the job please contact rlandy @ redhat pojadhav @ redhat Thanks [1] https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-8-standalone-on-multinode-ipa [2] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/zuul.d/layout.yaml#L76-L82 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu May 21 18:06:38 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 21 May 2020 19:06:38 +0100 Subject: [kolla] Kolla klub meeting In-Reply-To: References: Message-ID: On Tue, 19 May 2020 at 16:48, Mark Goddard wrote: > > Hi, > > Just a reminder that we will host a project onboarding session for the > next Kolla Klub meeting on Thursday 21st May. We'll cover a variety of > topics on the many different ways you can contribute to the > project. Gaël Therond will also fill us in on some of the questions > put to him since the last meeting about his case studies. Thanks to everyone who joined today, especially Radosław for hosting an excellent onboarding session.
For anyone who couldn't make it, or had to drop off before the end of the meeting, or would like to recap anything covered, the meeting was recorded and can be watched via https://drive.google.com/file/d/1UuCLfW2IZjr7dlZt92VKxrzfiyApGSai/view?usp=sharing. We will skip the next kolla klub meeting slot in two weeks' time, since it clashes with the PTG (which you are all welcome to join: https://etherpad.opendev.org/p/kolla-victoria-ptg). The next meeting will be 18th June, and we'll have a short recap of the PTG discussions, followed by some open discussion around particular topics (e.g. identity management). Please propose topics on the agenda: https://docs.google.com/document/d/1EwQs2GXF-EvJZamEx9vQAOSDB5tCjsDCJyHQN5_4_Sw Thanks, Mark From skaplons at redhat.com Thu May 21 20:29:05 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 21 May 2020 22:29:05 +0200 Subject: [neutron] Drivers meeting 22.05.2020 Message-ID: <20200521202905.c52r4fjwx2mazy7v@skaplons-mac> Hi, Due to lack of agenda for the next drivers meeting, let's cancel it this week. See you all next week. BTW. Please maybe check Nate's email [1] - this isn't exactly an RFE but something which IMO the drivers team should check too :) [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014820.html -- Slawek Kaplonski Senior software engineer Red Hat From peter.matulis at canonical.com Thu May 21 20:58:59 2020 From: peter.matulis at canonical.com (Peter Matulis) Date: Thu, 21 May 2020 16:58:59 -0400 Subject: [charms] OpenStack Charms 20.05 release is now available Message-ID: The OpenStack Charms team is happy to announce the 20.05 charms release, introducing support for OpenStack Ussuri and Ceph Octopus on Ubuntu 18.04 LTS and Ubuntu 20.04 LTS. This release also brings several new and valuable features to the existing OpenStack Charms deployments for Queens, Rocky, Stein, Train, Ussuri and many other stable combinations of Ubuntu + OpenStack.
Please see the Release Notes for full details: https://docs.openstack.org/charm-guide/latest/2005.html == Highlights == * OpenStack Ussuri OpenStack Ussuri is now supported on Ubuntu 18.04 LTS (via UCA) and Ubuntu 20.04 LTS natively. * Ceph Octopus The Octopus release of Ceph is now supported, starting with OpenStack Ussuri. * New charms: MySQL 8 The MySQL 8 charms (mysql-innodb-cluster and mysql-router) have been promoted to supported status. These charms allow for a completely native cloud database HA cluster (no need for the hacluster charm). The legacy solution (percona-cluster charm) is not supported on Ubuntu 20.04 LTS. Migration documentation is available. * New charms: Masakari The Masakari charms (masakari and masakari-monitors) and the requisite pacemaker-remote charm have been promoted to supported status, starting with OpenStack Stein. Masakari provides automated recovery of OpenStack instances (shared storage required). * New charms: OVN The OVN charms (neutron-api-plugin-ovn, ovn-central, ovn-chassis, and ovn-dedicated-chassis) have been promoted to supported status. OVN provides virtual networking for Open vSwitch and is the preferred default for new deployments of OpenStack Ussuri. * Ceph iSCSI Ceph iSCSI gateway functionality is available as a tech preview. These gateways provide iSCSI targets backed by a Ceph cluster. == OpenStack Charms team == The OpenStack Charms team can be contacted on the #openstack-charms IRC channel on Freenode. == Thank you == Lots of thanks to the below 67 charm contributors who squashed 89 bugs, enabled support for a new release of OpenStack, improved documentation, and added exciting new functionality! 
James Page Alex Kavanagh Frode Nordahl Liam Young Peter Matulis David Ames Corey Bryant Chris MacNaughton Ghanshyam Mann Aurelien Lourot Sahid Orentino Ferdjaoui Stamatis Katsaounis Ryan Beisner inspurericzhang Tiago Pasqualini Andrew McLeod ShangXiao Edward Hope-Morley Felipe Reyes kangyufei Adam Dyess Andreas Jaeger wangfaxin Xuan Yandong Arif Ali José Pekkarinen Chris Johnston Hemanth Nakkina Alexandros Soumplis Jose Guedez Qitao Tytus Kurek Seyeong Kim Dmitrii Shcherbakov Dongdong Tao Drew Freiberger Haw Loeung Jorge Niedbalski Andrea Ieri Xav Paice Qiu Fossen Yanos Angelopoulos Syed Mohammad Adnan Karim JiaSiRui Sérgio Filipe Marques Manso Vladimir Grevtsev George Kraft Marco Filipe Moutinho da Silva Xiyue Wang Adam Dyess Jose Delarosa Alexander Balderson Facundo Ciccioli Jacek Nykis Eduardo Sousa Thobias Trevisan Rodrigo Barbieri Alejandro Santoyo Gonzalez Sean McGinnis Claudio Pisa Shuo Liu Matus Kosut Stephen Muss Camille Rodriguez zhangboye Aggelos Kolaitis Nicolas Bock -- OpenStack Charms Team From aj at suse.com Fri May 22 07:02:43 2020 From: aj at suse.com (Andreas Jaeger) Date: Fri, 22 May 2020 09:02:43 +0200 Subject: [charms] OpenStack Charms 20.05 release is now available In-Reply-To: References: Message-ID: <9fbab92c-c311-a5ed-955e-0ec6f448e56b@suse.com> On 21.05.20 22:58, Peter Matulis wrote: > The OpenStack Charms team is happy to announce the 20.05 charms > release, introducing support for OpenStack Ussuri and Ceph Octopus on > Ubuntu 18.04 LTS and Ubuntu 20.04 LTS. This release also brings > several new and valuable features to the existing OpenStack Charms > deployments for Queens, Rocky, Stein, Train, Ussuri and many other > stable combinations of Ubuntu + OpenStack. > Congrats! If you want to have the index files on docs.o.o updated for this, check http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014658.html Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 
5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From frode.nordahl at canonical.com Fri May 22 07:54:12 2020 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Fri, 22 May 2020 09:54:12 +0200 Subject: [charms] Propose Aurelien Lourot for OpenStack Charms core Message-ID: Hello all, I would like to propose adding Aurelien Lourot as core developer for the Charms project. -- Frode Nordahl From liam.young at canonical.com Fri May 22 09:01:59 2020 From: liam.young at canonical.com (Liam Young) Date: Fri, 22 May 2020 10:01:59 +0100 Subject: [charms] Propose Aurelien Lourot for OpenStack Charms core In-Reply-To: References: Message-ID: On Fri, May 22, 2020 at 8:56 AM Frode Nordahl wrote: > Hello all, > > I would like to propose adding Aurelien Lourot as core developer for > the Charms project. > +1 from me. I have worked with Aurelien a fair bit recently and it has been a pleasure; he has an excellent grasp of the code and has proven to be a great code reviewer too. > > -- > Frode Nordahl > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hello at dincercelik.com Fri May 22 09:21:55 2020 From: hello at dincercelik.com (Dincer Celik) Date: Fri, 22 May 2020 12:21:55 +0300 Subject: [neutron][operators] Message-ID: <7D4E231D-7CBA-4695-8E38-1588E736C778@dincercelik.com> Hi, You know, in the public cloud world most users prefer using VPCs (aka tenant networks for us), and when they need to connect two separate VPCs, they have VPC peering functionality. For OpenStack, we have routers, RBACs, static routes and floating IPs, but there is nothing functioning like real VPC peering. I see this was discussed at the Stein PTG[1] under L3 topics but it seems there has been no progress on it. So I would like to ask: what is the most common way to let two separate tenant networks under different accounts talk to each other? Thanks!
Dincer [1] https://etherpad.opendev.org/p/neutron-stein-ptg From alex.kavanagh at canonical.com Fri May 22 10:17:33 2020 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Fri, 22 May 2020 11:17:33 +0100 Subject: [charms] Propose Aurelien Lourot for OpenStack Charms core In-Reply-To: References: Message-ID: On Fri, May 22, 2020 at 8:56 AM Frode Nordahl wrote: > Hello all, > > I would like to propose adding Aurelien Lourot as core developer for > the Charms project. > Yup, +1 from me; his code reviews are incisive (perhaps more than I'd like, but definitely what I need!). He'll be a great addition to the team. Cheers -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri May 22 13:20:45 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 22 May 2020 08:20:45 -0500 Subject: [release] Release countdown for week R-20, May 25 - 29 Message-ID: <20200522132045.GA2109431@sm-workstation> Welcome back to the release countdown emails! These will be sent at major points in the Victoria development cycle, which should conclude with a final release on October 16, 2020. Development Focus ----------------- At this stage in the release cycle, focus should be on planning the Victoria development cycle, assessing Victoria community goals and approving Victoria specs. General Information ------------------- Victoria is a 22-week development cycle, which is already underway. In case you haven't seen it yet, please take a look over the schedule for this release: https://releases.openstack.org/victoria/schedule.html By default, the team PTL is responsible for handling the release cycle and approving release requests. This task can (and probably should) be delegated to release liaisons.
Now is a good time to review release liaison information for your team and make sure it is up to date: https://opendev.org/openstack/releases/src/branch/master/data/release_liaisons.yaml By default, all your team deliverables from the Ussuri release are continued in the Victoria series with a similar release model. If you intend to drop a deliverable, or modify its release model, please do so before the victoria-1 milestone by proposing a change to the deliverable file at: https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria Upcoming Deadlines & Dates -------------------------- Virtual Victoria PTG: June 1-5 Victoria-1 milestone: June 18 From david.ames at canonical.com Fri May 22 14:34:28 2020 From: david.ames at canonical.com (David Ames) Date: Fri, 22 May 2020 07:34:28 -0700 Subject: [charms] Propose Aurelien Lourot for OpenStack Charms core In-Reply-To: References: Message-ID: On Fri, May 22, 2020 at 12:57 AM Frode Nordahl wrote: > > Hello all, > > I would like to propose adding Aurelien Lourot as core developer for > the Charms project. A strong +1. Thanks for all the work so far, Aurelien! From tburke at nvidia.com Fri May 22 17:18:12 2020 From: tburke at nvidia.com (Tim Burke) Date: Fri, 22 May 2020 10:18:12 -0700 Subject: [swift] Erasure code hardware offloading In-Reply-To: <768ec329-a82e-af7f-9b0b-c63e0fdb977e@catalyst.net.nz> References: <768ec329-a82e-af7f-9b0b-c63e0fdb977e@catalyst.net.nz> Message-ID: On 3/18/20 2:41 PM, Mark Kirkwood wrote: > External email: Use caution opening links or attachments > > > Hi, > > Can Swift make use of an Erasure Code offload card (e.g > https://community.mellanox.com/s/article/understanding-erasure-coding-offload)? > > > > regards > > Mark > > Hi Mark, Sorry for the delay in responding; it's been a crazy couple months. Swift cannot currently make use of an EC offload card, though that's a very interesting idea. 
Currently, Swift uses liberasurecode (https://opendev.org/openstack/liberasurecode/) which allows plugins to a few different backends. I'm not sure what would be involved in trying to add support for hardware offloading; whether swift or liberasurecode or both would need to change, for example. But given recent news with my (new) employer, maybe that's something I'll be able to look into in the future ;-) Tim From tburke at nvidia.com Fri May 22 17:29:37 2020 From: tburke at nvidia.com (Tim Burke) Date: Fri, 22 May 2020 10:29:37 -0700 Subject: [swift] Rolling upgrade, any version relationships? In-Reply-To: <4286adc3-4f78-7580-fe22-1bc6fa2ce2e5@catalyst.net.nz> References: <5f55bc36-b51c-5c6d-ad0f-63a32fcba2d4@catalyst.net.nz> <4286adc3-4f78-7580-fe22-1bc6fa2ce2e5@catalyst.net.nz> Message-ID: <1e733746-66d2-6d25-3759-3ed6598bc46d@nvidia.com> On 4/14/20 6:00 PM, Mark Kirkwood wrote: > Hi all, flagging this again as it would be great to have a definite > answer to these questions! Thanks! > > On 11/03/20 3:27 pm, Mark Kirkwood wrote: >> Hi, we are looking at upgrading our 2.7.0 Swift cluster. In the past >> I've modeled this on a dev system by upgrading storage nodes one by >> one (using 2.17 as the target version). This seemed to work well - I >> deliberately left the cluster half upgraded for an extended period to >> test for any cross version weirdness (didn't see any). However I'm >> wanting to check that I have not missed something important. So my >> questions are: >> >> - If upgrading from 2.7.0 is it safe to just grab the latest version >> (e.g 2.23)? >> >> - If not is there a preferred version to jump to first? >> >> - Is it ok for the upgrade to take extended time (e.g weeks) and >> therefore be running with some new and some old storage nodes for that >> time? >> >> regards >> >> Mark >> >> > Hi Mark, It should be perfectly fine upgrading from 2.7.0 to latest. 
If possible, you'll want to upgrade backend servers (object, container, account) before proxies, though I've done many upgrades where that wasn't possible (because all nodes were running all services). We've never (to my knowledge, anyway) had anything like a checkpoint release. Having a long in-progress window for the upgrade increases the likelihood that you'll uncover some heretofore unknown upgrade bug, but it should be mostly OK. There may be some noisy logging, due to changes in the replication protocol for example, but as long as the cluster was healthy going into the upgrade, I wouldn't worry too much. Shorter is generally better, though, especially if your cluster topology requires that some proxies get upgraded before all the backend servers are upgraded. Tim From pramchan at yahoo.com Fri May 22 17:56:28 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 22 May 2020 17:56:28 +0000 (UTC) Subject: [all][interop] Please suggest capabilities to deprecate-remove and add to the OpenStack Powered Programs References: <1625228792.1113226.1590170188086.ref@mail.yahoo.com> Message-ID: <1625228792.1113226.1590170188086@mail.yahoo.com> Hi all, Please help the Interop WG to review and finalize the draft version for the next interop guidelines. https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=2020.06.json;h=7bc62929e5576519196dc4a2ea2de0627ab319ca;hb=HEAD Link for the DNS add-on: https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/dns.2020.06.json;h=848e6d800f75e1f3aed8ba19263c3d1f86f4dde6;hb=HEAD  Link for the Heat add-on: https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/orchestration.2020.06.json;h=6362abbac5756aaec24e25e5a0803f31497360f3;hb=HEAD  This guidelines is intended to cover "Stein" , "Train", "Ussuri" and "Victoria" releases of OpenStack. We request that PTLs or Core team members please confirm any updates you may want to include or exclude in link above. 
The Projects currently covered under the OpenStack Powered Programs include "Keystone", "Glance", "Nova", "Neutron", "Cinder", "Swift" and the add-on programs "Designate", "Heat". A more human-readable view of the capabilities can be found in RefStack: https://refstack.openstack.org/#/guidelines We would like to have feedback by the next Interop call on Friday - you can attend the 10 AM PDT call; refer to the etherpad - https://etherpad.opendev.org/p/interop If you have any additional topics to discuss at the PTG please add them to https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 The conference bridge will be assigned once we know the number of participants. Please add your names/topics to the interop PTG planning etherpad. Optionally reply to this email with your inputs/patches. Please register to attend the Interop WG slot scheduled for the Virtual PTG on June 1, 6-8 AM PDT / 9-11 AM EST / 14-16 UTC / 9-11 PM BJT https://www.openstack.org/ptg/ Thanks, for the Interop WG: Chair - Prakash Ramchandran, Vice Chair - Mark Voelker -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri May 22 18:38:39 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Fri, 22 May 2020 18:38:39 +0000 Subject: [all][interop] Please suggest capabilities to deprecate-remove and add to the OpenStack Powered Programs In-Reply-To: <1625228792.1113226.1590170188086@mail.yahoo.com> References: <1625228792.1113226.1590170188086.ref@mail.yahoo.com> <1625228792.1113226.1590170188086@mail.yahoo.com> Message-ID: Prakash, Why not submit a patch of the guidelines for compute, storage, and openstack for review? It will show the diffs against the previous version.
Thanks, From: prakash RAMCHANDRAN Sent: Friday, May 22, 2020 12:56 PM To: openstack-discuss at lists.openstack.org Subject: [all][interop] Please suggest capabilities to deprecate-remove and add to the OpenStack Powered Programs [EXTERNAL EMAIL] Hi all, Please help the Interop WG to review and finalize the draft version for the next interop guidelines. https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=2020.06.json;h=7bc62929e5576519196dc4a2ea2de0627ab319ca;hb=HEAD Link for the DNS add-on: https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/dns.2020.06.json;h=848e6d800f75e1f3aed8ba19263c3d1f86f4dde6;hb=HEAD Link for the Heat add-on: https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/orchestration.2020.06.json;h=6362abbac5756aaec24e25e5a0803f31497360f3;hb=HEAD This guidelines is intended to cover "Stein" , "Train", "Ussuri" and "Victoria" releases of OpenStack. We request that PTLs or Core team members please confirm any updates you may want to include or exclude in link above. The Projects currently covered under OpenStack Powered Programs include "Keystone", "Glance", "Nova", "Neutron", "Cinder", "Swift" and add-on programs "Designate", "Heat". A more human-readable view of the capabilities can be found in RefStack: https://refstack.openstack.org/#/guidelines We would like have feedback by next Interop call on Friday. - Can attend 10 AM PDT call refer etherpad - https://etherpad.opendev.org/p/interop If you have any additional topics to discuss at PTG please add them to https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 The conference Bridge will be assigned once we know the number of participants. Please add your names/topics to ether pad interop ptg planning. Optionally reply to this email with your inputs/patches. 
Please Register to attend the Interop WG slot scheduled for Virtual PTG on June 1 6-8 AM PDT / 9-11 AM EST / 14-16 UTC / 9-11 PM BJT https://www.openstack.org/ptg/ Thanks for Interop WG Chair - Prakash Ramchandran Vice Chair - Mark Voelker -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Fri May 22 20:15:23 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 22 May 2020 21:15:23 +0100 Subject: [nova][ptg] Feature Liaison In-Reply-To: References: Message-ID: On Mon, 2020-05-18 at 17:50 +0200, Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > Last cycle we introduced the Feature Liaison process [1]. I think this > is time to reflect on it. > Did it helped? > Do we need to tweak it? > > Personally for me it did not help much but I think this is a fairly low > cost process so I'm OK to keep it as is. I'm of much the same opinion. I don't think it achieved what it was expected to achieve, and I suspect people reaching out to cores on IRC effectively achieves the same purpose. It's low cost to keep this, at you say, but personally I would still remove this section. Stephen > Cheers, > gibi > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] https://review.opendev.org/#/c/685857/ > > > From stephenfin at redhat.com Fri May 22 20:34:13 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 22 May 2020 21:34:13 +0100 Subject: [nova][ptg] Runway process in Victoria In-Reply-To: <4AAJAQ.8C39QP5J2M3Z@est.tech> References: <4AAJAQ.8C39QP5J2M3Z@est.tech> Message-ID: On Mon, 2020-05-18 at 17:42 +0200, Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. 
As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > > In the last 4 cycles we used a process called runway to focus and > timebox of the team's feature review effort. However compared to the > previous cycles in ussuri we did not really keep the process running. > Just compare the length of the Log section of each etherpad > [1][2][3][4] to see the difference. So I have two questions: > > 1) Do we want to keep the process in Victoria? > > 2) If yes, how we can make the process running? > 2.1) How can we keep the runway etherpad up-to-date? > 2.2) How to make sure that the team is focusing on the reviews that are > in the runway slots? > > Personally I don't want to advertise this process for contributors if > the core team is not agreed and committed to keep the process running > as it would lead to unnecessary disappointment. I tend to use this as a way to find things to review, though I think I'd be equally well served by a gerrit dashboard that filtered on bp topics. I've stopped using it because I already do a lot of reviews and have no issue finding more but also, more importantly, because I didn't find having *my* items in a runway significantly increased reviews from anyone != mriedem. If there were sign on from every core to prioritize this (perhaps with a reminder during the weekly meeting?) then I'd embrace it again. If not though, I'd rather we dropped it than make false promises. Stephen > > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] https://etherpad.opendev.org/p/nova-runways-rocky > [2] https://etherpad.opendev.org/p/nova-runways-stein > [3] https://etherpad.opendev.org/p/nova-runways-train > [4] https://etherpad.opendev.org/p/nova-runways-ussuri > > > From stephenfin at redhat.com Fri May 22 20:37:21 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 22 May 2020 21:37:21 +0100 Subject: [nova][ptg] Can we close old bugs? 
In-Reply-To: References: Message-ID: <9a3fd36ccce34740bd41f4e231eab77f4d21cb7e.camel@redhat.com> On Mon, 2020-05-18 at 18:11 +0200, Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude > it before the PTG] > > We have more than 800 open bugs in nova [1] and the oldest is 8 > years old. > Can we close old bugs? Yes. > If yes, what would be the closing criteria? Age and status? Age is probably easiest. I would prefer to keep those with an open, not -2'd review around, since that implies there might be something to pick up, but other than that, simple age will do. You're thinking something similar. > Personally I would close every bug that is not updated in the last 3 > years and not in INPROGRESS state. This is very conservative. 18 months would be more than enough, IMO. As Sean has said elsewhere, we have other bugs in Bugzilla and we're quite aggressive in closing those if we're realistically not going to work on a resolution any time soon. Stephen > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] > https://bugs.launchpad.net/nova/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=INCOMPLETE_WITH_RESPONSE > > > From stephenfin at redhat.com Fri May 22 20:51:21 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 22 May 2020 21:51:21 +0100 Subject: [nova][ptg] Documentation in nova Message-ID: <02e32e11204fb4e91979552c5bcda05bda2dd148.camel@redhat.com> Hi, [This is a topic from the PTG etherpad [0]. As the PTG time is intentionally kept short, let's try to discuss it or even conclude it before the PTG] Our documentation in nova is suffering from bit rot, the ongoing effects of the documentation migration during Pike (I think), and general lack of attention. 
I've been working to tackle this but progress has been very slow. I suggested this a couple of PTGs ago, but once again I'd like to explore going on a solo run with these by writing and self-approving (perhaps after an agreed interval) *multiple* large doc refactors. I've left some notes below, copied from the Etherpad, but in summary I believe this is the only realistic way we will ever be able to fix our documentation. Cheers, Stephen [0] https://etherpad.opendev.org/p/nova-victoria-ptg --- Documentation reviews are appreciated but are generally seen as low priority. See: * https://review.opendev.org/667165 (docs: Rewrite quotas documentation) * https://review.opendev.org/667133 (docs: Rewrite host aggregate, availability zone docs) * https://review.opendev.org/664396 (docs: Document how to revert, confirm a cold migration) * https://review.opendev.org/635243 (docs: Rework the PCI passthrough guides) * https://review.opendev.org/640730 (docs: Rework all things metadata'y) * https://review.opendev.org/625878 (doc: Rework 'resize' user doc) * ... I (stephenfin) want permission to iterate on documentation and merge unilaterally unless someone expresses a clear interest Pros: * Documentation gets better * I can iterate *much* faster Cons: * Possibility of mistakes or "bad documentation is worse than no documentation" * Counterpoint: Our documentation is already bad/wrong and is getting worse due to bitrot. * Better is subjective * Counterpoints: The general structure of the patches I listed above was accepted in each case, and I have other projects I have authored to point to (e.g. patchwork.readthedocs.io/) * Merge conflicts * Counterpoint: It's docs. Just drop these (non-functional) hunks if it's awkward.
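For finding changes like these, a Gerrit search along the following lines lists the open nova changes that touch the docs tree (the filter is illustrative and may need adjusting; `file:` takes an anchored regular expression per Gerrit's search operators):

```text
status:open project:openstack/nova file:"^doc/.*"
```

The same query can be saved as a section in a Gerrit dashboard so docs patches stay visible alongside feature reviews.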
From openstack at fried.cc Fri May 22 20:56:37 2020 From: openstack at fried.cc (Eric Fried) Date: Fri, 22 May 2020 15:56:37 -0500 Subject: [nova][ptg] Feature Liaison In-Reply-To: References: Message-ID: > I don't think it achieved what it was > expected to achieve +1 fwiw. This was an attempt to funnel new/inexperienced contributors toward getting appropriate help from old/experienced contributors, so they didn't get ignored and discouraged. It would still be nice to figure out a way to make that happen -- ideally old hands actively taking initiative to mentor new folks -- but yeah, this wasn't it. So scrap it. efried . From neil at tigera.io Fri May 22 21:35:14 2020 From: neil at tigera.io (Neil Jerram) Date: Fri, 22 May 2020 22:35:14 +0100 Subject: [keystone][devstack][openstackclient] Problem with Keystone setup in stable/ussuri devstack install In-Reply-To: References: Message-ID: Here's the traceback for why python-openstackclient can't import its identity plugin: Traceback (most recent call last): File "/opt/stack/python-openstackclient/openstackclient/common/clientmanager.py", line 151, in get_plugin_modules __import__(ep.module_name) File "/opt/stack/python-openstackclient/openstackclient/identity/client.py", line 18, in from keystoneclient.v2_0 import client as identity_client_v2 File "/usr/local/lib/python3.6/dist-packages/keystoneclient/v2_0/__init__.py", line 1, in from keystoneclient.v2_0.client import Client # noqa File "/usr/local/lib/python3.6/dist-packages/keystoneclient/v2_0/client.py", line 21, in from keystoneclient import httpclient File "", line 1020, in _handle_fromlist File "/usr/local/lib/python3.6/dist-packages/keystoneclient/__init__.py", line 72, in __getattr__ return importlib.import_module('keystoneclient.%s' % name) File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/usr/local/lib/python3.6/dist-packages/keystoneclient/httpclient.py", line 43, 
in import keyring File "/usr/lib/python3/dist-packages/keyring/__init__.py", line 3, in from .core import (set_keyring, get_keyring, set_password, get_password, File "/usr/lib/python3/dist-packages/keyring/core.py", line 153, in init_backend() File "/usr/lib/python3/dist-packages/keyring/core.py", line 66, in init_backend keyrings = filter(limit, backend.get_all_keyring()) File "/usr/lib/python3/dist-packages/keyring/util/__init__.py", line 21, in wrapper func.always_returns = func(*args, **kwargs) File "/usr/lib/python3/dist-packages/keyring/backend.py", line 196, in get_all_keyring exceptions=TypeError)) File "/usr/lib/python3/dist-packages/keyring/util/__init__.py", line 31, in suppress_exceptions for callable in callables: File "/usr/lib/python3/dist-packages/keyring/backend.py", line 188, in is_class_viable keyring_cls.priority File "/usr/lib/python3/dist-packages/keyring/util/properties.py", line 24, in __get__ return self.fget.__get__(None, owner)() File "/usr/lib/python3/dist-packages/keyring/backends/SecretService.py", line 37, in priority bus = secretstorage.dbus_init() File "/usr/lib/python3/dist-packages/secretstorage/__init__.py", line 47, in dbus_init return dbus.SessionBus() File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 211, in __new__ mainloop=mainloop) File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 100, in __new__ bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop) File "/usr/lib/python3/dist-packages/dbus/bus.py", line 122, in __new__ bus = cls._new_for_bus(address_or_type, mainloop=mainloop) dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /tmp/dbus-yQkwBYfBbJ: Connection refused Does that ring any bells? 
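To illustrate the failure mode in this traceback — a single backend probe raising during discovery and aborting the whole plugin import — here is a minimal sketch (simplified stand-in code with made-up names; not the real keyring internals):

```python
# Simplified stand-in for keyring's backend discovery (names here are
# hypothetical, not the real keyring API). If a probe's exception type
# is not in the suppressed set, one broken backend kills the whole
# import instead of just being skipped.

class DBusException(Exception):
    """Stand-in for dbus.exceptions.DBusException."""

def secretservice_backend():
    # Probing SecretService touches the D-Bus session bus; on a
    # headless CI node there is none, so the probe raises.
    raise DBusException("Failed to connect to socket /tmp/dbus-XXXX")

def null_backend():
    return "null"

def get_all_keyrings(candidates, suppress=()):
    found = []
    for probe in candidates:
        try:
            found.append(probe())
        except suppress:
            continue  # backend unavailable: skip it
    return found

candidates = [secretservice_backend, null_backend]

# With DBusException suppressed, the broken backend is simply skipped:
print(get_all_keyrings(candidates, suppress=(DBusException,)))  # ['null']

# With no suppression, the probe error propagates and the caller
# (here, the plugin import) fails outright:
try:
    get_all_keyrings(candidates)
except DBusException as exc:
    print("import fails:", exc)
```

The point of the sketch: keyring's discovery suppresses only certain exception types (`TypeError` in the frame above), so a `DBusException` raised while ranking the SecretService backend escapes and breaks every importer of keyring.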
Best wishes, Neil On Wed, May 20, 2020 at 6:04 PM Neil Jerram wrote: > + ./stack.sh:main:1091 : create_keystone_accounts > + lib/keystone:create_keystone_accounts:314 : local admin_project > ++ lib/keystone:create_keystone_accounts:315 : oscwrap project show > admin -f value -c id > WARNING: Failed to import plugin identity. > ++ functions-common:oscwrap:2370 : return 2 > + lib/keystone:create_keystone_accounts:315 : admin_project='openstack: > '\''project show admin -f value -c id'\'' is not an openstack command. See > '\''openstack --help'\''. > > I believe this is a completely mainline Keystone setup by devstack. The > top-level stack.sh code is > > echo_summary "Starting Keystone" > > if [ "$KEYSTONE_AUTH_HOST" == "$SERVICE_HOST" ]; then > init_keystone > start_keystone > bootstrap_keystone > fi > > create_keystone_accounts > ... > > bootstrap_keystone succeeded but create_keystone_accounts failed as shown > above, trying to execute > > openstack project show "admin" -f value -c id > > IIUC, the rootmost problem here is "WARNING: Failed to import plugin > identity.", indicating that python-openstackclient is failing to import > its openstackclient.identity.client module. But I don't know any more > about why that would be. > > Any ideas? > > Many thanks, > Neil > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri May 22 21:46:44 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 22 May 2020 16:46:44 -0500 Subject: [all][qa] Gate Status: week 20-21 Message-ID: <1723e5a4d41.dd678407350678.3084912148040321651@ghanshyammann.com> Hello Everyone, I am highlighting a few of the gate issues recently fixed or in progress, in case you are facing the same in your project's gate or 3rd party CI. 1. Bug# 1880219. Jobs using older Tempest (Ubuntu Xenial node jobs) on stable/stein onwards fail in the gate or 3rd party CI.
This affects the same type of job I reported a few weeks back, but with different issues, and it is happening on 3rd party CI; I have not seen such a job or failure upstream yet. I am reproducing the scenario to test the fixes[1]. The earlier issue[2] was fixed by using the older Tempest. Now it has started failing because we use the master constraints on Stein and Train, and master constraints are no longer compatible with older Tempest[3]. The solution for this will be to use stable constraints whenever the older Tempest is used. This fix needs modification on the devstack as well as the tempest side. The Devstack changes[4] are working fine, but I need to fix it on the Tempest side also. 2. Bug#1880217. In-tree Tempest plugins python2 jobs on stable/stein or Train failing This might be a rare case, but it is still possible as we do have in-tree plugins on older stable branches. For in-tree plugins (in case any are still present, even though we migrated most of the tempest plugins into separate repos), the 'all-plugins' tox env, which is supposed to detect the system-wide installed plugins, will not work now for python2.7 jobs. This is because Tempest is now py3-only and so creates a py3 venv even on py2 jobs. The 'all-plugins' venv, which is py3, no longer detects the tempest plugins installed as python2.7 site-packages. Failure example: https://zuul.opendev.org/t/openstack/build/7d3f36f464bb4642bb1647f4b0b8e91f/log/job-output.txt#2375 If you encounter this issue, a quick fix is to install the in-tree plugins in the Tempest venv via TEMPEST_PLUGINS and run tests with the 'all' tox venv. Example: https://review.opendev.org/#/c/730378/2 3. The hacking fix for the new flake8 3.8.0 merged in master might need to be backported to stable branches also, but not for all projects.
But to have a safe guard against future flake8 3.9.0(which can also cause same issue), you can backport the hacking min version fix: - http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014828.html Have a happy weekend and feel free to append more gate bugs here you have encountered or fixed recently. [1] https://review.opendev.org/#/c/729611/3 [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014601.html [3] https://review.opendev.org/#/c/720264/ [4] https://review.opendev.org/#/c/729722/6 -gmann From aschultz at redhat.com Fri May 22 21:48:56 2020 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 22 May 2020 15:48:56 -0600 Subject: [keystone][devstack][openstackclient] Problem with Keystone setup in stable/ussuri devstack install In-Reply-To: References: Message-ID: On Fri, May 22, 2020 at 3:42 PM Neil Jerram wrote: > > Here's the traceback for why python-openstackclient can't import its identity plugin: > > Traceback (most recent call last): > File "/opt/stack/python-openstackclient/openstackclient/common/clientmanager.py", line 151, in get_plugin_modules > __import__(ep.module_name) > File "/opt/stack/python-openstackclient/openstackclient/identity/client.py", line 18, in > from keystoneclient.v2_0 import client as identity_client_v2 > File "/usr/local/lib/python3.6/dist-packages/keystoneclient/v2_0/__init__.py", line 1, in > from keystoneclient.v2_0.client import Client # noqa > File "/usr/local/lib/python3.6/dist-packages/keystoneclient/v2_0/client.py", line 21, in > from keystoneclient import httpclient > File "", line 1020, in _handle_fromlist > File "/usr/local/lib/python3.6/dist-packages/keystoneclient/__init__.py", line 72, in __getattr__ > return importlib.import_module('keystoneclient.%s' % name) > File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module > return _bootstrap._gcd_import(name[level:], package, level) > File "/usr/local/lib/python3.6/dist-packages/keystoneclient/httpclient.py", line 
43, in > import keyring > File "/usr/lib/python3/dist-packages/keyring/__init__.py", line 3, in > from .core import (set_keyring, get_keyring, set_password, get_password, > File "/usr/lib/python3/dist-packages/keyring/core.py", line 153, in > init_backend() > File "/usr/lib/python3/dist-packages/keyring/core.py", line 66, in init_backend > keyrings = filter(limit, backend.get_all_keyring()) > File "/usr/lib/python3/dist-packages/keyring/util/__init__.py", line 21, in wrapper > func.always_returns = func(*args, **kwargs) > File "/usr/lib/python3/dist-packages/keyring/backend.py", line 196, in get_all_keyring > exceptions=TypeError)) > File "/usr/lib/python3/dist-packages/keyring/util/__init__.py", line 31, in suppress_exceptions > for callable in callables: > File "/usr/lib/python3/dist-packages/keyring/backend.py", line 188, in is_class_viable > keyring_cls.priority > File "/usr/lib/python3/dist-packages/keyring/util/properties.py", line 24, in __get__ > return self.fget.__get__(None, owner)() > File "/usr/lib/python3/dist-packages/keyring/backends/SecretService.py", line 37, in priority > bus = secretstorage.dbus_init() > File "/usr/lib/python3/dist-packages/secretstorage/__init__.py", line 47, in dbus_init > return dbus.SessionBus() > File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 211, in __new__ > mainloop=mainloop) > File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 100, in __new__ > bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop) > File "/usr/lib/python3/dist-packages/dbus/bus.py", line 122, in __new__ > bus = cls._new_for_bus(address_or_type, mainloop=mainloop) > dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /tmp/dbus-yQkwBYfBbJ: Connection refused > > Does that ring any bells? > I haven't seen this exact issue, but there was an issue with secretstorage & jeepney a while back around it actually hanging because it wasn't properly handling dbus connection problems. 
I reported it against secretstorage but it ended up being an issue in jeepney. https://github.com/mitya57/secretstorage/issues/22 This bit of code should generally be silently ignored and the backend should just be ignored if not available so it might be an issue in one of the dependencies. Hope that points you in the right direction. Thanks, -Alex > Best wishes, > Neil > > > On Wed, May 20, 2020 at 6:04 PM Neil Jerram wrote: >> >> + ./stack.sh:main:1091 : create_keystone_accounts >> + lib/keystone:create_keystone_accounts:314 : local admin_project >> ++ lib/keystone:create_keystone_accounts:315 : oscwrap project show admin -f value -c id >> WARNING: Failed to import plugin identity. >> ++ functions-common:oscwrap:2370 : return 2 >> + lib/keystone:create_keystone_accounts:315 : admin_project='openstack: '\''project show admin -f value -c id'\'' is not an openstack command. See '\''openstack --help'\''. >> >> I believe this is a completely mainline Keystone setup by devstack. The top-level stack.sh code is >> >> echo_summary "Starting Keystone" >> >> if [ "$KEYSTONE_AUTH_HOST" == "$SERVICE_HOST" ]; then >> init_keystone >> start_keystone >> bootstrap_keystone >> fi >> >> create_keystone_accounts >> ... >> >> bootstrap_keystone succeeded but create_keystone_accounts failed as shown above, trying to execute >> >> openstack project show "admin" -f value -c id >> >> IIUC, the rootmost problem here is "WARNING: Failed to import plugin identity.", indicating that python-openstackclient is failing to import its openstackclient.identity.client module. But I don't know any more about why that would be. >> >> Any ideas? 
>> >> Many thanks, >> Neil >> From neil at tigera.io Fri May 22 22:35:11 2020 From: neil at tigera.io (Neil Jerram) Date: Fri, 22 May 2020 23:35:11 +0100 Subject: [keystone][devstack][openstackclient] Problem with Keystone setup in stable/ussuri devstack install In-Reply-To: References: Message-ID: On Fri, May 22, 2020 at 10:49 PM Alex Schultz wrote: > On Fri, May 22, 2020 at 3:42 PM Neil Jerram wrote: > > > > Here's the traceback for why python-openstackclient can't import its > identity plugin: > > > > Traceback (most recent call last): > > File > "/opt/stack/python-openstackclient/openstackclient/common/clientmanager.py", > line 151, in get_plugin_modules > > __import__(ep.module_name) > > File > "/opt/stack/python-openstackclient/openstackclient/identity/client.py", > line 18, in > > from keystoneclient.v2_0 import client as identity_client_v2 > > File > "/usr/local/lib/python3.6/dist-packages/keystoneclient/v2_0/__init__.py", > line 1, in > > from keystoneclient.v2_0.client import Client # noqa > > File > "/usr/local/lib/python3.6/dist-packages/keystoneclient/v2_0/client.py", > line 21, in > > from keystoneclient import httpclient > > File "", line 1020, in _handle_fromlist > > File > "/usr/local/lib/python3.6/dist-packages/keystoneclient/__init__.py", line > 72, in __getattr__ > > return importlib.import_module('keystoneclient.%s' % name) > > File "/usr/lib/python3.6/importlib/__init__.py", line 126, in > import_module > > return _bootstrap._gcd_import(name[level:], package, level) > > File > "/usr/local/lib/python3.6/dist-packages/keystoneclient/httpclient.py", line > 43, in > > import keyring > > File "/usr/lib/python3/dist-packages/keyring/__init__.py", line 3, in > > > from .core import (set_keyring, get_keyring, set_password, > get_password, > > File "/usr/lib/python3/dist-packages/keyring/core.py", line 153, in > > > init_backend() > > File "/usr/lib/python3/dist-packages/keyring/core.py", line 66, in > init_backend > > keyrings = filter(limit, 
backend.get_all_keyring()) > > File "/usr/lib/python3/dist-packages/keyring/util/__init__.py", line > 21, in wrapper > > func.always_returns = func(*args, **kwargs) > > File "/usr/lib/python3/dist-packages/keyring/backend.py", line 196, in > get_all_keyring > > exceptions=TypeError)) > > File "/usr/lib/python3/dist-packages/keyring/util/__init__.py", line > 31, in suppress_exceptions > > for callable in callables: > > File "/usr/lib/python3/dist-packages/keyring/backend.py", line 188, in > is_class_viable > > keyring_cls.priority > > File "/usr/lib/python3/dist-packages/keyring/util/properties.py", line > 24, in __get__ > > return self.fget.__get__(None, owner)() > > File > "/usr/lib/python3/dist-packages/keyring/backends/SecretService.py", line > 37, in priority > > bus = secretstorage.dbus_init() > > File "/usr/lib/python3/dist-packages/secretstorage/__init__.py", line > 47, in dbus_init > > return dbus.SessionBus() > > File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 211, in > __new__ > > mainloop=mainloop) > > File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 100, in > __new__ > > bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop) > > File "/usr/lib/python3/dist-packages/dbus/bus.py", line 122, in __new__ > > bus = cls._new_for_bus(address_or_type, mainloop=mainloop) > > dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: > Failed to connect to socket /tmp/dbus-yQkwBYfBbJ: Connection refused > > > > Does that ring any bells? > > > > I haven't seen this exact issue, but there was an issue with > secretstorage & jeepney a while back around it actually hanging > because it wasn't properly handling dbus connection problems. I > reported it against secretstorage but it ended up being an issue in > jeepney. 
https://github.com/mitya57/secretstorage/issues/22 > > This bit of code should generally be silently ignored and the backend > should just be ignored if not available so it might be an issue in one > of the dependencies. > > Hope that points you in the right direction. > > Thanks, > -Alex > Many thanks Alex. I believe I've just made some progress on this, and that the problem was not correctly setting up the stack user. I was previously running just ./stack.sh as the default user on a semaphore VM. I've now changed that to sudo tools/create-stack-user.sh cd .. sudo mkdir -p /opt/stack sudo mv devstack /opt/stack sudo chown -R stack:stack /opt/stack ls -la /opt/stack sudo -u stack -i bash -c 'cd devstack && ./stack.sh' and it seems to be getting a lot further. Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Sat May 23 15:07:17 2020 From: johfulto at redhat.com (John Fulton) Date: Sat, 23 May 2020 11:07:17 -0400 Subject: [tripleo][operators] Removal of mistral from the TripleO Undercloud In-Reply-To: References: Message-ID: Update: The underlying Ansible to support derived parameters within a TripleO deployment has merged [1]. We now only need to call and write some new Ansible modules to do the derivation. Then we can merge the THT change [2] and this feature's migration from Mistral will be complete. Until then anyone who wants this feature needs to enable and use Mistral on the undercloud. For anyone who wants to write new Ansible modules for derived params, now that the merge is done, if you build an undercloud from TripleO master and deploy with -p with the new plan-environment [2], the derived_parameters will be in the deployment plan and whatever that parameter contains will be applied as usual. The merged [1] patch has a clear indication of where [3] to call the new modules. 
Molecule tests the feature and has mock data so Saravanan, Jaganathan, and I should be able to develop and test [4] the new modules. I've documented how I extracted the mock data from my real deployment (based on Kevin's example) and shrank them in a personal repos [5] just in case the NFV team finds that information useful for the NFV modules and need to bring in more mock data. I'm personally planning to start the HCI derive params Ansible module after the PTG. Let's keep this thread updated during the Victoria cycle so that we can get the feature completely migrated from Mistral. Thanks, John [1] https://review.opendev.org/#/c/719466 [2] https://review.opendev.org/#/c/714217 [3] https://opendev.org/openstack/tripleo-ansible/src/commit/f99dea3b508f345e96a0e27e0250523e826eadbb/tripleo_ansible/roles/tripleo_derived_parameters/tasks/main.yml#L237 [4] cd tripleo-ansible; ./scripts/run-local-test tripleo_derived_param [5] https://github.com/fultonj/ussuri/tree/master/derive/data (I think a personal repos is sufficient but let me know if you want me to put it in the actual project; seems too ad hoc for TripleO itself to me) On Tue, Apr 28, 2020 at 8:10 PM John Fulton wrote: > > On Fri, Mar 20, 2020 at 6:07 PM John Fulton wrote: > > > > On Thu, Mar 19, 2020 at 5:37 AM Saravanan KR wrote: > > > > > > On Thu, Mar 19, 2020 at 1:02 AM John Fulton wrote: > > > > > > > > On Sat, Mar 14, 2020 at 8:06 AM Rabi Mishra wrote: > > > > > > > > > > On Sat, Mar 14, 2020 at 2:10 AM John Fulton wrote: > > > > >> > > > > >> On Fri, Mar 13, 2020 at 3:27 PM Kevin Carter wrote: > > > > >> > > > > > >> > Hello stackers, > > > > >> > > > > > >> > In the pursuit to remove Mistral from the TripleO undercloud, we've discovered an old capability that we need to figure out how best to handle. Currently, we provide the ability for an end-user (operator / deployer) to pass in "N" Mistral workflows as part of a given deployment plan which is processed by python-tripleoclient at runtime [0][1]. 
From what we have documented, and what we can find within the code-base, we're not using this feature by default. That said, we do not remove something if it is valuable in the field without an adequate replacement. The ability to run arbitrary Mistral workflows at deployment time was first created in 2017 [2] and while present all this time, its documented [3] and intra-code-base uses are still limited to samples [4]. > > > > >> > > > > >> As it stands now, we're on track to making Mistral inert this cycle and if our progress holds over the next couple of weeks the capability to run arbitrary Mistral workflows will be the only thing left within our codebase that relies on Mistral running on the Undercloud. > > > > >> > > > > >> > > > > > >> > So the question is what do we do with functionality. Do we remove this ability out right, do we convert the example workflow [5] into a stand-alone Ansible playbook and change the workflow runner to an arbitrary playbook runner, or do we simply leave everything as-is and deprecate it to be removed within the next two releases? > > > > > > > > > > > > > > > Yeah, as John mentioned, tripleo.derive_params.v1.derive_parameters workflow is surely being used for NFV (DPDK/SR-IOV) and HCI use cases and can't be deprecated or dropped. Though we've a generic interface in tripleoclient to run any workflow in plan-environment, I have not seen it being used for anything other than the mentioned workflow. > > > > > > > > > > In the scope of 'mistral-ansible' work, we seem to have two options. > > > > > > > > > > 1. Convert the workflow to ansible playbook 'as-is' i.e calculate and merge the derived parameters in plan-environment and as you've mentioned, change tripleoclient code to call any playbook in plan-environment.yaml and the parameters/vars. > > > > > > > > Nice idea. I hadn't thought of that. 
> > > > > > > > If there's a "hello world" example of this (which results in a THT > > > > param in the deployment plan being set to "hello world"), then I could > > > > try writing an ansible module to derive the HCI parameters and set > > > > them in place of the "hello world". > > > > > > > I am fine with the approach, but the only concern is, we have plans to > > > remove Heat in the coming cycles. One of inputs for the Mistral derive > > > parameters is fetched from the heat stack. If we are going to retain > > > it, then it has to be re-worked during the Heat removal. Mistral to > > > ansible could be the first step towards it. > > > > Hey Saravanan, > > > > That works for me. I'm glad we were able to come up with a way to do this. > > > > Kevin put some patches together today that will help a lot on this. > > > > 1. tht: https://review.opendev.org/#/c/714217/ > > 2. tripleo-ansible: https://review.opendev.org/#/c/714232/ > > 3. trilpeoclient: https://review.opendev.org/#/c/714198/ > > > > If I put these on my undercloud, then I think I can run: > > > > 'openstack overcloud deploy ... -p plan-environment-derived-params.yaml' > > > > as usual and then the updated tripleoclient and tht patch should > > trigger the new tripleo-ansible playbook in place of the Mistral > > workbook. > > > > I think I can then just update that tripleo-ansible patch to have it > > include a new derive_params_hci role and add a new derive_params_hci > > module where I'll stick code from the original Python prototype I did > > for it. I'll probably just shell to `openstack baremetal introspection > > data save ID` from ansible to get the Ironic data. I'll give it a try > > next week and update this thread. Even if Heat is not in the flow, at > > least the Ansible role and module can be reused. 
> > > > Note that it uses the new tripleo_plan_parameters_update module that > > Rabi wrote so that should make it easier to deal with the deployment > > plan itself (https://review.opendev.org/712604). > > Kevin and Rabi have made a lot of progress and with their unmerged > patches [1] 'openstack overcloud deploy -p > plan-environment-derived-params.yaml' has Ansible is running a > playbook [2] with place holders for us to use derive params which > looks like it will push the changes back to the deployment plan as > discussed above. > > As far as I can tell 'openstack overcloud deploy -p > plan-environment-derived-params.yaml' from master won't work the old > way as Mistral isn't in the picture anymore when 'openstack overcloud > deploy -p' is run (someone please correct me if I'm wrong). Thus, > derive params are not going to work with the U release unless we > finish the above. I don't think we should undo the progress. We're now > in the RC so time is short. As long as we still ship the Mistral > container on the undercloud in U, a deployer could have it derive the > params in theory if the workbook is run manually and the resultant > params are applied as overrides. Do we GA with that as a known issue > with workaround and then circle back to fix the above and backport it? > I don't think we should delay the release for it. I think we should > instead push harder on getting the playbook and roles that Kevin > started [2] (they could still land in U). > > John > > [1] https://review.opendev.org/#/q/topic:mistral_to_ansible+(status:open+OR+status:merged) > [2] https://review.opendev.org/#/c/719466/13/tripleo_ansible/roles/tripleo_derived_parameters/tasks/main.yml at 170 > > > > > John > > > > > Regards, > > > Saravanan KR > > > > > > > John > > > > > > > > > 2. Move the functionality further down the component chain in TripleO to have the required ansible host/group_vars set for them to be used by config-download playbooks/ansible/puppet. 
> > > > > > > > > > I guess option 1 would be easier within the timelines. I've done some preliminary work to move some of the functionality in relevant mistral actions to utils modules[1], so that they can be called from ansible modules without depending on mistral/mistral-lib and use those in a playbook that kinda replicate the tasks in the mistral workflow. > > > > > > > > > > Having said that, it would be good to know what the DFG:NFV folks think, as they are the original authors/maintainers of that workflow. > > > > > > > > > > > > > > > > > > > >> The Mistral based workflow took advantage of the deployment plan which > > > > >> was stored in Swift on the undercloud. My understanding is that too is > > > > >> going away. > > > > > > > > > > > > > > > I'm not sure that would be in the scope of 'mstral-to-ansible' work. Dropping swift would probably be a bit more complex, as we use it to store templates, plan-environment, plan backups (possibly missing few more) etc and would require significant design rework (may be possible when we get rid of heat in undercloud). In spite of heat using the templates from swift and merging environments on the client side, we've had already bumped heat's REST API json body size limit (max_json_body_size) on the undercloud to 4MB[2] from the default 1MB and sending all required templates as part of API request would not be a good idea from undercloud scalability pov. 
> > > > > [1] https://review.opendev.org/#/c/709546/ > > > > > [2] https://github.com/openstack/tripleo-heat-templates/blob/master/environments/undercloud.yaml#L109 > > > > > > > > > > -- > > > > > Regards, > > > > > Rabi Mishra > > > > > > > > From pramchan at yahoo.com Sat May 23 15:36:08 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Sat, 23 May 2020 15:36:08 +0000 (UTC) Subject: [all][interop] Please suggest capabilities to deprecate-remove and add to the OpenStack Powered Programs In-Reply-To: References: <1625228792.1113226.1590170188086.ref@mail.yahoo.com> <1625228792.1113226.1590170188086@mail.yahoo.com> Message-ID: <943070801.300159.1590248168801@mail.yahoo.com> Please note that Ghanshyam Mann has already submitted a patch for Cinder v2 deprecation and removal, and we have merged it for Cinder under https://review.opendev.org/#/c/728564/2/next.json PTLs, core reviewers, or anyone with API expertise can review that for reference, like the above for compute & storage, based on test failures available in refstack for 2020.06. However, this message is for the community to work on the main trunk as required by governance. https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=2020.06.json;h=7bc62929e5576519196dc4a2ea2de0627ab319ca;hb=HEAD We will have patches in the coming week.
Thanks, Prakash Sent from Yahoo Mail on Android On Fri, May 22, 2020 at 11:38 AM, Arkady.Kanevsky at dell.com wrote: Prakash, Why not submit a patch for the guidelines for compute, storage, and openstack for review? It will show the diffs against the previous version. Thanks, From: prakash RAMCHANDRAN Sent: Friday, May 22, 2020 12:56 PM To: openstack-discuss at lists.openstack.org Subject: [all][interop] Please suggest capabilities to deprecate-remove and add to the OpenStack Powered Programs [EXTERNAL EMAIL] Hi all, Please help the Interop WG to review and finalize the draft version for the next interop guidelines.
https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=2020.06.json;h=7bc62929e5576519196dc4a2ea2de0627ab319ca;hb=HEAD Link for the DNS add-on: https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/dns.2020.06.json;h=848e6d800f75e1f3aed8ba19263c3d1f86f4dde6;hb=HEAD Link for the Heat add-on: https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/orchestration.2020.06.json;h=6362abbac5756aaec24e25e5a0803f31497360f3;hb=HEAD These guidelines are intended to cover the "Stein", "Train", "Ussuri" and "Victoria" releases of OpenStack. We request that PTLs or core team members please confirm any updates you may want to include or exclude in the link above. The projects currently covered under the OpenStack Powered Programs include "Keystone", "Glance", "Nova", "Neutron", "Cinder", "Swift" and the add-on programs "Designate", "Heat". A more human-readable view of the capabilities can be found in RefStack: https://refstack.openstack.org/#/guidelines We would like to have feedback by the next Interop call on Friday; anyone can attend the 10 AM PDT call - refer to the etherpad at https://etherpad.opendev.org/p/interop If you have any additional topics to discuss at the PTG please add them to https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 The conference bridge will be assigned once we know the number of participants. Please add your names/topics to the interop PTG planning etherpad. Optionally, reply to this email with your inputs/patches. Please register to attend the Interop WG slot scheduled for the Virtual PTG on June 1, 6-8 AM PDT / 9-11 AM EST / 14-16 UTC / 9-11 PM BJT: https://www.openstack.org/ptg/ Thanks, for the Interop WG: Chair - Prakash Ramchandran, Vice Chair - Mark Voelker -------------- next part -------------- An HTML attachment was scrubbed...
URL: From whayutin at redhat.com Sun May 24 23:17:43 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Sun, 24 May 2020 17:17:43 -0600 Subject: [tripleo][operators] Removal of mistral from the TripleO Undercloud In-Reply-To: References: Message-ID: On Sat, May 23, 2020 at 9:09 AM John Fulton wrote: > Update: > > The underlying Ansible to support derived parameters within a TripleO > deployment has merged [1]. We now only need to call and write some new > Ansible modules to do the derivation. Then we can merge the THT change > [2] and this feature's migration from Mistral will be complete. Until > then anyone who wants this feature needs to enable and use Mistral on > the undercloud. > > For anyone who wants to write new Ansible modules for derived params, > now that the merge is done, if you build an undercloud from TripleO > master and deploy with -p with the new plan-environment [2], the > derived_parameters will be in the deployment plan and whatever that > parameter contains will be applied as usual. The merged [1] patch has > a clear indication of where [3] to call the new modules. > > Molecule tests the feature and has mock data so Saravanan, Jaganathan, > and I should be able to develop and test [4] the new modules. I've > documented how I extracted the mock data from my real deployment > (based on Kevin's example) and shrank them in a personal repos [5] > just in case the NFV team finds that information useful for the NFV > modules and need to bring in more mock data. I'm personally planning > to start the HCI derive params Ansible module after the PTG. > > Let's keep this thread updated during the Victoria cycle so that we > can get the feature completely migrated from Mistral. > > Thanks, > John > Good news John, thanks for the update!! Looking forward to hearing more at the PTG. 
> > [1] https://review.opendev.org/#/c/719466 > [2] https://review.opendev.org/#/c/714217 > [3] > https://opendev.org/openstack/tripleo-ansible/src/commit/f99dea3b508f345e96a0e27e0250523e826eadbb/tripleo_ansible/roles/tripleo_derived_parameters/tasks/main.yml#L237 > [4] cd tripleo-ansible; ./scripts/run-local-test tripleo_derived_param > [5] https://github.com/fultonj/ussuri/tree/master/derive/data (I think > a personal repos is sufficient but let me know if you want me to put > it in the actual project; seems too ad hoc for TripleO itself to me) > > On Tue, Apr 28, 2020 at 8:10 PM John Fulton wrote: > > > > On Fri, Mar 20, 2020 at 6:07 PM John Fulton wrote: > > > > > > On Thu, Mar 19, 2020 at 5:37 AM Saravanan KR > wrote: > > > > > > > > On Thu, Mar 19, 2020 at 1:02 AM John Fulton > wrote: > > > > > > > > > > On Sat, Mar 14, 2020 at 8:06 AM Rabi Mishra > wrote: > > > > > > > > > > > > On Sat, Mar 14, 2020 at 2:10 AM John Fulton > wrote: > > > > > >> > > > > > >> On Fri, Mar 13, 2020 at 3:27 PM Kevin Carter < > kecarter at redhat.com> wrote: > > > > > >> > > > > > > >> > Hello stackers, > > > > > >> > > > > > > >> > In the pursuit to remove Mistral from the TripleO undercloud, > we've discovered an old capability that we need to figure out how best to > handle. Currently, we provide the ability for an end-user (operator / > deployer) to pass in "N" Mistral workflows as part of a given deployment > plan which is processed by python-tripleoclient at runtime [0][1]. From > what we have documented, and what we can find within the code-base, we're > not using this feature by default. That said, we do not remove something if > it is valuable in the field without an adequate replacement. The ability to > run arbitrary Mistral workflows at deployment time was first created in > 2017 [2] and while present all this time, its documented [3] and > intra-code-base uses are still limited to samples [4]. 
> > > > > >> > > > > > >> As it stands now, we're on track to making Mistral inert this > cycle and if our progress holds over the next couple of weeks the > capability to run arbitrary Mistral workflows will be the only thing left > within our codebase that relies on Mistral running on the Undercloud. > > > > > >> > > > > > >> > > > > > > >> > So the question is what do we do with functionality. Do we > remove this ability out right, do we convert the example workflow [5] into > a stand-alone Ansible playbook and change the workflow runner to an > arbitrary playbook runner, or do we simply leave everything as-is and > deprecate it to be removed within the next two releases? > > > > > > > > > > > > > > > > > > Yeah, as John mentioned, > tripleo.derive_params.v1.derive_parameters workflow is surely being used > for NFV (DPDK/SR-IOV) and HCI use cases and can't be deprecated or > dropped. Though we've a generic interface in tripleoclient to run any > workflow in plan-environment, I have not seen it being used for anything > other than the mentioned workflow. > > > > > > > > > > > > In the scope of 'mistral-ansible' work, we seem to have two > options. > > > > > > > > > > > > 1. Convert the workflow to ansible playbook 'as-is' i.e > calculate and merge the derived parameters in plan-environment and as > you've mentioned, change tripleoclient code to call any playbook in > plan-environment.yaml and the parameters/vars. > > > > > > > > > > Nice idea. I hadn't thought of that. > > > > > > > > > > If there's a "hello world" example of this (which results in a THT > > > > > param in the deployment plan being set to "hello world"), then I > could > > > > > try writing an ansible module to derive the HCI parameters and set > > > > > them in place of the "hello world". > > > > > > > > > I am fine with the approach, but the only concern is, we have plans > to > > > > remove Heat in the coming cycles. 
One of the inputs for the Mistral > derive > > > > parameters is fetched from the heat stack. If we are going to retain > > > > it, then it has to be re-worked during the Heat removal. Mistral to > > > > ansible could be the first step towards it. > > > > > > Hey Saravanan, > > > > > > That works for me. I'm glad we were able to come up with a way to do > this. > > > > > > Kevin put some patches together today that will help a lot on this. > > > > > > 1. tht: https://review.opendev.org/#/c/714217/ > > > 2. tripleo-ansible: https://review.opendev.org/#/c/714232/ > > > 3. tripleoclient: https://review.opendev.org/#/c/714198/ > > > > > > If I put these on my undercloud, then I think I can run: > > > > > > 'openstack overcloud deploy ... -p > plan-environment-derived-params.yaml' > > > > > > as usual and then the updated tripleoclient and tht patch should > > > trigger the new tripleo-ansible playbook in place of the Mistral > > > workbook. > > > > > > I think I can then just update that tripleo-ansible patch to have it > > > include a new derive_params_hci role and add a new derive_params_hci > > > module where I'll stick code from the original Python prototype I did > > > for it. I'll probably just shell to `openstack baremetal introspection > > > data save ID` from ansible to get the Ironic data. I'll give it a try > > > next week and update this thread. Even if Heat is not in the flow, at > > > least the Ansible role and module can be reused. > > > > > > Note that it uses the new tripleo_plan_parameters_update module that > > > Rabi wrote so that should make it easier to deal with the deployment > > > plan itself (https://review.opendev.org/712604).
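Since John mentions reusing his original Python prototype inside a new derive_params_hci module, here is an illustrative sketch of the kind of arithmetic involved (the constants and names below are assumptions for illustration, not the actual prototype): set memory aside for the Ceph OSDs first, work out how many average-sized guests fit in the remainder, then reserve the per-guest hypervisor overhead on top.

```python
MB_PER_GB = 1024


def derive_reserved_host_memory(host_mem_gb, num_osds,
                                mem_per_osd_gb=5,
                                avg_guest_mem_gb=2,
                                overhead_per_guest_mb=500):
    """Derive a reserved_host_memory value (in MB) for a hyper-converged
    node running both Nova guests and Ceph OSDs.

    Memory for the OSDs is set aside first; the remainder determines how
    many average-sized guests fit, and each guest also costs a fixed
    hypervisor overhead that must be reserved from Nova's point of view.
    """
    osd_mem_mb = num_osds * mem_per_osd_gb * MB_PER_GB
    guest_cost_mb = avg_guest_mem_gb * MB_PER_GB + overhead_per_guest_mb
    num_guests = (host_mem_gb * MB_PER_GB - osd_mem_mb) // guest_cost_mb
    return osd_mem_mb + num_guests * overhead_per_guest_mb
```

An Ansible module wrapping a calculation like this would read the host memory and OSD count from the saved introspection data and push the result back into the deployment plan (e.g. via tripleo_plan_parameters_update).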
> > > > Kevin and Rabi have made a lot of progress and with their unmerged > > patches [1] 'openstack overcloud deploy -p > > plan-environment-derived-params.yaml' has Ansible running a > > playbook [2] with placeholders for us to use derive params which > > looks like it will push the changes back to the deployment plan as > > discussed above. > > > > As far as I can tell 'openstack overcloud deploy -p > > plan-environment-derived-params.yaml' from master won't work the old > > way as Mistral isn't in the picture anymore when 'openstack overcloud > > deploy -p' is run (someone please correct me if I'm wrong). Thus, > > derive params are not going to work with the U release unless we > > finish the above. I don't think we should undo the progress. We're now > > in the RC so time is short. As long as we still ship the Mistral > > container on the undercloud in U, a deployer could have it derive the > > params in theory if the workbook is run manually and the resultant > > params are applied as overrides. Do we GA with that as a known issue > > with a workaround and then circle back to fix the above and backport it? > > I don't think we should delay the release for it. I think we should > > instead push harder on getting the playbook and roles that Kevin > > started [2] (they could still land in U). > > > > John > > > > [1] > https://review.opendev.org/#/q/topic:mistral_to_ansible+(status:open+OR+status:merged) > > [2] > https://review.opendev.org/#/c/719466/13/tripleo_ansible/roles/tripleo_derived_parameters/tasks/main.yml at 170 > > > > > > > > John > > > > > > > Regards, > > > > Saravanan KR > > > > > > > > > John > > > > > > > > > > > 2. Move the functionality further down the component chain in > TripleO to have the required ansible host/group_vars set for them to be > used by config-download playbooks/ansible/puppet. > > > > > > > > > > > > I guess option 1 would be easier within the timelines.
I've done > some preliminary work to move some of the functionality in relevant mistral > actions to utils modules[1], so that they can be called from ansible > modules without depending on mistral/mistral-lib and use those in a > playbook that kinda replicates the tasks in the mistral workflow. > > > > > > > > > > > > Having said that, it would be good to know what the DFG:NFV > folks think, as they are the original authors/maintainers of that workflow. > > > > > > > > > > > > > > > > > > > > > > > >> The Mistral based workflow took advantage of the deployment > plan which > > > > > >> was stored in Swift on the undercloud. My understanding is that > too is > > > > > >> going away. > > > > > > > > > > > > > > > > > > I'm not sure that would be in the scope of 'mistral-to-ansible' > work. Dropping swift would probably be a bit more complex, as we use it to > store templates, plan-environment, plan backups (possibly missing a few more) > etc and would require significant design rework (may be possible when we > get rid of heat in undercloud). In spite of heat using the templates from > swift and merging environments on the client side, we've already bumped > heat's REST API json body size limit (max_json_body_size) on the undercloud > to 4MB[2] from the default 1MB, and sending all required templates as part > of the API request would not be a good idea from an undercloud scalability pov. > > > > > > > > > > > > [1] https://review.opendev.org/#/c/709546/ > > > > > > [2] > https://github.com/openstack/tripleo-heat-templates/blob/master/environments/undercloud.yaml#L109 > > > > > > > > > > > > -- > > > > > > Regards, > > > > > > Rabi Mishra > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.kirkwood at catalyst.net.nz Mon May 25 05:41:08 2020 From: mark.kirkwood at catalyst.net.nz (Mark Kirkwood) Date: Mon, 25 May 2020 17:41:08 +1200 Subject: [swift] Rolling upgrade, any version relationships?
In-Reply-To: <1e733746-66d2-6d25-3759-3ed6598bc46d@nvidia.com> References: <5f55bc36-b51c-5c6d-ad0f-63a32fcba2d4@catalyst.net.nz> <4286adc3-4f78-7580-fe22-1bc6fa2ce2e5@catalyst.net.nz> <1e733746-66d2-6d25-3759-3ed6598bc46d@nvidia.com> Message-ID: <7f346dbf-05ec-cefb-eb85-e3035eaa4ba6@catalyst.net.nz> On 23/05/20 5:29 am, Tim Burke wrote: > On 4/14/20 6:00 PM, Mark Kirkwood wrote: >> Hi all, flagging this again as it would be great to have a definite >> answer to these questions! Thanks! >> >> On 11/03/20 3:27 pm, Mark Kirkwood wrote: >>> Hi, we are looking at upgrading our 2.7.0 Swift cluster. In the past >>> I've modeled this on a dev system by upgrading storage nodes one by >>> one (using 2.17 as the target version). This seemed to work well - I >>> deliberately left the cluster half upgraded for an extended period to >>> test for any cross version weirdness (didn't see any). However I'm >>> wanting to check that I have not missed something important. So my >>> questions are: >>> >>> - If upgrading from 2.7.0 is it safe to just grab the latest version >>> (e.g 2.23)? >>> >>> - If not is there a preferred version to jump to first? >>> >>> - Is it ok for the upgrade to take extended time (e.g weeks) and >>> therefore be running with some new and some old storage nodes for that >>> time? >>> >>> regards >>> >>> Mark >>> >>> >> > Hi Mark, > > It should be perfectly fine upgrading from 2.7.0 to latest. If > possible, you'll want to upgrade backend servers (object, container, > account) before proxies, though I've done many upgrades where that > wasn't possible (because all nodes were running all services). We've > never (to my knowledge, anyway) had anything like a checkpoint release. > > Having a long in-progress window for the upgrade increases the > likelihood that you'll uncover some heretofore unknown upgrade bug, > but it should be mostly OK. 
There may be some noisy logging, due to > changes in the replication protocol for example, but as long as the > cluster was healthy going into the upgrade, I wouldn't worry too much. > Shorter is generally better, though, especially if your cluster > topology requires that some proxies get upgraded before all the > backend servers are upgraded. > > Thanks Tim, exactly what I needed to know! Yeah, we are able to upgrade all our storage nodes (account, container, object) before the proxies, so fingers crossed. From skaplons at redhat.com Mon May 25 08:36:29 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 25 May 2020 10:36:29 +0200 Subject: [neutron] Team meeting on Monday 25.05.2020 cancelled Message-ID: <20200525083629.s73ych6cfnj6zw5d@skaplons-mac> Hi, As today is the Memorial Day in the USA, and many from team members are on holidays, lets cancel today's team meeting. Next week there is virtual PTG so we will meet there :) -- Slawek Kaplonski Senior software engineer Red Hat From sbauza at redhat.com Mon May 25 09:29:46 2020 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 25 May 2020 11:29:46 +0200 Subject: [nova][neutron][ptg] Future of the routed network support In-Reply-To: References: Message-ID: On Wed, May 20, 2020 at 4:12 PM Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > There is only basic scheduling support in nova for the neutron routed > networks feature (server create with port.ip_allocation=deferred seems > to work). There was multiple attempts in the past to complete the > support (e.g.server create with port.ip_allocation=immediate or server > move operations). The latest attempt started by Matt couple of cycles > ago, and in the last cycle I tried to push that forward[1]. 
When I > added this topic to the PTG etherpad I thought I would have time in the > Victoria cycle to continue [1] but internal priorities have changed. So > finishing this feature needs some developers. If there are volunteers > for Victoria then please let me know and then we can keep this as a > topic for the PTG but otherwise I will remove it from the schedule. > > After discussing with Sean (Mooney), we agreed on passing the 'NUMA topology in Placement' implementation to him so I can volunteer for this one. So, let's discuss it at the PTG. -Sylvain > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] https://review.opendev.org/#/q/topic:routed-networks-scheduling > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon May 25 09:31:41 2020 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 25 May 2020 11:31:41 +0200 Subject: [nova][ptg] Feature Liaison In-Reply-To: References: Message-ID: On Mon, May 18, 2020 at 5:56 PM Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > Last cycle we introduced the Feature Liaison process [1]. I think it > is time to reflect on it. > Did it help? > Do we need to tweak it? > > Personally for me it did not help much but I think this is a fairly low > cost process so I'm OK to keep it as is. > > Cool with me. Maybe we could just say it's optional, so we could better see who would like to get a mentor. -Sylvain Cheers, > gibi > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] https://review.opendev.org/#/c/685857/ > > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mjozefcz at redhat.com Mon May 25 10:07:57 2020 From: mjozefcz at redhat.com (Maciej Jozefczyk) Date: Mon, 25 May 2020 12:07:57 +0200 Subject: Bug deputy report for week of 2020-05-18 Message-ID: Hello, Hey, Here's the bug deputy report for last week: ===== High ===== * https://bugs.launchpad.net/neutron/+bug/1879301 [OVN] networking-ovn-tempest-dsvm-ovs-release-python2 job starts to fail on tempest py2 installation fix already merged: https://review.opendev.org/#/c/728817/ * https://bugs.launchpad.net/neutron/+bug/1879307 Recent stable/rocky change breaks networking-calico's interface driver fix proposed: https://review.opendev.org/#/c/729167/ * https://bugs.launchpad.net/neutron/+bug/1879950 [OVN] neutron-ovn-db-sync-util wipes out hash ring nodes fix proposed: https://review.opendev.org/#/c/729997/ ===== Medium ===== * https://bugs.launchpad.net/neutron/+bug/1879215 [L3] Unexcepted HA router scheduled instance shows up after manully scheduling Unassigned. * https://bugs.launchpad.net/neutron/+bug/1879502 [ovn] neutron-ovn-db-sync-util fails with PortContext has no session fix proposed: https://review.opendev.org/#/c/729264/ * https://bugs.launchpad.net/neutron/+bug/1879747 [DOC] Manual install & Configuration in neutron fix proposed: https://review.opendev.org/#/c/729894/ ===== Low ===== * https://bugs.launchpad.net/neutron/+bug/1879407 [OVN] Modifying FIP that is no associated causes ovn_revision_numbers to go stale unassigned, no big deal * https://bugs.launchpad.net/neutron/+bug/1879716 Policy is not enforced for network mtu config unassigned ==== Reopened ==== https://bugs.launchpad.net/neutron/+bug/1742401 Fullstack tests neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork fails often ===== Incomplete ===== * https://bugs.launchpad.net/neutron/+bug/1878719 DHCP Agent's iptables CHECKSUM rule causes skb_warn_bad_offload kernel * https://bugs.launchpad.net/neutron/+bug/1878905 when I use dvr mode for neutron,it's not functional 
* https://bugs.launchpad.net/neutron/+bug/1879009 attaching extra port to server raise duplicate dns-name error fix proposed: https://review.opendev.org/#/c/729440/ (in progress) See you, Maciej -- Best regards, Maciej Józefczyk -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Mon May 25 11:45:23 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 25 May 2020 13:45:23 +0200 Subject: [nova][ptg] Documentation in nova In-Reply-To: <02e32e11204fb4e91979552c5bcda05bda2dd148.camel@redhat.com> References: <02e32e11204fb4e91979552c5bcda05bda2dd148.camel@redhat.com> Message-ID: On Fri, May 22, 2020 at 21:51, Stephen Finucane wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > Our documentation in nova is suffering from bit rot, the ongoing > effects of the documentation migration during Pike (I think), and > general lack of attention. I've been working to tackle this but > progress has been very slow. I suggested this a couple of PTGs ago, > but > once again I'd like to explore going on a solo run with these by > writing and self-approving (perhaps after an agreed interval) > *multiple* > large doc refactors. I've left some notes below, copied from the > Etherpad, but in summary I believe this is the only realistic way we > will ever be able to fix our documentation. > > Cheers, > Stephen > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > > --- > > Documentation reviews are appreciated but are generally seen as low > priority.
See: > > * https://review.opendev.org/667165 (docs: Rewrite quotas > documentation) > * https://review.opendev.org/667133 (docs: Rewrite host aggregate, > availability zone docs) > * https://review.opendev.org/664396 (docs: Document how to revert, > confirm a cold migration) > * https://review.opendev.org/635243 (docs: Rework the PCI > passthrough > guides) > * https://review.opendev.org/640730 (docs: Rework all things > metadata'y) > * https://review.opendev.org/625878 (doc: Rework 'resize' user doc) > * ... > Thank you for working on all this documentation. > > I (stephenfin) want permission to iterate on documentation and merge > unilaterally unless someone expresses a clear interest Honestly, self-approval feels scary to me as it creates precedent. I'm happy to get pinged, pushed, harassed into reviewing the doc patches instead. Cheers, gibi > > Pros: > > * Documentation gets better > * I can iterate *much* faster > > Cons: > > * Possibility of mistakes or "bad documentation is worse than no > documentation" > * Counterpoint: Our documentation is already bad/wrong and is > getting worse due to bitrot. > * Better is subjective > * Counterpoints: The general structure of the patches I listed > above was accepted in each case, I have other projects I have > authored to point to (e.g. patchwork.readthedocs.io/) > * Merge conflicts > * Counterpoint: It's docs. Just drop these (non-functional) hunks > if it's awkward. > > > From alifshit at redhat.com Mon May 25 13:15:38 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Mon, 25 May 2020 09:15:38 -0400 Subject: [nova][ptg] Documentation in nova In-Reply-To: References: <02e32e11204fb4e91979552c5bcda05bda2dd148.camel@redhat.com> Message-ID: On Mon, May 25, 2020 at 7:48 AM Balázs Gibizer wrote: > > > > On Fri, May 22, 2020 at 21:51, Stephen Finucane > wrote: > > Hi, > > > > [This is a topic from the PTG etherpad [0].
As the PTG time is > > intentionally kept short, let's try to discuss it or even conclude it > > before the PTG] > > > > Our documentation in nova is suffering from bit rot, the ongoing > > effects of the documentation migration during Pike (I think), and > > general lack of attention. I've been working to tackle this but > > progress has been very slow. I suggested this a couple of PTGs ago, > > but > > once again I'd like to explore going on a solo run with these by > > writing and self-approving (perhaps after a agreed interval) > > *multiple* > > large doc refactors. I've left some notes below, copied from the > > Etherpad, but in summary I believe this is the only realistic way we > > will ever be able to fix our documentation. > > > > Cheers, > > Stephen > > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > > > > --- > > > > Documentation reviews are appreciated but are generally seen as low > > priority. See: > > > > * https://review.opendev.org/667165 (docs: Rewrite quotas > > documentation) > > * https://review.opendev.org/667133 (docs: Rewrite host aggregate, > > availability zone docs) > > * https://review.opendev.org/664396 (docs: Document how to revert, > > confirm a cold migration) > > * https://review.opendev.org/635243 (docs: Rework the PCI > > passthrough > > guides) > > * https://review.opendev.org/640730 (docs: Rework all things > > metadata'y) > > * https://review.opendev.org/625878 (doc: Rework 'resize' user doc) > > * ... > > > > Thank you working on all these documentations. > > > I (stephenfin) want permission to iterate on documentation and merge > > unilaterally unless someone expresses a clear interest > > Honestly, self approve feels scary to me as it creates precedent. I'm > happy to get pinged, pushed, harassed into reviewing the doc patches > instead. Agreed. FWIW, I'm willing to review those as well (though obviously my +1 won't be enough to do anything on its own). 
> > Cheers, > gibi > > > > > Pros: > > > > * Documentation gets better > > * I can iterate *much* faster > > > > Cons: > > > > * Possibility of mistakes or "bad documentation is worse than no > > documentation" > > * Counterpoint: Our documentation is already bad/wrong and is > > getting worse due to bitrot. > > * Better is subjective > > * Counterpoints: The general structure of the patches I listed > > above was accepted in each case, I have other projects I have > > authored to point to (e.g. patchwork.readthedocs.io/) > > * Merge conflicts > > * Counterpoint: It's docs. Just drop these (non-functional) hunks > > if it's awkward. > > > > > > > > > From balazs.gibizer at est.tech Mon May 25 14:22:51 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 25 May 2020 16:22:51 +0200 Subject: [nova][ptg] Runway process in Victoria In-Reply-To: References: <4AAJAQ.8C39QP5J2M3Z@est.tech> Message-ID: <3A5WAQ.63TG1GRWBIIP@est.tech> On Fri, May 22, 2020 at 21:34, Stephen Finucane wrote: > On Mon, 2020-05-18 at 17:42 +0200, Balázs Gibizer wrote: >> Hi, >> >> [This is a topic from the PTG etherpad [0]. As the PTG time is >> intentionally kept short, let's try to discuss it or even conclude >> it >> before the PTG] >> >> >> In the last 4 cycles we used a process called runway to focus and >> timebox of the team's feature review effort. However compared to the >> previous cycles in ussuri we did not really keep the process >> running. >> Just compare the length of the Log section of each etherpad >> [1][2][3][4] to see the difference. So I have two questions: >> >> 1) Do we want to keep the process in Victoria? >> >> 2) If yes, how we can make the process running? >> 2.1) How can we keep the runway etherpad up-to-date? >> 2.2) How to make sure that the team is focusing on the reviews that >> are >> in the runway slots? 
>> >> Personally I don't want to advertise this process for contributors >> if >> the core team is not agreed and committed to keep the process >> running >> as it would lead to unnecessary disappointment. > > I tend to use this as a way to find things to review, though I think > I'd be equally well served by a gerrit dashboard that filtered on bp > topics. I've stopped using it because I already do a lot of reviews > and > have no issue finding more but also, more importantly, because I > didn't > find having *my* items in a runway significantly increased reviews > from > anyone != mriedem. If there were sign on from every core to prioritize > this (perhaps with a reminder during the weekly meeting?) then I'd > embrace it again. If not though, I'd rather we dropped it than make > false promises. I totally agree to avoid the false promise. As far as I understood, you dropped reviewing things in the runway at least partly because the focused review of your patches that the runway promised was not delivered. This supports my feeling that the core team needs to re-take the agreement that we want to follow this process and that we are willing to prioritize reviews in the slots. I, as the PTL, am willing to do the scheduling of the slots (a.k.a. the paperwork) and sure, I can add a topic to the meeting agenda about patches in the runway slots. I, as a core, am willing to try the runway process again by prioritizing reviewing things in the slots. Let's see if other cores will join in.
cheers, gibi > > Stephen > >> >> Cheers, >> gibi >> >> [0] https://etherpad.opendev.org/p/nova-victoria-ptg >> [1] https://etherpad.opendev.org/p/nova-runways-rocky >> [2] https://etherpad.opendev.org/p/nova-runways-stein >> [3] https://etherpad.opendev.org/p/nova-runways-train >> [4] https://etherpad.opendev.org/p/nova-runways-ussuri >> >> >> > From balazs.gibizer at est.tech Mon May 25 14:25:20 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 25 May 2020 16:25:20 +0200 Subject: [nova][ptg] Documentation in nova In-Reply-To: References: <02e32e11204fb4e91979552c5bcda05bda2dd148.camel@redhat.com> Message-ID: <8E5WAQ.QVEX9LKM41JX2@est.tech> On Mon, May 25, 2020 at 09:15, Artom Lifshitz wrote: > On Mon, May 25, 2020 at 7:48 AM Balázs Gibizer > wrote: >> >> >> >> On Fri, May 22, 2020 at 21:51, Stephen Finucane >> >> wrote: >> > Hi, >> > >> > [This is a topic from the PTG etherpad [0]. As the PTG time is >> > intentionally kept short, let's try to discuss it or even >> conclude it >> > before the PTG] >> > >> > Our documentation in nova is suffering from bit rot, the ongoing >> > effects of the documentation migration during Pike (I think), and >> > general lack of attention. I've been working to tackle this but >> > progress has been very slow. I suggested this a couple of PTGs >> ago, >> > but >> > once again I'd like to explore going on a solo run with these by >> > writing and self-approving (perhaps after a agreed interval) >> > *multiple* >> > large doc refactors. I've left some notes below, copied from the >> > Etherpad, but in summary I believe this is the only realistic way >> we >> > will ever be able to fix our documentation. >> > >> > Cheers, >> > Stephen >> > >> > [0] https://etherpad.opendev.org/p/nova-victoria-ptg >> > >> > --- >> > >> > Documentation reviews are appreciated but are generally seen as >> low >> > priority. 
See: >> > >> > * https://review.opendev.org/667165 (docs: Rewrite quotas >> > documentation) >> > * https://review.opendev.org/667133 (docs: Rewrite host >> aggregate, >> > availability zone docs) >> > * https://review.opendev.org/664396 (docs: Document how to >> revert, >> > confirm a cold migration) >> > * https://review.opendev.org/635243 (docs: Rework the PCI >> > passthrough >> > guides) >> > * https://review.opendev.org/640730 (docs: Rework all things >> > metadata'y) >> > * https://review.opendev.org/625878 (doc: Rework 'resize' user >> doc) >> > * ... >> >> Thank you working on all these documentations. >> >> > I (stephenfin) want permission to iterate on documentation and >> merge >> > unilaterally unless someone expresses a clear interest >> >> Honestly, self approve feels scary to me as it creates precedent. >> I'm >> happy to get pinged, pushed, harassed into reviewing the doc patches >> instead. > > Agreed. FWIW, I'm willing to review those as well (though obviously my > +1 won't be enough to do anything on its own). I can be convinced to ease up the rules for pure doc patches. Maybe one +2 would be enough for a pure doc patch to merge if there are also +1s from SMEs on the patch. Cheers, gibi > >> >> Cheers, >> gibi >> >> > >> > Pros: >> > >> > * Documentation gets better >> > * I can iterate *much* faster >> > >> > Cons: >> > >> > * Possibility of mistakes or "bad documentation is worse than no >> > documentation" >> > * Counterpoint: Our documentation is already bad/wrong and is >> > getting worse due to bitrot. >> > * Better is subjective >> > * Counterpoints: The general structure of the patches I listed >> > above was accepted in each case, I have other projects I >> have >> > authored to point to (e.g. patchwork.readthedocs.io/) >> > * Merge conflicts >> > * Counterpoint: It's docs. Just drop these (non-functional) >> hunks >> > if it's awkward.
>> > >> > >> >> >> > From gmann at ghanshyammann.com Mon May 25 15:09:40 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 25 May 2020 10:09:40 -0500 Subject: [nova][ptg] Feature Liaison In-Reply-To: References: Message-ID: <1724c61d79c.e3730188581.5966946061881788712@ghanshyammann.com> ---- On Mon, 25 May 2020 04:31:41 -0500 Sylvain Bauza wrote ---- > > > On Mon, May 18, 2020 at 5:56 PM Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > Last cycle we introduced the Feature Liaison process [1]. I think this > is time to reflect on it. > Did it helped? > Do we need to tweak it? > > Personally for me it did not help much but I think this is a fairly low > cost process so I'm OK to keep it as is. > > > Cool with me. Maybe we could just tell it's optional, so we could better see who would like to get some mentor for them. +1 on making it optional and keeping it for new contributors, who can come along at any time in the future. So at least if anyone asks, we can tell them this is how you can get a dedicated core for your code review/help. -gmann > -Sylvain > Cheers, > gibi > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] https://review.opendev.org/#/c/685857/ > > > > From gmann at ghanshyammann.com Mon May 25 15:19:39 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 25 May 2020 10:19:39 -0500 Subject: [nova][ptg] Can we close old bugs? In-Reply-To: <9a3fd36ccce34740bd41f4e231eab77f4d21cb7e.camel@redhat.com> References: <9a3fd36ccce34740bd41f4e231eab77f4d21cb7e.camel@redhat.com> Message-ID: <1724c6afb35.f7f9f6971040.4031817624454912429@ghanshyammann.com> ---- On Fri, 22 May 2020 15:37:21 -0500 Stephen Finucane wrote ---- > On Mon, 2020-05-18 at 18:11 +0200, Balázs Gibizer wrote: > > Hi, > > > > [This is a topic from the PTG etherpad [0].
As the PTG time is > > intentionally kept short, let's try to discuss it or even conclude > > it before the PTG] > > > > We have more than 800 open bugs in nova [1] and the oldest is 8 > > years old. > > Can we close old bugs? > > Yes. > > > If yes, what would be the closing criteria? Age and status? > > Age is probably easiest. I would prefer to keep those with an open, not > -2'd review around, since that implies there might be something to pick > up, but other than that, simple age will do. You're thinking something > similar. > > > Personally I would close every bug that is not updated in the last 3 > > years and not in INPROGRESS state. I roughly counted, and only ~95 bugs are older than 3 years, so closing those would not decrease the total much compared to 800. > > This is very conservative. 18 months would be more than enough, IMO. As > Sean has said elsewhere, we have other bugs in Bugzilla and we're quite > aggressive in closing those if we're realistically not going to work on > a resolution any time soon. I think 18 months makes sense, and we can close with a comment like "please check on master (or <= Stein) whether this is fixed; if not, please report it back".
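The age-and-status criterion being discussed is easy to prototype offline before pointing anything at Launchpad; here is a minimal sketch (the helper name, cutoff, and bug tuples are all illustrative, not real Launchpad data or API calls):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical triage helper: keep bugs that are INPROGRESS or recently
# updated; everything else becomes a candidate for closing with a
# "please re-test on master" comment.
CUTOFF = timedelta(days=18 * 30)  # roughly the 18 months discussed above

def closable(bugs, now=None):
    """Return bugs not updated within CUTOFF and not INPROGRESS.

    `bugs` is an iterable of (id, status, last_updated) tuples.
    """
    now = now or datetime.now(timezone.utc)
    return [
        (bug_id, status, updated)
        for bug_id, status, updated in bugs
        if status != "INPROGRESS" and now - updated > CUTOFF
    ]

now = datetime(2020, 5, 25, tzinfo=timezone.utc)
bugs = [
    (1001, "NEW", datetime(2012, 1, 1, tzinfo=timezone.utc)),        # 8 years old
    (1002, "INPROGRESS", datetime(2015, 6, 1, tzinfo=timezone.utc)),  # kept: in progress
    (1003, "CONFIRMED", datetime(2019, 12, 1, tzinfo=timezone.utc)),  # kept: recent
]
print([b[0] for b in closable(bugs, now=now)])  # → [1001]
```

The same filter could of course be fed from a real Launchpad query; the point is only that the policy fits in a dozen lines once the cutoff is agreed.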
-gmann > > Stephen > > > Cheers, > > gibi > > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > > [1] > > https://bugs.launchpad.net/nova/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=INCOMPLETE_WITH_RESPONSE > > > > > > > > > From gmann at ghanshyammann.com Mon May 25 15:27:05 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 25 May 2020 10:27:05 -0500 Subject: [all][interop] Please suggest capabilities to deprecate-remove and add to the OpenStack Powered Programs In-Reply-To: <943070801.300159.1590248168801@mail.yahoo.com> References: <1625228792.1113226.1590170188086.ref@mail.yahoo.com> <1625228792.1113226.1590170188086@mail.yahoo.com> <943070801.300159.1590248168801@mail.yahoo.com> Message-ID: <1724c71cac9.12836da8d1306.8306644386233636073@ghanshyammann.com> ---- On Sat, 23 May 2020 10:36:08 -0500 prakash RAMCHANDRAN wrote ---- > Already, > Please note Ghanshtam Maan had submitted a patch for Cinder v2 deprication and removal and we have merged it for > Cinder under > https://review.opendev.org/#/c/728564/2/next.json > > PTLs or core reviewers or anyone with API expertise can review that for reference like above for compute & storage > based on test failures available in refstack for 2020.06 > However this message is for community to work on main trunk as required by governance. > > https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=2020.06.json;h=7bc62929e5576519196dc4a2ea2de0627ab319ca;hb=HEAD Hi Prakash, For the compute guidelines, most of the new capabilities are delivered as API changes (or new APIs) behind a microversion. As you know, interop does not have API-version-based capabilities yet, so I would say we should continue with the current compute guidelines and continue the discussion on adding the microversion capabilities.
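To see why version-based capabilities are tricky, recall that each new compute capability is only available at or above the microversion that introduced it, and a client has to request a version inside the server's advertised [min, max] range. A toy sketch of that selection logic (illustrative only, not Tempest's or interop's actual code):

```python
# Illustrative sketch of compute API microversion selection: a capability
# is only testable if the version that introduced it fits in the server's
# advertised [min, max] range. Version strings are "major.minor".

def parse(version):
    major, minor = version.split(".")
    return int(major), int(minor)

def version_to_request(introduced_in, server_min, server_max):
    """Return the microversion a capability test should request,
    or None if the server is too old for the capability."""
    if parse(introduced_in) > parse(server_max):
        return None
    if parse(introduced_in) < parse(server_min):
        return server_min
    return introduced_in

print(version_to_request("2.53", "2.1", "2.79"))  # → 2.53
print(version_to_request("2.87", "2.1", "2.79"))  # → None
```

A real client would send the chosen value in the `OpenStack-API-Version: compute 2.53` header; the point here is only that an interop capability tied to a given microversion cannot be asserted against every cloud that passes the pre-microversion guideline.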
We have most of the microversion tests covered in Tempest, though I am sure it's not 100%, given the scope of Tempest. Still, adding the interop-required tests will not take much time. -gmann > We will have patches coming week. > ThanksPrakash > > Sent from Yahoo Mail on Android > On Fri, May 22, 2020 at 11:38 AM, Arkady.Kanevsky at dell.com wrote: > Prakash, > Why not submit a patch for of guidelines for compute, storage, and openstack for review? > It will show the diffs against previous version. > Thanks, > > From: prakash RAMCHANDRAN > Sent: Friday, May 22, 2020 12:56 PM > To: openstack-discuss at lists.openstack.org > Subject: [all][interop] Please suggest capabilities to deprecate-remove and add to the OpenStack Powered Programs > > [EXTERNAL EMAIL] > Hi all, > > Please help the Interop WG to review and finalize the draft version for the next interop guidelines. > > https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=2020.06.json;h=7bc62929e5576519196dc4a2ea2de0627ab319ca;hb=HEAD > > Link for the DNS add-on: https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/dns.2020.06.json;h=848e6d800f75e1f3aed8ba19263c3d1f86f4dde6;hb=HEAD > > > Link for the Heat add-on: https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/orchestration.2020.06.json;h=6362abbac5756aaec24e25e5a0803f31497360f3;hb=HEAD > > > > This guidelines is intended to cover "Stein" , "Train", "Ussuri" and "Victoria" releases of OpenStack. > > We request that PTLs or Core team members please confirm any updates you may want to include or exclude in link above. > > The Projects currently covered under OpenStack Powered Programs include "Keystone", "Glance", "Nova", "Neutron", "Cinder", "Swift" and add-on programs "Designate", "Heat". > > > A more human-readable view of the capabilities can be found in RefStack: https://refstack.openstack.org/#/guidelines > > > We would like have feedback by next Interop call on Friday.
- Can attend 10 AM PDT call refer etherpad - https://etherpad.opendev.org/p/interop > > > If you have any additional topics to discuss at PTG please add them to https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 > > The conference Bridge will be assigned once we know the number of participants. Please add your names/topics to ether pad interop ptg planning. > > > Optionally reply to this email with your inputs/patches. > > Please Register to attend the Interop WG slot scheduled for Virtual PTG on June 1 6-8 AM PDT / 9-11 AM EST / 14-16 UTC / 9-11 PM BJT > https://www.openstack.org/ptg/ > > > Thanks > for Interop WG > Chair - Prakash Ramchandran > Vice Chair - Mark Voelker > > > > > > > > > > > From balazs.gibizer at est.tech Mon May 25 15:40:41 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 25 May 2020 17:40:41 +0200 Subject: [nova][ptg] Feature Liaison In-Reply-To: <1724c61d79c.e3730188581.5966946061881788712@ghanshyammann.com> References: <1724c61d79c.e3730188581.5966946061881788712@ghanshyammann.com> Message-ID: On Mon, May 25, 2020 at 10:09, Ghanshyam Mann wrote: > ---- On Mon, 25 May 2020 04:31:41 -0500 Sylvain Bauza > wrote ---- > > > > > > On Mon, May 18, 2020 at 5:56 PM Balázs Gibizer > wrote: > > Hi, > > > > [This is a topic from the PTG etherpad [0]. As the PTG time is > > intentionally kept short, let's try to discuss it or even conclude > it > > before the PTG] > > > > Last cycle we introduced the Feature Liaison process [1]. I think > this > > is time to reflect on it. > > Did it helped? > > Do we need to tweak it? > > > > Personally for me it did not help much but I think this is a > fairly low > > cost process so I'm OK to keep it as is. > > > > > > Cool with me. Maybe we could just tell it's optional, so we could > better see who would like to get some mentor for them. > > +1 on optional and keep it for new contributors which can be any time > in the future. 
So at least if anyone asks we can tell this is > how you can get some dedicated Core for your code review/help. I see that those who responded so far are pretty aligned about the future of the Feature Liaison, so I proposed an update for the Victoria spec template [1] to make this optional. Feel free to continue the discussion here or directly in the review. Cheers, gibi [1] https://review.opendev.org/#/c/730638 > > -gmann > > > -Sylvain > > Cheers, > > gibi > > > > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > > [1] https://review.opendev.org/#/c/685857/ > > > > > > > > From balazs.gibizer at est.tech Mon May 25 15:54:51 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 25 May 2020 17:54:51 +0200 Subject: [nova][ptg] bumping compute RPC to 6.0 Message-ID: Hi, [This is a topic from the PTG etherpad [0]. As the PTG time is intentionally kept short, let's try to discuss it or even conclude it before the PTG] Do we want to bump the compute RPC to 6.0 in Victoria? We did not have a full view of what we could gain [2] with such a bump, so Stephen and I searched through nova and collected the things we could eventually clean up [1] if we bump the compute RPC to 6.0 in Victoria. I can work on the RPC 6.0 patch during V, and I think I can also help with the possible cleanups later. Cheers, gibi [0] https://etherpad.opendev.org/p/nova-victoria-ptg [1] https://etherpad.opendev.org/p/compute-rpc-6.0 [2] http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-04-16-16.00.log.html#l-103 From alifshit at redhat.com Mon May 25 16:05:28 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Mon, 25 May 2020 12:05:28 -0400 Subject: [nova][ptg] bumping compute RPC to 6.0 In-Reply-To: References: Message-ID: On Mon, May 25, 2020 at 11:58 AM Balázs Gibizer wrote: > > Hi, > > [This is a topic from the PTG etherpad [0].
As the PTG time is > > intentionally kept short, let's try to discuss it or even conclude it > > before the PTG] > > > > Do we want to bump the compute RPC to 6.0 in Victoria? Like it or not, fast-forward upgrades where one or more releases are skipped are something we have to live with and take into account. What would be the FFWD upgrade implications of bumping RPC to 6.0 in Victoria, specifically for someone upgrading from Train to W or later? > > > > We did not have the full view what we can gain[2] with such bump so > Stephen and I searched through nova and collected the things we could > clean up [1] eventually if we bump the compute RPC to 6.0 in Victoria. > > I can work on the RPC 6.0 patch during V and I think I can also help in > with the possible cleanups later. > > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] https://etherpad.opendev.org/p/compute-rpc-6.0 > [2] > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-04-16-16.00.log.html#l-103 > > > From smooney at redhat.com Mon May 25 16:43:19 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 25 May 2020 17:43:19 +0100 Subject: [all][interop] Please suggest capabilities to deprecate-remove and add to the OpenStack Powered Programs In-Reply-To: <1724c71cac9.12836da8d1306.8306644386233636073@ghanshyammann.com> References: <1625228792.1113226.1590170188086.ref@mail.yahoo.com> <1625228792.1113226.1590170188086@mail.yahoo.com> <943070801.300159.1590248168801@mail.yahoo.com> <1724c71cac9.12836da8d1306.8306644386233636073@ghanshyammann.com> Message-ID: <8472fa0d7f4e2918233ec73ca241e86a6d511dd0.camel@redhat.com> On Mon, 2020-05-25 at 10:27 -0500, Ghanshyam Mann wrote: > ---- On Sat, 23 May 2020 10:36:08 -0500 prakash RAMCHANDRAN wrote ---- > > Already, > > Please note Ghanshtam Maan had submitted a patch for Cinder v2 deprication and removal and we have merged it for > Cinder under > > https://review.opendev.org/#/c/728564/2/next.json > > > > PTLs or core reviewers or anyone
with API expertise can review that for reference like above for compute & storage > based on test failures available in refstack for 2020.06 > > However this message is for community to work on main trunk as required by governance. > > > https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=2020.06.json;h=7bc62929e5576519196dc4a2ea2de0627ab319ca;hb=HEAD > > Hi Prakash, > > For, compute guidelines, most of the new capabilities are done with API change( or new API) with microversion. As you > know, interop does not > have the API version based capabilities yet so I will say to continue with the current compute guidelines. and > continue the discussion on adding > the microversion capabilities. We have most of the microversion test covered in Tempest but I am sure it's not 100% as > per the scope of Tempest. > But adding the interop required tests will not take much time. One thing that does feel odd to me regarding the current definition is that the current compute guideline requires some optional compute features. Specifically, I think it requires resize and some other ops, which prohibits ironic and some container drivers from being considered acceptable for OpenStack Powered Compute. Granted, there are only about 3 mandatory features (https://docs.openstack.org/nova/latest/user/support-matrix.html), so I think we should actually reconsider this in nova internally too. For example, I think reboot could probably be made mandatory at this point, since all in-tree drivers support it and we require both stop and start instance, which, when done back to back, is a reboot.
https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_reboot There are no other features on the support matrix that are actually supported by everything but are not mandatory, though rebuild is close: https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_rebuild Rebuild is basically a delete followed by spawning a new instance using the same ports and volumes but a new root disk image, so it should be generically supportable. Looking at the list, I think the following should be removed from required and moved to advisory:

compute-servers-resize (not required to be a valid virt driver) (prevents ironic from qualifying) https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_resize

compute-servers-rebuild (not required to be a valid virt driver) (should be trivial to add, but prevents powervm and zvm from qualifying) https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_rebuild

compute-volume-attach (not required to be a valid virt driver) (prevents ironic, zvm and some container drivers from qualifying) https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_attach_volume This is currently being verified with test_attach_detach_volume, so the name is also misleading.

Strictly speaking, all of the volume capabilities ("volumes-list-api-versions", "volumes-v3-create-delete", "volumes-v3-snapshot-create-delete", "volumes-v3-get", "volumes-v3-list", "volumes-v3-update", "volumes-v3-copy-image-to-volume", "volumes-v3-clone", "volumes-v3-availability-zones", "volumes-v3-extensions", "volumes-v3-metadata", "volumes-v3-readonly", "volumes-v3-upload") should probably also be advisory, as cinder is not really required for a minimal compute cloud, for example a CI cloud. A minimal compute cloud is nova, placement, neutron, keystone and glance.
Tenant-provisioned volume storage, and then object storage, are certainly the next most important features to add after a basic compute cloud to round out its capabilities, but neither should be required for the minimum definition of os_powered_compute. compute-quotas-get is not deprecated yet, but assuming os-limits is adopted in Victoria, it likely will be this cycle. compute-servers-invalid: having a test to validate invalid requests makes sense, but I'm not sure we should be using an invalid IPv6 address to assert that behavior; e.g. we should not fail this on an IPv4-only cloud, so perhaps the test should be changed for this requirement. Also, the description is incorrect: it is not a basic op, it's a negative test: "description": "Basic server operations in the Compute API". By the way, I might not be using "advisory" correctly, as it's not really defined in that git page, but I am using it to mean recommended but not mandatory in the context above. So each advisory item would be "you should provide X", not "you must provide X", whereas required is the latter. > > -gmann > > > We will have patches coming week. > > ThanksPrakash > > > > Sent from Yahoo Mail on Android > > On Fri, May 22, 2020 at 11:38 AM, Arkady.Kanevsky at dell.com wrote: > > Prakash, > > Why not submit a patch for of guidelines for compute, storage, and openstack for review? > > It will show the diffs against previous version. > > Thanks, > > > > From: prakash RAMCHANDRAN > > Sent: Friday, May 22, 2020 12:56 PM > > To: openstack-discuss at lists.openstack.org > > Subject: [all][interop] Please suggest capabilities to deprecate-remove and add to the OpenStack Powered Programs > > > > [EXTERNAL EMAIL] > > Hi all, > > > > Please help the Interop WG to review and finalize the draft version for the next interop guidelines.
> > > > > https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=2020.06.json;h=7bc62929e5576519196dc4a2ea2de0627ab319ca;hb=HEAD > > > > Link for the DNS add-on: > https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/dns.2020.06.json;h=848e6d800f75e1f3aed8ba19263c3d1f86f4dde6;hb=HEAD > > > > > > > Link for the Heat add-on: > https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=add-ons/orchestration.2020.06.json;h=6362abbac5756aaec24e25e5a0803f31497360f3;hb=HEAD > > > > > > > > > This guidelines is intended to cover "Stein" , "Train", "Ussuri" and "Victoria" releases of OpenStack. > > > > We request that PTLs or Core team members please confirm any updates you may want to include or exclude in link > above. > > > > The Projects currently covered under OpenStack Powered Programs include "Keystone", "Glance", "Nova", "Neutron", > "Cinder", "Swift" and add-on programs "Designate", "Heat". > > > > > > A more human-readable view of the capabilities can be found in RefStack: > https://refstack.openstack.org/#/guidelines > > > > > > We would like have feedback by next Interop call on Friday. - Can attend 10 AM PDT call refer etherpad - > https://etherpad.opendev.org/p/interop > > > > > > If you have any additional topics to discuss at PTG please add them to > https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 > > > > The conference Bridge will be assigned once we know the number of participants. Please add your names/topics to > ether pad interop ptg planning. > > > > > > Optionally reply to this email with your inputs/patches. 
> > > > Please Register to attend the Interop WG slot scheduled for Virtual PTG on June 1 6-8 AM PDT / 9-11 AM EST / > 14-16 UTC / 9-11 PM BJT > > https://www.openstack.org/ptg/ > > > > > > Thanks > > for Interop WG > > Chair - Prakash Ramchandran > > Vice Chair - Mark Voelker > > > > > > > > > > > > > > > > > > > > > > > From laurentfdumont at gmail.com Mon May 25 16:47:27 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 25 May 2020 12:47:27 -0400 Subject: [Nova][Scheduler] Reducing race-conditions and re-scheduling during creation of multiple high-ressources instances or instances with anti-affinity. In-Reply-To: <8cd05e52e1a78716d25fc6df8773d4109ebf4f93.camel@redhat.com> References: <8cd05e52e1a78716d25fc6df8773d4109ebf4f93.camel@redhat.com> Message-ID: So we ended up increasing both max_attempts and host_subset_size and it fixed our issue. Hooray. I think I saw a KB from Red Hat on that exact issue - but I can't find the link anymore... Thank you to Sean and Melanie! :) On Wed, May 20, 2020 at 1:55 PM Sean Mooney wrote: > On Wed, 2020-05-20 at 11:32 -0400, Laurent Dumont wrote: > > Hey Melanie, Sean, > > > > Thank you! That should cover most of our uses cases. Is there any > downside > > to a "subset_size" that would be larger than the actual number of > computes? > > We have some env with 4 computes, and others with 100+. > it will basicaly make the weigher irrelevent. > when you use subset_size we basically select randomly form the first > "subset_size" hosts in the list > of host returned so so if subset_size is equal or large then the total > number of host it will just be a random > selection from the hosts that pass the filter/placment query. > > so you want subset_size to be proportionally small (an order of mangniture > or two smaller) compareed to the number of > avlaible hosts and proptionally equivelent (within 1 order of mangniture > or so) of your typeical concurrent instnace > multi create request. 
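Sean's description of what `host_subset_size` does can be sketched in a few lines (illustrative only, not nova's actual scheduler code): the weighers produce a best-first list, and the scheduler then picks randomly among the top N entries, so concurrent requests are less likely to all claim the same "best" host.

```python
import random

def select_host(weighed_hosts, subset_size=1, rng=random):
    """Pick a host the way host_subset_size is described in the thread
    (sketch): choose randomly among the top `subset_size` entries of the
    best-first weighed list."""
    pool = weighed_hosts[:subset_size]
    return rng.choice(pool)

hosts = ["cmp-01", "cmp-02", "cmp-03", "cmp-04"]  # best-first order
# subset_size=1 keeps the classic behaviour: always the single best host.
assert select_host(hosts, subset_size=1) == "cmp-01"
# subset_size >= len(hosts): any host that passed the filters can win,
# so the weighing becomes irrelevant, exactly as noted above.
assert select_host(hosts, subset_size=10) in hosts
```

This also makes the sizing advice concrete: the subset must be large enough that concurrent multi-create requests fan out, but small relative to the cloud so the weighers still matter.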
> > you want it to be small relitive to the could so the that weigher remian > statistally relevent > and similar to the size of the multi create to make the proablity of the > same host being selected for an instance low. > > > > > > Laurent > > > > On Tue, May 19, 2020 at 7:33 PM Sean Mooney wrote: > > > > > On Tue, 2020-05-19 at 18:23 -0400, Laurent Dumont wrote: > > > > Hey everyone, > > > > > > > > We are seeing a pretty consistent issue with Nova/Scheduler where > some > > > > instances creation are hitting the "max_attempts" limits of the > > > > > > scheduler. > > > Well the answer you are not going to like is nova is working as > expected > > > and > > > we expect this to happen when you use multi > > > create. placment help reduce the issue but > > > there are some fundemtal issue with how we do retries that make this > hard > > > to > > > fix. > > > > > > im not going to go into the detail right now as its not helpful but > > > we have had quryies about this form customer in the past so fortunetly > i > > > do have some > > > recomendation i can share > > > https://bugzilla.redhat.com/show_bug.cgi?id=1759545#c8 > > > well not that i have made the comment public i can :) > > > > > > > > > > > Env : Red Hat Queens > > > > Computes : All the same hardware and specs (even weight throughout) > > > > Nova : Three nova-schedulers > > > > > > > > This can be due to two different factors (from what we've seen) : > > > > > > > > - Anti-affinity rules are getting triggered during the creation > (two > > > > claims are done within a few milliseconds on the same compute) > which > > > > > > counts > > > > as a retry (we've seen this when spawning 40+ VMs in a single > server > > > > > > group > > > > with maybe 50-55 computes - or even less 14 instances on 20ish > > > > > > computes). 
> > > yep the only way to completely avoid this issue on queens and > depending on > > > what fature you are using on master > > > is to boot the vms serially waiting for each vm to sapwn. > > > > > > > - We've seen another case where MEMORY_MB becomes an issue (we are > > > > spinning new instances in the same host-aggregate where VMs are > > > > > > already > > > > running. Only one VM can run per compute but there are no > > > > > > anti-affinity > > > > groups to force that between the two deployments. The ressource > > > > requirements prevent anything else from getting spun on those). > > > > - The logs look like the following : > > > > - Unable to submit allocation for instance > > > > 659ef90e-33b8-42a9-9c8e-fac87278240d (409 {"errors": > [{"status": > > > > > > 409, > > > > "request_id": "req-429c2734-2f2d-4d2d-82d1-fa4ebe12c991", > > > > "detail": "There > > > > was a conflict when trying to complete your request.\n\n Unable > > > > to allocate > > > > inventory: Unable to create allocation for 'MEMORY_MB' on > > > > resource provider > > > > '35b78f3b-8e59-4f2f-8cad-eaf116b7c1c7'. The requested amount > would > > > > > > exceed > > > > the capacity. ", "title": "Conflict"}]}) / Setting instance to > > > > > > ERROR > > > > state.: MaxRetriesExceeded: Exceeded maximum number of retries. > > > > > > Exhausted > > > > all hosts available for retrying build failures for instance > > > > f6d06cca-e9b5-4199-8220-e3ff2e5c2a41. > > > > > > in this case you are racing with ohter instance for the that host. > > > basically when doing a multi create if any vm fails to boot it will go > to > > > the next > > > host in the alternate host list and try to create an allcoation againt > > > ther first host in that list. > > > > > > however when the alternate host list was created none of the vms had > been > > > sapwned yet. 
> > > by the time the rety arrive at the conductor one of the other vms could > > > have been schduled to that host either > > > as a first chose or because that other vm retried first and won the > race. > > > > > > when this happens we then try the next host in the list wehre we can > race > > > again. > > > > > > since the retries happen at the cell conductor level without going > back to > > > the schduler again we are not going to check > > > the current status of the host using the anti affintiy filter or anti > > > affintiy weigher during the retry so while it was > > > vaild intially i can be invalid when we try to use the alternate host. > > > > > > the only way to fix that is to have retrys not use alternate hosts and > > > instead have each retry return the full scudling > > > process so that it can make a desicion based on the current state of > the > > > server not the old view. > > > > - I do believe we are hitting this issue as well : > > > > https://bugs.launchpad.net/nova/+bug/1837955 > > > > - In all the cases where the Stacks creation failed, one > instance > > > > > > was > > > > left in the Build state for 120 minutes and then finally > failed. > > > > > > > > From what we can gather, there are a couple of parameters that be be > > > > tweaked. > > > > > > > > 1. host_subset_size (Return X number of host instead of 1?) > > > > 2. randomize_allocation_candidates (Not 100% on this one) > > > > 3. shuffle_best_same_weighed_hosts (Return a random of X number of > > > > computes if they are all equal (instance of the same list for all > > > > scheduling requests)) > > > > 4. max_attempts (how many times the Scheduler will try to fit the > > > > instance somewhere) > > > > > > > > We've already raised "max_attempts" to 5 from the default of 3 and > will > > > > raise it further. That said, what are the recommendations for the > rest of > > > > the settings? 
We are not exactly concerned with stacking vs spreading > > > > > > (but > > > > that's always nice) of the instances but rather making sure > deployments > > > > fail because of real reasons and not just because Nova/Scheduler > keeps > > > > stepping on it's own toes. > > > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1759545#c8 > > > has some suggestions > > > but tl;dr it should be safe to set max_attempts=10 if you set > > > subset_size=15 shuffle_best_same_weighed_hosts=true > > > that said i really would not put max_attempts over 10, max_attempts 5 > > > should be more then enough. > > > subset_size=15 is a little bit arbiraty. the best value will depend on > the > > > type ical size of your deplopyment and the > > > size of your cloud. randomize_allocation_candidates help if and only if > > > you have limite the number of allocation > > > candiates retruned by placment to subset of your cloud hosts. > > > > > > e.g. if you set the placemment allcation candiate limit to 10 on for a > > > cloud with 100 host then you should set > > > randomize_allocation_candidates=true so that you do not get a bias that > > > will pack host baded on the natural db order. > > > the default limit for alloction candiates is 1000 so unless you have > more > > > then 1000 hosts or have changed that limit you > > > do not need to set this. > > > > > > > > > > > Thanks! > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon May 25 17:53:59 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 25 May 2020 18:53:59 +0100 Subject: [nova][ptg] bumping compute RPC to 6.0 In-Reply-To: References: Message-ID: On Mon, 2020-05-25 at 12:05 -0400, Artom Lifshitz wrote: > On Mon, May 25, 2020 at 11:58 AM Balázs Gibizer wrote: > > > > Hi, > > > > [This is a topic from the PTG etherpad [0]. 
As the PTG time is > > intentionally kept short, let's try to discuss it or even conclude it > > before the PTG] > > > > Do we want to bump the compute RPC to 6.0 in Victoria? > > Like it or not, fast-forward upgrades where one or more releases are > skipped is something we have to live with and take into account. We should take them into account, but that does not mean we should bend over backwards to make them work either. Depending on whether you need a functional control plane during the FFU, the impact will be different. If a functional control plane is required, people will have to stop and bring all hosts up to 6.0 before unpinning and going to 6.X. I think that is reasonable, even if it means FFU has to be done in two steps. > What > would be the FFWD upgrade implications of bumping RPC to 6.0 in > Victoria, specifically for someone upgrading from Train to W or later? I'm trying to remember off the top of my head, but when we bumped to 5.0, 5.0 was identical to 4.latest; it was reserved for compatibility on upgrades. You had to move all hosts to 5.0 before you could unpin and move the control plane to 5.X, to preserve the ability for the controllers to talk to the compute nodes. This is the text for 4.0: ... Kilo supports messaging version 3.40. So, any changes to existing methods in 3.x after that point should be done so that they can handle the version_cap being set to 3.40 ... Version 4.0 is equivalent to 3.40. Kilo sends version 4.0 by default, can accept 3.x calls from Juno nodes, and can be pinned to 3.x for Juno compatibility. All new changes should go against 4.x. If we followed the pattern in https://github.com/openstack/nova/commit/ebfa09fa197a1d88d1b3ab1f308232c3df7dc009 and https://github.com/openstack/nova/commit/5309120d7954da0b5ea67b0baf7c8d7ca24b8480, Victoria would send 6.0 by default and retain support for 5.11, but we would drop compatibility for 5.x in 6.1. I don't think it's reasonable to require infinite FFU without any stop and an online control plane.
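The pin/unpin rule Sean describes can be sketched as a tiny compatibility check, a rough approximation of what oslo.messaging's `version_cap` achieves rather than the real implementation:

```python
# Rough sketch: a pinned client may only send messages whose version has
# the same major number as the cap and a minor number no greater than it.
# (The special bridge release, e.g. 6.0 being identical to 5.latest, is
# what lets every host reach the new major before the cap is lifted.)

def parse(version):
    major, minor = version.split(".")
    return int(major), int(minor)

def can_send(version, version_cap):
    if version_cap is None:  # unpinned: send the current default
        return True
    v_major, v_minor = parse(version)
    cap_major, cap_minor = parse(version_cap)
    return v_major == cap_major and v_minor <= cap_minor

# Rolling upgrade: the conductor stays pinned to 5.11 until every compute
# understands 6.0, and only then is the cap lifted.
assert can_send("5.11", "5.11")
assert not can_send("6.0", "5.11")
assert can_send("6.0", None)
```

This is also why the "stop points" exist: once compatibility for the previous major is dropped, a still-pinned control plane can no longer talk to computes that only speak the old version.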
If you were upgrading from Icehouse to Ussuri today you would have to stop at Kilo (4.0) and Queens (5.0) due to RPC compatibility, if you want the API/control plane to be able to communicate with the compute nodes during the upgrade. If we bump to 6.0 in Victoria, that would become the next point where we have to stop. By stop I mean bring all the compute nodes to 6.0 before upgrading the control plane past 6.0. That is 4-5 releases of RPC compatibility between major versions, which is reasonable. If you don't require any control plane during the FFU, you could stop all nova services on the controllers and compute nodes, leaving the VMs running, and then jump all the nova code to W. You would have to run any DB migrations that were required, but provided we had no online migration that required the compute agent to start, we should not have an FFU impact. If we have an online update that requires the compute agent to start, then provided we don't drop that compatibility code before W we should also be fine, as we can run it there once everything is on 6.x. Keeping reshapes or online updates until the release you finish on is required for all FFUs, unless you stop and run the code in the middle. So there is an impact on FFU, but I don't necessarily think there is an impact on how OOO does FFU, since the control plane is down during the FFU. I think OSA and Kolla don't actually aim for FFU but instead aim for zero-downtime upgrades. Zero-downtime upgrades at least was kolla-ansible's goal at one point, so kolla-ansible would have to stop at the RPC boundaries to achieve skip-level upgrades without control plane downtime. I may be totally wrong about this, but I think that is how the RPC compatibility is intended to work; hopefully Dan will correct me. > > > > > We did not have the full view what we can gain[2] with such bump so > > Stephen and I searched through nova and collected the things we could > > clean up [1] eventually if we bump the compute RPC to 6.0 in Victoria.
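[Editorial note: the cap semantics discussed in this thread — a pinned sender may only emit message versions at or below the cap — can be modelled in a few lines. This is a toy model for illustration, not nova's actual implementation:]

```python
# Toy model (not nova source) of an RPC version cap: a client pinned to
# an older cap may only send message versions <= the cap.
def can_send_version(version: str, cap: str) -> bool:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) <= parse(cap)

# A control plane pinned to 5.11 can still talk 5.x to old computes...
assert can_send_version("5.10", "5.11")
# ...but must not use 6.0 calls until every compute is upgraded.
assert not can_send_version("6.0", "5.11")
```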
> > > > I can work on the RPC 6.0 patch during V and I think I can also help > > with the possible cleanups later. > > > > Cheers, > > gibi > > > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > > [1] https://etherpad.opendev.org/p/compute-rpc-6.0 > > [2] > > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-04-16-16.00.log.html#l-103 > > > > > > > > From sbaker at redhat.com Mon May 25 23:32:35 2020 From: sbaker at redhat.com (Steve Baker) Date: Tue, 26 May 2020 11:32:35 +1200 Subject: [tripleo] Deprecating tripleo-heat-templates firstboot Message-ID: <1e7132dd-599c-8af6-57ce-ca1a44126b15@redhat.com> Soon nova will be switched off by default on the undercloud and all overcloud deployments will effectively be deployed-server based (either provisioned manually or via the baremetal provision command). This means that the docs for running firstboot scripts[1] will no longer work, and neither will our collection of firstboot scripts[2]. In this email I'm going to propose what we could do about this situation, and if there are still unresolved issues by the PTG it might be worth having a short session on it. The baremetal provisioning implementation already uses cloud-init cloud-config internally for user creation and key injection[3], so I'm going to propose an enhancement to the baremetal provisioning yaml format so that custom cloud-config instructions can be included either inline or as a file path. I think it is worth going through each firstboot script[2] and deciding what its fate should be (other than being deprecated in Ussuri): https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/conntectx3_streering.yaml This has no parameters, so it could be converted to a standalone cloud-config file, but should it? Can this be achieved with kernel args? Does it require a reboot anyway, and so can be done with extraconfig?
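[Editorial note: as a rough illustration of the inline cloud-config proposal above, the provisioning YAML could gain something like the following. The cloud_config and cloud_config_file keys are hypothetical — the actual schema was still to be designed at the time of this email:]

```yaml
# Hypothetical sketch of extending the "openstack overcloud node provision"
# YAML format; the cloud_config / cloud_config_file keys are assumptions,
# not a merged schema.
- name: Compute
  count: 3
  defaults:
    # inline cloud-init cloud-config...
    cloud_config:
      write_files:
        - path: /etc/example.conf
          content: "configured at first boot\n"
    # ...or, alternatively, a path to a cloud-config file:
    # cloud_config_file: /home/stack/compute-cloud-config.yaml
```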
https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/os-net-config-mappings.yaml I'm not sure why this is implemented as first boot, it seems to consume the parameter NetConfigDataLookup and transforms it to the format os-net-config needs for the file /etc/os-net-config/mapping.yaml. It looks like this functionality should be moved to where os-net-config is actually invoked, and the NetConfigDataLookup parameter should be officially supported. https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_dev_rsync.yaml I suggest deleting this and including a cloud-config version in the baremetal provisioning docs. https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_heat_admin.yaml Delete this, there is already an abstraction for this built into the baremetal provisioning format[4] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_root_password.yaml Delete this and include it as an example in the baremetal provisioning docs. https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_timesync.yaml Maybe this could be converted to an extraconfig/all_nodes script[5], but it would be better if this sort of thing could be implemented as an ansible role or playbook, are there any plans for an extraconfig mechanism which uses plain ansible semantics? cheers [1] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/extra_config.html#firstboot-extra-configuration [2] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot [3] https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L286 [4] https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L132
https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L145 [5] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/all_nodes -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Tue May 26 02:41:43 2020 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 26 May 2020 11:41:43 +0900 Subject: [Horizon] Failing to configure the Instance Launch dialog Message-ID: <1bdaa16f-43e2-baaa-c0f6-5ea999113456@gmail.com> On an Ussuri Devstack, I would like to launch instances with ephemeral storage from Horizon and remove the volume storage switch from the instance launch dialog. Prior to Train, the dialog was configured with the LAUNCH_INSTANCE_DEFAULTS dictionary in /local_settings.py/. This is my dictionary; I set /create_volume /to /False /and /hide_create_volume /to /True/: LAUNCH_INSTANCE_DEFAULTS = {     'config_drive': False,     'create_volume': False,     'hide_create_volume': True,     'disable_image': False,     'disable_instance_snapshot': False,     'disable_volume': False,     'disable_volume_snapshot': False,     'enable_scheduler_hints': True, } Starting with Train, the dictionary is in /.../openstack_dashboard/defaults.py/, but the changes I made there have no effect. Adding the dictionary to /local_settings.py/ has no effect either. To test whether /defaults.py/ is used at all, I added a syntactically incorrect line, and indeed, the syntax error is logged in /horizon_error.log/. I wonder what I am doing wrong. I am convinced it worked like this on Stein. Any ideas? Bernd Bausch -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From reza.b2008 at gmail.com Tue May 26 07:00:24 2020 From: reza.b2008 at gmail.com (Reza Bakhshayeshi) Date: Tue, 26 May 2020 11:30:24 +0430 Subject: [TripleO] External network on compute node In-Reply-To: <639d0b4c7bd27fa125809f9f384db680cae22c4a.camel@redhat.com> References: <98236409f2ea0a5f0a7bffd3eb1054ef5399ba69.camel@redhat.com> <639d0b4c7bd27fa125809f9f384db680cae22c4a.camel@redhat.com> Message-ID: Sorry for the late reply. Could you please take a look at the output and see if it's helpful? http://paste.openstack.org/show/793968/ My external network is working fine on the controller nodes, but the deployment can't assign an IP to the compute nodes. (I don't have any VLAN configured in the backend environment.) I still have a problem with the instances' internet access; do you think this could be related? Regards, Reza On Tue, 12 May 2020 at 12:24, Harald Jensås wrote: > On Sun, 2020-05-10 at 22:34 +0430, Reza Bakhshayeshi wrote: > > Hi Harald, Thanks for your explanation. > > > > ' /usr/share/openstack-tripleo-heat-templates/ ' was just happened > > during copy-pasting here :) > > > > I exported the overcloud plan, and it was exactly same as what I sent > > before. > > > > Ok, I was worried the jinja2 rendering would produce different results > in the plan, in case you had manually edited these files: > ~/openstack-tripleo-heat-templates/environments/network-isolation.yaml > ~/openstack-tripleo-heat-templates/environments/network-environment.yaml > > > I redeployed everything and it didn't help either. > > > > What are the problems of my network-isolation.yaml and network- > > environment.yaml files in your opinion? > > I'm not quite sure what is wrong, but the fact that you got > 'ip_netmask': '' is an indication that some resource might be a Noop > resource or a custom resource without validations? While it should > actually be a port/network resource. > > The output of 'openstack stack environment show overcloud' might help.
> (NOTE: sanitize the output of that removing sensitive data before > posting it to a public place ...) > > > I have to add that in this environment I don't know why the external > > network doesn't provide Internet for VMs. > > But everything else works fine. > > > > I don't have any Vlan configured in my environment and I'm planning > > to only have flat and geneve networks, and having external network > > for every compute node so I can ignore provisioning network > > bottleneck and spof. > > > > Regards, > > Reza > > > > On Wed, 6 May 2020 at 17:10, Harald Jensås > > wrote: > > > On Wed, 2020-05-06 at 14:00 +0430, Reza Bakhshayeshi wrote: > > > > here is my deploy command: > > > > > > > > openstack overcloud deploy \ > > > > --control-flavor control \ > > > > --compute-flavor compute \ > > > > --templates ~/openstack-tripleo-heat-templates \ > > > > -r /home/stack/roles_data.yaml \ > > > > -e /home/stack/containers-prepare-parameter.yaml \ > > > > -e environment.yaml \ > > > > -e /usr/share/openstack-tripleo-heat- > > > > templates/environments/services/octavia.yaml \ > > > > > > This is not related, but: > > > Why use '/usr/share/openstack-tripleo-heat-templates/' and not > > > '~/openstack-tripleo-heat-templates/' here? > > > > > > > -e ~/openstack-tripleo-heat- > > > templates/environments/services/neutron- > > > > ovn-dvr-ha.yaml \ > > > > -e ~/openstack-tripleo-heat-templates/environments/docker-ha.yaml > > > \ > > > > -e ~/openstack-tripleo-heat-templates/environments/network- > > > > isolation.yaml \ > > > > -e ~/openstack-tripleo-heat-templates/environments/network- > > > > environment.yaml \ > > > > > > Hm, I'm not sure network-isolation.yaml and network- > > > environment.yaml > > > contains what you expect. Can you do a plan export? > > > > > > openstack overcloud plan export --output-file oc-plan.tar.gz > > > overcloud > > > > > > Then have a look at `environments/network-isolation.yaml` and > > > `environments/network-environment.yaml` in the plan? 
> > > > > > I think you may want to copy these two files out of the templates > > > tree > > > and use the out of tree copies instead. > > > > > > > --timeout 360 \ > > > > --ntp-server time.google.com -vvv > > > > > > > > network-environment.yaml: > > > > http://paste.openstack.org/show/793179/ > > > > > > > > network-isolation.yaml: > > > > http://paste.openstack.org/show/793181/ > > > > > > > > compute-dvr.yaml > > > > http://paste.openstack.org/show/793183/ > > > > > > > > I didn't modify network_data.yaml > > > > > > > > > > > > > > > > > -- > > > Harald > > > > > > > On Wed, 6 May 2020 at 05:27, Harald Jensås > > > > wrote: > > > > > On Tue, 2020-05-05 at 23:25 +0430, Reza Bakhshayeshi wrote: > > > > > > Hi all. > > > > > > The default way of compute node for accessing Internet if > > > through > > > > > > undercloud. > > > > > > I'm going to assign an IP from External network to each > > > compute > > > > > node > > > > > > with default route. > > > > > > But the deployment can't assign an IP to br-ex and fails > > > with: > > > > > > > > > > > > " raise AddrFormatError('invalid IPNetwork > > > %s' > > > > > % > > > > > > addr)", > > > > > > "netaddr.core.AddrFormatError: invalid > > > IPNetwork > > > > > ", > > > > > > > > > > > > Actually 'ip_netmask': '' is empty during deployment for > > > compute > > > > > > nodes. 
> > > > > > I've added external network to compute node role: > > > > > > External: > > > > > > subnet: external_subnet > > > > > > > > > > > > and for network interface: > > > > > > - type: ovs_bridge > > > > > > name: bridge_name > > > > > > mtu: > > > > > > get_param: ExternalMtu > > > > > > dns_servers: > > > > > > get_param: DnsServers > > > > > > use_dhcp: false > > > > > > addresses: > > > > > > - ip_netmask: > > > > > > get_param: ExternalIpSubnet > > > > > > routes: > > > > > > list_concat_unique: > > > > > > - get_param: ExternalInterfaceRoutes > > > > > > - - default: true > > > > > > next_hop: > > > > > > get_param: > > > > > ExternalInterfaceDefaultRoute > > > > > > members: > > > > > > - type: interface > > > > > > name: nic3 > > > > > > mtu: > > > > > > get_param: ExternalMtu > > > > > > use_dhcp: false > > > > > > primary: true > > > > > > > > > > > > Any suggestion would be grateful. > > > > > > Regards, > > > > > > Reza > > > > > > > > > > > > > > > > I think we need more information to see what the issue is. > > > > > - your deploy command? > > > > > - content of network_data.yaml used (unless the default) > > > > > - environment files related to network-isolation, network- > > > > > environment, > > > > > network-isolation? > > > > > > > > > > > > > > > -- > > > > > Harald > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue May 26 08:35:44 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 26 May 2020 10:35:44 +0200 Subject: [nova][neutron][ptg] How to increase the minimum bandwidth guarantee of a running instance In-Reply-To: References: <20200519195510.chfuo5byodcrooaj@skaplons-mac> Message-ID: On Tue, May 19, 2020 at 23:48, Sean Mooney wrote: > On Tue, 2020-05-19 at 21:55 +0200, Slawek Kaplonski wrote: >> Hi, >> >> Thx for starting this thread. >> I can share some thoughts from the Neutron point of view. 
>> >> On Tue, May 19, 2020 at 04:08:18PM +0200, Balázs >> Gibizer wrote: >> > Hi, >> > >> > [This is a topic from the PTG etherpad [0]. As the PTG time is >> intentionally >> > kept short, let's try to discuss it or even conclude it before >> the PTG] >> > >> > As a next step in the minimum bandwidth QoS support I would like >> to solve >> > the use case where a running instance has some ports with minimum >> bandwidth >> > but then user wants to change (e.g. increase) the minimum >> bandwidth used by >> > the instance. >> > >> > I see two generic ways to solve the use case: >> > >> > Option A - interface attach >> > --------------------------- >> > >> > Attach a new port with minimum bandwidth to the instance to >> increase the >> > instance's overall bandwidth guarantee. >> > >> > This only impacts Nova's interface attach code path: >> > 1) The interface attach code path needs to read the port's >> resource request >> > 2) Call Placement GET /allocation_candidates?in_tree=> of the >> > instance> >> > 3a) If placement returns candidates then select one and modify >> the current >> > allocation of the instance accordingly and continue the existing >> interface >> > attach code path. >> > 3b) If placement returns no candidates then there is no free >> resource left >> > on the instance's current host to resize the allocation locally. > So currently we don't support attaching a port with a resource request. > If we were to do that I would prefer to make it more generic, e.g. > also support attaching SR-IOV devices. For me supporting interface attach with resource request is a different feature from supporting interface attach with vnic_type direct or direct_physical. However supporting increasing minimum bandwidth of an instance by attaching new SRIOV ports with bigger qos rule would require both features to be implemented. So yes at the end I would need both.
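[Editorial note: step 2 of Option A above amounts to a placement query restricted to the instance's current host tree. A rough sketch of building that request follows — the endpoint and provider UUID are placeholders, and a real client would send the request through an authenticated keystone session:]

```python
# Sketch of the Option A placement query. Endpoint and UUID below are
# placeholders; NET_BW_EGR_KILOBIT_PER_SEC is a standard placement
# resource class for egress bandwidth.
from urllib.parse import urlencode

def allocation_candidates_url(endpoint, root_provider_uuid, resources):
    query = urlencode({
        # restrict candidates to the instance's current compute tree
        "in_tree": root_provider_uuid,
        "resources": ",".join(f"{rc}:{amt}" for rc, amt in resources.items()),
    })
    return f"{endpoint}/allocation_candidates?{query}"

url = allocation_candidates_url(
    "http://placement.example/v1",             # placeholder endpoint
    "e2c4b87c-0000-0000-0000-000000000000",    # placeholder compute RP uuid
    {"NET_BW_EGR_KILOBIT_PER_SEC": 10000},
)
print(url)
```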
> > I don't think we should ever support this for the use case of changing > QoS policies or bandwidth allocations, > but I think this is a good feature in its own right. >> > >> > >> > Option B - QoS rule update >> > -------------------------- >> > >> > Allow changing the minimum bandwidth guarantee of a port that is >> already >> > bound to the instance. >> > >> > Today Neutron rejects such QoS rule update. If we want to support >> such >> > update then: >> > * either Neutron should call placement allocation_candidates API >> and the >> > update the instance's allocation. Similarly what Nova does in >> Option A. >> > * or Neutron should tell Nova that the resource request of the >> port has been >> > changed and then Nova needs to call Placement and update >> instance's >> > allocation. >> >> In this case, if You update QoS rule, don't forget that policy with >> this rule >> can be used by many ports already. So we will need to find all of >> them and >> call placement for each. >> What if that will be fine for some ports but not for all? > I think if we went with a QoS rule update we would not actually modify > the rule itself; > that would break too many things. Instead we would change the QoS rule > that is applied to the port. > > E.g. if you have a 1GBps rule and a 10GBps rule then we could support > swapping between the rules, > but we should not support changing the 1GBps rule to a 2GBps rule. > > Neutron should ideally do the placement check and allocation update > as part of the QoS rule update > API action and raise an exception if it could not. >> >> > >> > >> > The Option A and Option B are not mutually exclusive but still I >> would like >> > to see what is the preference of the community. Which direction >> should we >> > move forward? >> >> There is also a 3rd possible option, very similar to Option B, which >> is change of >> the QoS policy for the port.
It's basically almost the same as >> Option B, but >> that way You have always only one port to update (unless it's not >> policy >> associated with network). So because of that reason, maybe a bit >> easier to do. > > Yes, that is what I was suggesting above, and it's one of the options we > discussed when first > designing the minimum bandwidth policy. This I think is the optimal > solution, and I don't think we should do > option A or B, although A could be done as a separate feature, just not > as a way we recommend to update QoS policies. > >> >> > >> > >> > Both options have the limitation that if the instance's current >> host does >> > not have enough free resources for the requested change then Nova >> will not >> > do a full scheduling and move the instance to another host where >> resource is >> > available. This seems a hard problem to me. > I honestly don't think it is. We considered this during the design of > the feature, with the > intent of one day supporting it; option C was how I always assumed it > would work. > Supporting attach and detach for ports or other things with resource > requests is a separate topic, > as it applies to GPU hotplug, SR-IOV ports and Cyborg, so I would ignore > that for now and focus > on what is basically a QoS resize action where we are swapping between > predefined QoS policies. >> > >> > Do you have any idea how can we remove / ease this limitation >> without >> > boiling the ocean? >> > >> > For example: Does it make sense to implement a bandwidth weigher >> in the >> > scheduler so instances can be spread by free bandwidth during >> creation? > We discussed this in the past briefly. I always believed that was a > good idea, but it would require the allocation > candidates to be passed to the weigher, along with the provider summaries. We > have other use cases that could benefit from that > too, but I think in the past that was seen as too much work when we did > not even have the basic support working yet.
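[Editorial note: the bandwidth weigher question quoted above reduces to preferring the host with the most unallocated bandwidth. A toy illustration of that criterion only — nova weighers have a different interface:]

```python
# Toy illustration (not nova's weigher API): spreading by free bandwidth
# means the host with the most unallocated capacity wins.
def pick_host_by_free_bandwidth(hosts):
    """hosts maps a host name to (total_kbps, allocated_kbps)."""
    return max(hosts, key=lambda name: hosts[name][0] - hosts[name][1])

hosts = {
    "compute-1": (10_000_000, 9_000_000),  # 1 Gbps free
    "compute-2": (10_000_000, 2_000_000),  # 8 Gbps free
}
assert pick_host_by_free_bandwidth(hosts) == "compute-2"
```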
> Now I think it would be a reasonable next step, and as I said we will > need the ability to weigh based on allocation > candidates in the future for other features too, so this might be a > nice time to introduce that. >> > >> > >> > Cheers, >> > gibi >> > >> > >> > [0] https://etherpad.opendev.org/p/nova-victoria-ptg >> > >> > >> > >> >> > From rdopiera at redhat.com Tue May 26 10:28:25 2020 From: rdopiera at redhat.com (Radomir Dopieralski) Date: Tue, 26 May 2020 12:28:25 +0200 Subject: [Horizon] Failing to configure the Instance Launch dialog In-Reply-To: <1bdaa16f-43e2-baaa-c0f6-5ea999113456@gmail.com> References: <1bdaa16f-43e2-baaa-c0f6-5ea999113456@gmail.com> Message-ID: You shouldn't be editing defaults.py, just put your settings in local_settings.py as normal. Looking at the code, the only two options that are currently used in LAUNCH_INSTANCE_DEFAULTS are "config_drive" and "enable_scheduler_hints". On Tue, May 26, 2020 at 4:49 AM Bernd Bausch wrote: > On an Ussuri Devstack, I would like to launch instances with ephemeral > storage from Horizon and remove the volume storage switch from the instance > launch dialog. > > Prior to Train, the dialog was configured with the > LAUNCH_INSTANCE_DEFAULTS dictionary in *local_settings.py*. This is my > dictionary; I set *create_volume *to *False *and *hide_create_volume *to > *True*: > > LAUNCH_INSTANCE_DEFAULTS = { > 'config_drive': False, > 'create_volume': False, > 'hide_create_volume': True, > 'disable_image': False, > 'disable_instance_snapshot': False, > 'disable_volume': False, > 'disable_volume_snapshot': False, > 'enable_scheduler_hints': True, > } > > Starting with Train, the dictionary is in > *.../openstack_dashboard/defaults.py*, but the changes I made there have > no effect. Adding the dictionary to *local_settings.py* has no effect > either. > > To test whether *defaults.py* is used at all, I added a syntactically > incorrect line, and indeed, the syntax error is logged in > *horizon_error.log*.
> > I wonder what I am doing wrong. I am convinced it worked like this on > Stein. Any ideas? > > Bernd Bausch > > > -- Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdopiera at redhat.com Tue May 26 10:32:24 2020 From: rdopiera at redhat.com (Radomir Dopieralski) Date: Tue, 26 May 2020 12:32:24 +0200 Subject: [Horizon] Failing to configure the Instance Launch dialog In-Reply-To: References: <1bdaa16f-43e2-baaa-c0f6-5ea999113456@gmail.com> Message-ID: Sorry, it seems that the other options are used as well, just in a more convoluted way that is not so easy to find. On Tue, May 26, 2020 at 12:28 PM Radomir Dopieralski wrote: > You shouldn't be editing defaults.py, just put your settings in > local_settings.py as normal. > Looking at the code, the only two options that are currently used in > LAUNCH_INSTANCE_DEFAULTS are "config_drive" and "enable_scheduler_hints". > > On Tue, May 26, 2020 at 4:49 AM Bernd Bausch > wrote: > >> On an Ussuri Devstack, I would like to launch instances with ephemeral >> storage from Horizon and remove the volume storage switch from the instance >> launch dialog. >> >> Prior to Train, the dialog was configured with the >> LAUNCH_INSTANCE_DEFAULTS dictionary in *local_settings.py*. This is my >> dictionary; I set *create_volume *to *False *and *hide_create_volume *to >> *True*: >> >> LAUNCH_INSTANCE_DEFAULTS = { >> 'config_drive': False, >> 'create_volume': False, >> 'hide_create_volume': True, >> 'disable_image': False, >> 'disable_instance_snapshot': False, >> 'disable_volume': False, >> 'disable_volume_snapshot': False, >> 'enable_scheduler_hints': True, >> } >> >> Starting with Train, the dictionary is in >> *.../openstack_dashboard/defaults.py*, but the changes I made there have >> no effect. Adding the dictionary to *local_settings.py* has no effect >> either. 
>> >> To test whether *defaults.py* is used at all, I added a syntactically >> incorrect line, and indeed, the syntax error is logged in >> *horizon_error.log*. >> >> I wonder what I am doing wrong. I am convinced it worked like this on >> Stein. Any ideas? >> >> Bernd Bausch >> >> >> > > -- > Radomir Dopieralski > -- Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Tue May 26 10:49:02 2020 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 26 May 2020 19:49:02 +0900 Subject: [Horizon] Failing to configure the Instance Launch dialog In-Reply-To: References: <1bdaa16f-43e2-baaa-c0f6-5ea999113456@gmail.com> Message-ID: <6f9e3bc4-1118-7927-8815-5b9411fd102a@gmail.com> Thanks Radomir. Actually, the documentation [1] makes this clear. However, my changes are ignored. The Create New Volume switch is still there. I feel tricked. Time to analyze the source :/ [1] https://docs.openstack.org/horizon/latest/configuration/settings.html On 5/26/2020 7:28 PM, Radomir Dopieralski wrote: > You shouldn't be editing defaults.py, just put your settings in > local_settings.py as normal. > Looking at the code, the only two options that are currently used in > LAUNCH_INSTANCE_DEFAULTS are "config_drive" and "enable_scheduler_hints". > > On Tue, May 26, 2020 at 4:49 AM Bernd Bausch > wrote: > > On an Ussuri Devstack, I would like to launch instances with > ephemeral storage from Horizon and remove the volume storage > switch from the instance launch dialog. > > Prior to Train, the dialog was configured with the > LAUNCH_INSTANCE_DEFAULTS dictionary in /local_settings.py/. 
This > is my dictionary; I set /create_volume /to /False /and > /hide_create_volume /to /True/: > > LAUNCH_INSTANCE_DEFAULTS = { >     'config_drive': False, >     'create_volume': False, >     'hide_create_volume': True, >     'disable_image': False, >     'disable_instance_snapshot': False, >     'disable_volume': False, >     'disable_volume_snapshot': False, >     'enable_scheduler_hints': True, > } > > Starting with Train, the dictionary is in > /.../openstack_dashboard/defaults.py/, but the changes I made > there have no effect. Adding the dictionary to /local_settings.py/ > has no effect either. > > To test whether /defaults.py/ is used at all, I added a > syntactically incorrect line, and indeed, the syntax error is > logged in /horizon_error.log/. > > I wonder what I am doing wrong. I am convinced it worked like this > on Stein. Any ideas? > > Bernd Bausch > > > > > -- > Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Tue May 26 14:55:39 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Tue, 26 May 2020 16:55:39 +0200 Subject: [kolla-ansible] External Ceph and cinder-volumes Message-ID: Hi all. I am using kolla-ansible 4.0 with an external Ceph already deployed. I have the following settings in globals.yml: enable_ceph: "no" glance_backend_ceph: "yes" nova_backend_ceph: "yes" cinder_backend_ceph: "yes" enable_cinder_backend_lvm: "no" When I run kolla-ansible I get an error about not having a cinder-volumes group: "Cannot process volume group cinder-volumes". Do you know why it keeps looking for the cinder volumes? I've done it before with no problem, but maybe something in the globals is not correctly configured? Cheers -- */Alfredo* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From radoslaw.piliszek at gmail.com Tue May 26 15:22:27 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=c5=82aw_Piliszek?=) Date: Tue, 26 May 2020 17:22:27 +0200 Subject: [all][dev][qa] cirros 0.5.1 Message-ID: <0a471df1-6e9b-f2e3-6340-7465f328a5a1@gmail.com> Hello OpenStack Folks! Since it's early in the Victoria cycle, it's a good time to revisit the cirros 0.5.1 migration in common CI. The patch [1] is getting a cleanup due to the nova fix increasing instance memory limits upfront, but I propose to merge it as soon as we clean it up properly. It has been tested with Ussuri in Kolla CI and had positive results from Tempest jobs for Neutron and Ironic. It would be great if we made sure projects no longer use the old cirros 0.4.0 nor some old Ubuntus. [1] https://review.opendev.org/711492 -yoctozepto From radoslaw.piliszek at gmail.com Tue May 26 15:31:29 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=c5=82aw_Piliszek?=) Date: Tue, 26 May 2020 17:31:29 +0200 Subject: [kolla-ansible] External Ceph and cinder-volumes In-Reply-To: References: Message-ID: <9aae5872-2037-18b3-7646-5ac4615b87ed@gmail.com> On 2020-05-26 16:55, Alfredo De Luca wrote: > Hi all. Hi Alfredo! > I am using kolla-ansible 4.0 with an external Ceph already deployed. Kolla Ansible 4.0 is for Ocata, which has long been unsupported by the Kolla project. A lot has changed since then: if you read current docs, they are likely not applicable. Is there any particular reason you are on K-A 4.0? > Do you know why it keeps looking for the cinder volumes? I've done it > before with no problem, but maybe something in the globals is not > correctly configured? It could be the case that Ocata did not have the lvm switch. Or had it under a (bit) different name.
-yoctozepto From alfredo.deluca at gmail.com Tue May 26 15:34:37 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Tue, 26 May 2020 17:34:37 +0200 Subject: [kolla-ansible] External Ceph and cinder-volumes In-Reply-To: <9aae5872-2037-18b3-7646-5ac4615b87ed@gmail.com> References: <9aae5872-2037-18b3-7646-5ac4615b87ed@gmail.com> Message-ID: I am really sorry. It's actually kolla-ansible 8.0, so OpenStack Stein. Apologies. On Tue, May 26, 2020 at 5:31 PM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > On 2020-05-26 16:55, Alfredo De Luca wrote: > > Hi all. > > Hi Alfredo! > > > I am using kolla-ansible 4.0 with an external Ceph already deployed. > > Kolla Ansible 4.0 is for Ocata, which has long been unsupported by the Kolla > project. A lot has changed since then: if you read current docs, they > are likely not applicable. > Is there any particular reason you are on K-A 4.0? > > > Do you know why it keeps looking for the cinder volumes? I've done it > > before with no problem, but maybe something in the globals is not > correctly > configured? > > It could be the case that Ocata did not have the lvm switch. Or had it under > a (bit) different name. > > -yoctozepto > -- */Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue May 26 15:39:47 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=c5=82aw_Piliszek?=) Date: Tue, 26 May 2020 17:39:47 +0200 Subject: [kolla-ansible] External Ceph and cinder-volumes In-Reply-To: References: <9aae5872-2037-18b3-7646-5ac4615b87ed@gmail.com> Message-ID: <916e71c9-0e44-68c2-7381-bfc5b01ae846@gmail.com> On 2020-05-26 17:34, Alfredo De Luca wrote: > It's actually kolla-ansible 8.0, so OpenStack Stein. Ah, then it's very much supported. :-) The switch does work in the latest release for Stein for sure, that is 8.1.1, I suggest you try updating to this version first.
If it does not help, please attach the full globals.yml and let us know which task kolla-ansible fails on. -yoctozepto From isanjayk5 at gmail.com Tue May 26 15:43:36 2020 From: isanjayk5 at gmail.com (Sanjay K) Date: Tue, 26 May 2020 21:13:36 +0530 Subject: [kolla-ansible][kolla][cinder]Cinder volume is not attached to instance Message-ID: Hello Openstackers, I have used the Cinder Stein release (v14.0.2) source base and deployed an All-In-One setup on Ubuntu 16.04 using Kolla-Ansible. The cinder image was built with Python 2.7, and I am trying to deploy it in my new setup, which has Python 3.5. All services are deployed and running fine. However, when I try to attach one volume to an instance, it shows the error below in /var/log/kolla/cinder/cinder-volume.log.
2020-05-26 08:10:07.026 35 INFO oslo.privsep.daemon [req-db287360-876c-425e-b8df-a4f3b0db3163 bc13e1a279854a6fa1d553c7b7a0214c 9bdfc4ffefaa4edb823ff9ba5383ab42 - default default] Running privsep helper: ['sudo', 'cinder-rootwrap', '/etc/cinder/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/cinder/cinder.conf', '--privsep_context', 'cinder.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpan8c8r/privsep.sock']
2020-05-26 08:10:10.952 35 INFO oslo.privsep.daemon [req-db287360-876c-425e-b8df-a4f3b0db3163 bc13e1a279854a6fa1d553c7b7a0214c 9bdfc4ffefaa4edb823ff9ba5383ab42 - default default] Spawned new privsep daemon via rootwrap
2020-05-26 08:10:10.647 1071 INFO oslo.privsep.daemon [-] privsep daemon starting
2020-05-26 08:10:10.721 1071 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2020-05-26 08:10:10.725 1071 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
2020-05-26 08:10:10.726 1071 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1071
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server [req-db287360-876c-425e-b8df-a4f3b0db3163 bc13e1a279854a6fa1d553c7b7a0214c 9bdfc4ffefaa4edb823ff9ba5383ab42 - default default] Exception during message handling: ProcessExecutionError: Unexpected error while running command.
Command: tgtadm --lld iscsi --op show --mode target
Exit code: 107
Stdout: u''
Stderr: u'tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\n'
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, in _process_incoming
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/cinder/volume/manager.py", line 4517, in attachment_update
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server connector)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/cinder/volume/manager.py", line 4445, in _connection_create
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server volume, connector)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 830, in create_export
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server volume_path)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/cinder/volume/targets/iscsi.py", line 210, in create_export
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server **portals_config)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/cinder/utils.py", line 818, in _wrapper
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server return r.call(f, *args, **kwargs)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/retrying.py", line 206, in call
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server return attempt.get(self._wrap_exception)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/retrying.py", line 247, in get
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server six.reraise(self.value[0], self.value[1], self.value[2])
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/retrying.py", line 200, in call
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/cinder/volume/targets/tgt.py", line 126, in create_iscsi_target
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server (out, err) = cinder.privsep.targets.tgt.tgtadm_show()
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 244, in _wrap
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server return self.channel.remote_call(name, args, kwargs)
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 203, in remote_call
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server raise exc_type(*result[2])
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server ProcessExecutionError: Unexpected error while running command.
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server Command: tgtadm --lld iscsi --op show --mode target
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server Exit code: 107
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server Stdout: u''
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server Stderr: u'tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\n'
2020-05-26 08:10:11.550 35 ERROR oslo_messaging.rpc.server
When I check inside the cinder-volume container, tgtd is not running. So I started tgtd as the root user. Now the volume attachment happens for a few seconds and then it is detached.
2020-05-26 08:14:39.221 35 INFO cinder.volume.manager [req-5da81388-33ce-4994-9114-52442a08ac2a bc13e1a279854a6fa1d553c7b7a0214c 9bdfc4ffefaa4edb823ff9ba5383ab42 - default default] attachment_update completed successfully.
2020-05-26 08:14:52.622 35 INFO cinder.volume.manager [req-cbaa1b6c-4fa8-4ece-b08b-8ca09f78ecaf bc13e1a279854a6fa1d553c7b7a0214c 9bdfc4ffefaa4edb823ff9ba5383ab42 - default default] Terminate volume connection completed successfully.
2020-05-26 08:14:52.840 35 INFO cinder.volume.targets.tgt [req-cbaa1b6c-4fa8-4ece-b08b-8ca09f78ecaf bc13e1a279854a6fa1d553c7b7a0214c 9bdfc4ffefaa4edb823ff9ba5383ab42 - default default] Removing iscsi_target for Volume ID: d713ce55-1fa6-4d96-a69e-69368494f8f9
2020-05-26 08:14:53.739 35 WARNING py.warnings [req-cbaa1b6c-4fa8-4ece-b08b-8ca09f78ecaf bc13e1a279854a6fa1d553c7b7a0214c 9bdfc4ffefaa4edb823ff9ba5383ab42 - default default] /var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/orm/evaluator.py:99: SAWarning: Evaluating non-mapped column expression 'updated_at' onto ORM instances; this is a deprecated use case. Please make use of the actual mapped columns in ORM-evaluated UPDATE / DELETE expressions. "UPDATE / DELETE expressions." % clause
I also used the cinder 14.0.4 source tag to build a new image, but the same error persists. Can you please point out what the issue is in my test case or setup? I appreciate your help. best regards, Sanjay -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue May 26 15:58:37 2020 From: radoslaw.piliszek at gmail.com (Radosław Piliszek) Date: Tue, 26 May 2020 17:58:37 +0200 Subject: [kolla-ansible][kolla][cinder]Cinder volume is not attached to instance In-Reply-To: References: Message-ID: <3164cb4a-aff8-06a9-793f-6dd008266fa8@gmail.com> On 2020-05-26 17:43, Sanjay K wrote: > When I check inside the cinder-volume container, tgtd is not running. So > I started tgtd as the root user. Now the volume attachment happens for a few > seconds and then it is detached. Hi Sanjay, from the Kolla Ansible perspective, tgtd should be running in a dedicated container (aptly named tgtd), and not inside the cinder container. Kolla Ansible should have set up tgtd so that it works with cinder properly. Please check whether the tgtd container is running at all. You might need to ensure you enabled the cinder iscsi backend (normally done automatically with the lvm backend).
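The checks suggested here could be carried out roughly like this; note these commands are assumptions rather than something from the thread: the container engine may be docker or podman depending on the deployment, the container is assumed to be literally named tgtd, and globals.yml is assumed to live in the usual /etc/kolla location:

```shell
# Diagnostic sketch for the deployment host; adjust names and paths to your setup.

# Is the dedicated tgtd container present, and is it running?
docker ps -a --filter name=tgtd --format '{{.Names}}: {{.Status}}'

# If it exists but keeps dying, its logs usually say why.
docker logs --tail 50 tgtd

# Is the iscsi/lvm backend actually switched on in the Kolla Ansible config?
grep -E '^enable_cinder' /etc/kolla/globals.yml
```

These are read-only diagnostics, so they are safe to run on a live node.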
-yoctozepto From alfredo.deluca at gmail.com Tue May 26 16:30:32 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Tue, 26 May 2020 18:30:32 +0200 Subject: [kolla-ansible] External Ceph and cinder-volumes In-Reply-To: <916e71c9-0e44-68c2-7381-bfc5b01ae846@gmail.com> References: <9aae5872-2037-18b3-7646-5ac4615b87ed@gmail.com> <916e71c9-0e44-68c2-7381-bfc5b01ae846@gmail.com> Message-ID: Hi Radosław. I have updated kolla-ansible to 8.1.1 but I am still getting the same issue with cinder-volume. I have attached the globals.yml as you suggested. Cheers On Tue, May 26, 2020 at 5:39 PM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > On 2020-05-26 17:34, Alfredo De Luca wrote: > > It's actually kolla-ansible 8.0, so OpenStack Stein. > > Ah, then it's very much supported. :-) > > The switch does work in the latest release for Stein for sure, that is > 8.1.1, I suggest you try updating to this version first. > > If it does not help, please attach full globals.yml and let us know > which task kolla-ansible fails on. > > -yoctozepto > > -- */Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue May 26 16:36:15 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 26 May 2020 11:36:15 -0500 Subject: [openstack-helm] Stepping down as core reviewer In-Reply-To: References: Message-ID: Hey Steve, This is sad news to hear, but I completely understand. Thanks for all the contributions you made to OpenStack-Helm over the years, from helping bring the project up to becoming an official OpenStack one to all the expertise you brought to the LMA stack. I personally learned a lot from you as I began contributing with zero helm/kubernetes experience and it definitely helped me get to where I am today.
I wish you all the best in your future endeavors and you're always welcome back should you ever find time again in the future. On Wed, May 20, 2020 at 2:00 PM Steve Wilkerson wrote: > Hey everyone, > > I hope you're all staying safe and well. I switched jobs a few months back > that took me in a different direction from my previous employer, and as a > result I've not had the time to remain involved with the OpenStack > community (especially as my day job no longer involves working with > OpenStack). I think it's best that I step down from the openstack-helm core > reviewer team, as I can't realistically make the time to do my due > diligence in providing thorough, useful code reviews. I greatly appreciate > the opportunities that working with the OpenStack community at large has > provided me over the past 5 years. > > Cheers, > srwilkers > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue May 26 17:15:00 2020 From: radoslaw.piliszek at gmail.com (Radosław Piliszek) Date: Tue, 26 May 2020 19:15:00 +0200 Subject: [kolla-ansible] External Ceph and cinder-volumes In-Reply-To: References: <9aae5872-2037-18b3-7646-5ac4615b87ed@gmail.com> <916e71c9-0e44-68c2-7381-bfc5b01ae846@gmail.com> Message-ID: On 2020-05-26 18:30, Alfredo De Luca wrote: > I have attached the globals.yml as you suggested. And the failing task?
-yoctozepto From hjensas at redhat.com Tue May 26 17:30:12 2020 From: hjensas at redhat.com (Harald Jensås) Date: Tue, 26 May 2020 19:30:12 +0200 Subject: [tripleo] Deprecating tripleo-heat-templates firstboot In-Reply-To: <1e7132dd-599c-8af6-57ce-ca1a44126b15@redhat.com> References: <1e7132dd-599c-8af6-57ce-ca1a44126b15@redhat.com> Message-ID: <769e266ed580747fcfa39a71efc6489ee5af91f3.camel@redhat.com> On Tue, 2020-05-26 at 11:32 +1200, Steve Baker wrote: > Soon nova will be switched off by default on the undercloud and all > overcloud deployments will effectively be deployed-server based > (either provisioned manually or via the baremetal provision command) > > This means that the docs for running firstboot scripts[1] will no > longer work, and neither will our collection of firstboot scripts[2]. > In this email I'm going to propose what we could do about this > situation and if there are still unresolved issues by the PTG it > might be worth having a short session on it. > If we had gone the other way around, and done the Heat Stack with "dummy" server resources before deploying baremetal, we could have done this seamlessly, i.e. passed these cloud-configs based on the stack to the baremetal provisioning yaml's extension you mention below. But that train departed, long ago ... Do we need to add some deprecation and/or validation? Something that ensures we stop the deployment in case one of the resources OS::TripleO::NodeAdminUserData, OS::TripleO::NodeTimesyncUserData, OS::TripleO::NodeUserData or OS::TripleO::{{role.name}}::NodeUserData is defined in the resource registry, with a pointer to docs on how to move it to the baremetal provisioning yaml, or extraconfig.
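A validation along these lines could be sketched as a simple pre-deployment check over the operator's environment files. This is only an illustration: the directory argument and error text are made up, the resource names are the ones listed above, and a real implementation would inspect the merged resource registry rather than grep raw files:

```shell
# check_firstboot_resources DIR
# Fails (returns 1) and prints the offending files if any of the deprecated
# firstboot resources is still mapped in DIR's environment files.
check_firstboot_resources() {
    deprecated='OS::TripleO::(NodeAdminUserData|NodeTimesyncUserData|([A-Za-z0-9]+::)?NodeUserData)'
    if grep -rEl "$deprecated" "$1"; then
        echo "ERROR: deprecated firstboot resource(s) found; move them to the" >&2
        echo "baremetal provisioning yaml or extraconfig (see the docs)." >&2
        return 1
    fi
    return 0
}
# Usage sketch: check_firstboot_resources my-environments/ || exit 1
```

In practice this sort of check would belong in the deploy command or a tripleo-validations playbook rather than a standalone shell function.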
> The baremetal provisioning implementation already uses cloud-init > cloud-config internally for user creation and key injection[3] so I'm > going to propose an enhancement to the baremetal provisioning yaml > format so that custom cloud-config instructions can be included > either inline or as a file path. > ++ > I think it is worth going through each firstboot script[2] and > deciding what its fate should be (other than being deprecated in > Ussuri): > > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/conntectx3_streering.yaml > > This has no parameters, so it could be converted to a standalone > cloud-config file, but should it? Can this be achieved with kernel > args? Does it require a reboot anyway, and so can be done with > extraconfig? > Maybe this could be done using this module in ansible instead: https://docs.ansible.com/ansible/latest/modules/modprobe_module.html#modprobe-module > > > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/os-net-config-mappings.yaml > > I'm not sure why this is implemented as first boot, it seems to > consume the parameter NetConfigDataLookup and transforms it to the > format os-net-config needs for the file /etc/os-net-config/mapping.yaml. It looks like this functionality should be moved > to where os-net-config is actually invoked, and the > NetConfigDataLookup parameter should be officially supported. > I agree, I have been thinking about moving this for a while actually. > > > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_dev_rsync.yaml > > I suggest deleting this and including a cloud-config version in the > baremetal provisioning docs.
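For illustration, a documented cloud-config replacement for one of these scripts (the root-password case) might look like the following sketch. The chpasswd module is standard cloud-init; the file name and password value are placeholders, and how the file would be referenced from the provisioning yaml depends on the enhancement proposed above:

```shell
# Write a user-data file the operator could ship alongside the baremetal
# provisioning yaml once cloud-config support lands there.
cat > root-password-user-data.yaml <<'EOF'
#cloud-config
chpasswd:
  list: |
    root:CHANGEME
  expire: false
EOF
```

The same pattern (a small, documented cloud-config file per use case) would cover the rsync and heat-admin examples as well.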
> +1 > > > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_heat_admin.yaml > > Delete this, there is already an abstraction for this built into the > baremetal provisioning format[4] > > +1 > > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_root_password.yaml > > Delete this and include it as an example in the baremetal > provisioning docs. > > +1 > > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_timesync.yaml > > Maybe this could be converted to an extraconfig/all_nodes script[5], > but it would be better if this sort of thing could be implemented as > an ansible role or playbook, are there any plans for an extraconfig > mechanism which uses plain ansible semantics? > > > > cheers > > [1] > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/extra_config.html#firstboot-extra-configuration > > [2] > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot > > [3] > https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L286 > > [4] > https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L132 > > > https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L145 > > [5] > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/all_nodes From alfredo.deluca at gmail.com Tue May 26 17:22:32 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Tue, 26 May 2020 19:22:32 +0200 Subject: [kolla-ansible] External Ceph and cinder-volumes In-Reply-To: References: <9aae5872-2037-18b3-7646-5ac4615b87ed@gmail.com> <916e71c9-0e44-68c2-7381-bfc5b01ae846@gmail.com> Message-ID: It fails at kolla-ansible prechecks and in particular [image: image.png] On Tue, May 
26, 2020 at 7:15 PM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > On 2020-05-26 18:30, Alfredo De Luca wrote: > > I have attached the globals.yml as you suggested. > > And the failing task? > > -yoctozepto > > -- */Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 69164 bytes Desc: not available URL: From kennelson11 at gmail.com Tue May 26 18:06:33 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 26 May 2020 11:06:33 -0700 Subject: [all] Week Out PTG Details & Registration Reminder Message-ID: Hello Everyone! We are so excited to see you next week at our first virtual PTG! Below are all the details you need to have a fun and successful event. First and foremost: PLEASE REGISTER; https://virtualptgjune2020.eventbrite.com/ If you don't register for the event, we have no way of sending you information about the event, specifically the passwords that we will have set on the zoom rooms. Registration is free. =Final Schedule= The final schedule for the event is available here[1] and in the PTGBot[2]. =Tooling & Passwords= For each virtual PTG meeting room, we will provide an official Zoom videoconference room, which will be reused by various groups across the day. Please make sure to leave the room at the end of your team meeting! We will be setting passwords on the rooms to prevent vandalism/harassment, and we will send those out 24 hours before the start of the event. OpenDev's Jitsi Meet instance is also available at https://meetpad.opendev.org as an alternative tooling option to create team-specific meetings. This is more experimental and may not work as well for larger groups, but has the extra benefit of including etherpad integration (the videoconference doubles as an etherpad document, avoiding jumping between windows). 
PTGBot can be used to publish the chosen meeting URL if you decide to not use the official Zoom one. The OpenDev Sysadmins can be reached in the #opendev IRC channel if any problems arise. We recommend you try out Zoom and Meetpad ahead of the PTG to solve audio/video issues. =IRC= The main form of synchronous communication between attendees during the PTG is on IRC. If you are not on IRC, learn how to get started here [3]. The main PTG IRC channel is #openstack-ptg on Freenode, and it's used to interact with the PTGbot. The OpenStack Foundation (OSF) staff will be present to help answer questions. =PTGbot= The PTGbot[2] is an open source tool that PTG room moderators use to surface what's currently happening at the event. Room moderators will send messages to the bot via IRC, and from that information the bot publishes a webpage with several sections of information: - The discussion topics currently discussed in the room ("now") - An indicative set of discussion topics coming up next ("next") - The schedule for the day with available extra slots you can book Learn more about the PTGbot via the documentation here [4]. =Help Desk= We are here to help! If you have any questions during the event week, we will have a dedicated Zoom room where an OSF staff member will be available to answer your event related questions. If a staff member is not there to help, you can always reach someone at ptg at openstack.org or on IRC in the #openstack-ptg channel. =Feedback= We have created an etherpad[5] to collect all of your feedback throughout the event. Please add your thoughts so we can continue improving our virtual events! =Virtual Event Best Practices= Check out our list of virtual event best practices[6] prior to the event to help make the most of your time at the PTG. =Code of Conduct= The PTG is for everyone. We have zero tolerance for harassment and other violations of the OSF Community Code of Conduct. 
Before the PTG begins, please review the Code of Conduct[7] and know how to report an issue. -The Kendalls (diablo_rojo & wendallkaters) [1] Schedule: https://www.openstack.org/ptg#tab_schedule [2] Live PTGBot: http://ptg.openstack.org/ptg.html [3] IRC Setup: https://docs.openstack.org/contributors/common/irc.html [4] PTGBot Documentation: https://github.com/openstack/ptgbot/blob/master/README.rst [5] Feedback Etherpad: https://etherpad.opendev.org/p/June2020-PTG-Feedback [6] PTG Best Practices: https://etherpad.opendev.org/p/virtual-ptg-best-practices [7] Code of Conduct: https://www.openstack.org/legal/community-code-of-conduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue May 26 19:08:33 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 May 2020 14:08:33 -0500 Subject: [all][policy] Secure Default Policies Popup team meeting Message-ID: <1725262eb1e.f71b92f421673.8966811038784138572@ghanshyammann.com> Hello Everyone, To speed up the work of the Secure Default Policies Popup team[1], we are going to start a biweekly meeting to discuss any pending work and whether projects need help implementing the new default policies. We encourage projects interested in implementing the new defaults, or at least those listed on the wiki page [2], to join that meeting. Meeting details are on the wiki[3]; I have also proposed a patch to book the slot: https://review.opendev.org/#/c/730935/ The agenda is fairly open for now and will become more concrete as we progress; the timing is biweekly (even weeks) on Thursday, starting this week on the 28th at 1800 UTC. We can adjust the timing if needed.
NOTE: The first meeting is this week, Thursday the 28th, 1800 UTC in #openstack-meeting [1] https://governance.openstack.org/tc/reference/popup-teams.html#secure-default-policies [2] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team [3] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting -gmann From ruslanas at lpic.lt Tue May 26 21:02:12 2020 From: ruslanas at lpic.lt (Ruslanas Gžibovskis) Date: Tue, 26 May 2020 23:02:12 +0200 Subject: [TripleO][rdo][train][centos7] fails at ovn_south_db_server: Start containers for step 4 using paunch In-Reply-To: References: Message-ID: Hi all, again. I keep failing on similar steps. I am using CentOS 7 with eth# as the net device names, and still running tripleo-ansible 4.1.0 (instead of 5.0.0), which should be fixed [0]. 1) It is failing on OVN config; should I try not using OVN, just OVS? 2) The most annoying problem is that on a first deployment it cannot connect to the controller/compute, though I can ssh using ssh -l heat-admin IP; BUT on the second launch it is always able to connect on the first attempt. 3) When using the undercloud as the repo for container images, it fails, as the undercloud is not added to the insecure registries. How should I add it? Should it be specified in undercloud.conf as an insecure registry? Could you please give me some hints? external links: [0] https://review.opendev.org/#/c/725783 On Wed, 13 May 2020 at 20:14, Ruslanas Gžibovskis wrote: > Hi all, > > I am running this deployment described here [1].
> I am getting an error on: Start containers for step 4 using paunch > Error in link below [2] > Running podman containers [3] > > > links: > 1 - https://github.com/qw3r3wq/homelab/tree/master/overcloud > 2 - https://pastebin.com/HTUbz7Ry > 3 - https://pastebin.com/1ApfiEyE > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue May 26 21:18:43 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 26 May 2020 23:18:43 +0200 Subject: [PTG][neutron] Draft of the agenda Message-ID: <20200526211843.dexjd3ndsqo5kuwp@skaplons-mac> Hi neutrinos, I just prepared a draft of the agenda for the Neutron team for the upcoming virtual PTG. It's available in the etherpad [1]. Please take a look at it and reach out to me if You: * want to change the day/time of topic(s) which You are interested in, mostly due to e.g. conflicts which You may have with some other meetings, * want to add some new topics, to make sure that the agenda is modified to include them, * see anything else which You think should be changed there :) See You all at the virtual PTG next week :) [1] https://etherpad.opendev.org/p/neutron-victoria-ptg -- Slawek Kaplonski Senior software engineer Red Hat From sbaker at redhat.com Tue May 26 21:33:52 2020 From: sbaker at redhat.com (Steve Baker) Date: Wed, 27 May 2020 09:33:52 +1200 Subject: [tripleo] Deprecating tripleo-heat-templates firstboot In-Reply-To: <769e266ed580747fcfa39a71efc6489ee5af91f3.camel@redhat.com> References: <1e7132dd-599c-8af6-57ce-ca1a44126b15@redhat.com> <769e266ed580747fcfa39a71efc6489ee5af91f3.camel@redhat.com> Message-ID: <2ada6e70-d3dc-899d-3fdc-401a4fea56be@redhat.com> On 27/05/20 5:30 am, Harald Jensås wrote: > On Tue, 2020-05-26 at 11:32 +1200, Steve Baker wrote: >> Soon nova will be switched off by default on the undercloud and all >> overcloud deployments will
effectively be deployed-server based >> (either provisioned manually or via the baremetal provision command) >> >> This means that the docs for running firstboot scripts[1] will no >> longer work, and neither will our collection of firstboot scripts[2]. >> In this email I'm going to propose what we could do about this >> situation and if there are still unresolved issues by the PTG it >> might be worth having a short session on it. >> > If we had gone the other way around, and done the Heat Stack with > "dummy" server resources before deploying baremetal we could have done > this seamless, i.e passed these cloud-configs based on the stack to the > baremetal provisioning yaml's extention you mention below. But that > train departed, long ago ... Also this still wouldn't have handled the manual provisioning deployed server case. > Do we need to add some deprecation and/or validation? Something that > ensure we stop the deployment in case one of the resources > OS::TripleO::NodeAdminUserData, OS::TripleO::NodeTimesyncUserData, > OS::TripleO::NodeUserData or OS::TripleO::{{role.name}}::NodeUserData > is defined in the resource registry, with a pointer to docs on how to > move it to the baremetal provisioning yaml, or extraconfig. 
I think if OS::TripleO::*Server: is not mapped to OS::Nova::Server then the deployment should halt with a message if OS::TripleO::NodeUserData or OS::TripleO::{{role.name}}::NodeUserData are mapped to something other than userdata_default.yaml As for OS::TripleO::NodeTimesyncUserData, it looks like this functionality is duplicated by deployment/timesync/chrony-baremetal-ansible.yaml which is mapped to OS::TripleO::Services::Timesync and included in every role, but NodeTimesyncUserData was added recently to handle some early config timestamp issues: https://opendev.org/openstack/tripleo-heat-templates/commit/eafe3908535ec766866efb74110e057ea2509c45 https://bugs.launchpad.net/tripleo/+bug/1776869 Maybe this becomes less of an issue with no other config tasks happening at first boot, I've tagged in Alex for his thoughts. One option could be to enable and configure chrony during overcloud-full image build, then document how to disable it or change the ntp servers in cloud-config? >> The baremetal provisioning implementation already uses cloud-init >> cloud-config internally for user creation and key injection[3] so I'm >> going to propose an enhancement the the baremetal provisioning yaml >> format so that custom cloud-config instructions can be included >> either inline or as a file path. >> > ++ > >> I think it is worth going through each firstboot script[2] and >> deciding what its fate should be (other than being deprecated in >> Ussuri): >> >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/conntectx3_streering.yaml >> >> This has no parameters, so it could be converted to a standalone >> cloud-config file, but should it? Can this be achieved with kernel >> args? Does it require a reboot anyway, and so can be done with >> extraconfig? 
>> > Maybe this could be done using this module in ansible instead: > https://docs.ansible.com/ansible/latest/modules/modprobe_module.html#modprobe-module > >> >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/os-net-config-mappings.yaml >> >> I'm not sure why this is implemented as first boot, it seems to >> consume the parameter NetConfigDataLookup and transforms it to the >> format os-net-config needs for the file /etc/os-net- >> config/mapping.yaml. It looks like this functionality should be moved >> to where os-net-config is actually invoked, and the >> NetConfigDataLookup parameter should be officially supported. >> > I agree, I have been thinking about moving this for a while actually. > >> >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_dev_rsync.yaml >> >> I suggest deleting this and including a cloud-config version in the >> baremetal provisioning docs. >> > +1 > >> >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_heat_admin.yaml >> >> Delete this, there is already an abstraction for this built into the >> baremetal provisioning format[4] >> >> > +1 > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_root_password.yaml >> >> Delete this and include it as an example in the baremetal >> provisioning docs. >> >> > +1 > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_timesync.yaml >> >> Maybe this could be converted to an extraconfig/all_nodes script[5], >> but it would be better if this sort of thing could be implemented as >> an ansible role or playbook, are there any plans for an extraconfig >> mechanism which uses plain ansible semantics? 
>> >> >> >> cheers >> >> [1] >> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/extra_config.html#firstboot-extra-configuration >> >> [2] >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot >> >> [3] >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L286 >> >> [4] >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L132 >> >> >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L145 >> >> [5] >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/all_nodes From whayutin at redhat.com Tue May 26 22:04:00 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 26 May 2020 16:04:00 -0600 Subject: [tripleo] TripleO Ussuri has been released Message-ID: Greetings, I'm pleased to announce that TripleO Ussuri has been released today. Special thanks to Chandan Kumar and Emilien Macchi for the extra time and effort in getting everything in place for the release. Thanks to everyone else as well for your contributions, bug reports, patches and participation! Some highlights from the TripleO release: * The mistral to ansible conversion blueprint is complete * The nova-less-deploy feature is complete * The TripleO-Operator-Ansible feature is complete * The RDO and CI teams completed work on component based DLRN rpm repos and CI. * TLS and TripleO deployments tested together in upstream CI. * Support for CentOS-8 * Many other improvements and updates Congratulations to the following individual contributors for their reviews across ALL openstack projects!! 
Alex Schultz, the #1 reviewer for all projects in Ussuri Herve Beraud, the #3 reviewer for all projects in Ussuri Emilien Macchi, the #4 reviewer for all projects in Ussuri Followed closely behind in reviews by Stephen Finucane Dmitry Tantsur Slawek Kaplonski Thanks to all our TripleO contributors!! https://www.stackalytics.com/?release=ussuri&metric=commits&project_type=openstack&module=tripleo-group Oh by the way TripleO fixed 558 bugs this release as well :) Additional statistics can be found here: https://docs.google.com/presentation/d/1RV30OVxmXv1y_z33LuXMVB56TA54Urp7oHIoTNwrtzA/ Release notes can be found per TripleO project: https://docs.openstack.org/releasenotes/ At the time of writing this email not all the release notes have been published. Thanks all and I'm looking forward to seeing you at the PTG! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Tue May 26 22:21:59 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 26 May 2020 15:21:59 -0700 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: References: Message-ID: On Tue, May 26, 2020 at 11:14 AM Kendall Nelson wrote: > Hello Everyone! > > We are so excited to see you next week at our first virtual PTG! Below are > all the details you need to have a fun and successful event. > > First and foremost: PLEASE REGISTER; > https://virtualptgjune2020.eventbrite.com/ > > If you don't register for the event, we have no way of sending you > information about the event, specifically > the passwords that we will have set on the zoom rooms. Registration is > free. > > =Final Schedule= > > The final schedule for the event is available here[1] and in the PTGBot[2]. > > =Tooling & Passwords= > > For each virtual PTG meeting room, we will provide an official Zoom > videoconference room, which will be reused by various groups across the > day. Please make sure to leave the room at the end of your team meeting! 
> > Great and thank you for setting this up! However, I responded privately (my bad, please ignore) that we're looking to record meetings during our time slots for the Manila team PTG, and share the recordings after the event. Recording meetings has always been crucial to us, since we have a number of contributors who can't join us synchronously, and they'd watch the recordings and contribute after. We've tried to record/broadcast even when things got hard (read: Shanghai) - however, we persisted, because this is part of our commitment to being open and transparent about these design meetings. I am aware that we can't do this with our Jitsi Meet instance currently. Zoom meetings won't let you record without the host unless you jump through some hoops ( https://support.zoom.us/hc/en-us/articles/204101699-Recording-without-the-Host). Do you think we'll still have recordings? If not, is it a hard requirement to stick to Zoom/Jitsi? Since I'm going to host the whole manila PTG, I can use my BlueJeans account and record/broadcast the meeting easily. Do let me know :) > We will be setting passwords on the rooms to prevent vandalism/harassment, > and we will send those out 24 hours before the start of the event. > > OpenDev's Jitsi Meet instance is also available at > https://meetpad.opendev.org as an alternative tooling option to create > team-specific meetings. This is more experimental and may not work as well > for larger groups, but has the extra benefit of including etherpad > integration (the videoconference doubles as an etherpad document, avoiding > jumping between windows). PTGBot can be used to publish the chosen meeting > URL if you decide to not use the official Zoom one. The OpenDev Sysadmins > can be reached in the #opendev IRC channel if any problems arise. > > We recommend you try out Zoom and Meetpad ahead of the PTG to solve > audio/video issues. > > =IRC= > > The main form of synchronous communication between attendees during the > PTG is on IRC.
If you are not on IRC, learn how to get started here [3]. > > The main PTG IRC channel is #openstack-ptg on Freenode, and it's used to > interact with the PTGbot. The OpenStack Foundation (OSF) staff will be > present to help answer questions. > > =PTGbot= > > The PTGbot[2] is an open source tool that PTG room moderators use to > surface what's currently happening at the event. > > Room moderators will send messages to the bot via IRC, and from that > information the bot publishes a webpage with several sections of > information: > > - The discussion topics currently discussed in the room ("now") > - An indicative set of discussion topics coming up next ("next") > - The schedule for the day with available extra slots you can book > > Learn more about the PTGbot via the documentation here [4]. > > =Help Desk= > > We are here to help! If you have any questions during the event week, we > will have a dedicated Zoom room where an OSF staff member will be available > to answer your event related questions. If a staff member is not there to > help, you can always reach someone at ptg at openstack.org or on IRC in the > #openstack-ptg channel. > > =Feedback= > > We have created an etherpad[5] to collect all of your feedback throughout > the event. Please add your thoughts so we can continue improving our > virtual events! > > =Virtual Event Best Practices= > > Check out our list of virtual event best practices[6] prior to the event > to help make the most of your time at the PTG. > > =Code of Conduct= > > The PTG is for everyone. We have zero tolerance for harassment and other > violations of the OSF Community Code of Conduct. Before the PTG begins, > please review the Code of Conduct[7] and know how to report an issue. 
> > -The Kendalls (diablo_rojo & wendallkaters) > > [1] Schedule: https://www.openstack.org/ptg#tab_schedule > [2] Live PTGBot: http://ptg.openstack.org/ptg.html > [3] IRC Setup: https://docs.openstack.org/contributors/common/irc.html > [4] PTGBot Documentation: > https://github.com/openstack/ptgbot/blob/master/README.rst > [5] Feedback Etherpad: > https://etherpad.opendev.org/p/June2020-PTG-Feedback > [6] PTG Best Practices: > https://etherpad.opendev.org/p/virtual-ptg-best-practices > [7] Code of Conduct: > https://www.openstack.org/legal/community-code-of-conduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue May 26 22:28:28 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 26 May 2020 17:28:28 -0500 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: References: Message-ID: Goutham Pacha Ravi wrote on 5/26/20 5:21 PM: > > > Great and thank you for setting this up! > > > > I am aware that we can't do this with our Jitsi Meet instance > currently. Zoom meetings won't let you record without the host unless > you jump through some hoops > (https://support.zoom.us/hc/en-us/articles/204101699-Recording-without-the-Host). > Do you think we'll still have recordings? Hi.  The way we're setting up zoom will allow any PTL to set the meeting to record.  Should be fairly seamless. > > If not, is it a hard requirement to stick to Zoom/Jitsi? Since I'm > going to host the whole manila PTG, I can use my BlueJeans account and > record/broadcast the meeting easily. Do let me know :) > > We will be setting passwords on the rooms to prevent > vandalism/harassment, and we will send those out 24 hours before > the start of the event. > > OpenDev's Jitsi Meet instance is also available at > https://meetpad.opendev.org  as an > alternative tooling option to create team-specific meetings. 
This > is more experimental and may not work as well for larger groups, > but has the extra benefit of including etherpad integration (the > videoconference doubles as an etherpad document, avoiding jumping > between windows). PTGBot can be used to publish the chosen meeting > URL if you decide to not use the official Zoom one. The OpenDev > Sysadmins can be reached in the #opendev IRC channel if any > problems arise. > > We recommend you try out Zoom and Meetpad ahead of the PTG to > solve audio/video issues. > > =IRC= > > The main form of synchronous communication between attendees > during the PTG is on IRC. If you are not on IRC, learn how to get > started here [3]. > > The main PTG IRC channel is #openstack-ptg on Freenode, and it's > used to interact with the PTGbot. The OpenStack Foundation (OSF) > staff will be present to help answer questions. > > =PTGbot= > > The PTGbot[2] is an open source tool that PTG room moderators use > to surface what's currently happening at the event. > > Room moderators will send messages to the bot via IRC, and from > that information the bot publishes a webpage with several sections > of information: > > - The discussion topics currently discussed in the room ("now") > - An indicative set of discussion topics coming up next ("next") > - The schedule for the day with available extra slots you can book > > Learn more about the PTGbot via the documentation here [4]. > > =Help Desk= > > We are here to help! If you have any questions during the event > week, we will have a dedicated Zoom room where an OSF staff member > will be available to answer your event related questions. If a > staff member is not there to help, you can always reach someone at > ptg at openstack.org  or on IRC in the > #openstack-ptg channel. > > =Feedback= > > We have created an etherpad[5] to collect all of your feedback > throughout the event. Please add your thoughts so we can continue > improving our virtual events! 
> > =Virtual Event Best Practices= > > Check out our list of virtual event best practices[6] prior to the > event to help make the most of your time at the PTG. > > =Code of Conduct= > > The PTG is for everyone. We have zero tolerance for harassment and > other violations of the OSF Community Code of Conduct. Before the > PTG begins, please review the Code of Conduct[7] and know how to > report an issue. > > -The Kendalls (diablo_rojo & wendallkaters) > > [1] Schedule: https://www.openstack.org/ptg#tab_schedule > [2] Live PTGBot: http://ptg.openstack.org/ptg.html > [3] IRC Setup: https://docs.openstack.org/contributors/common/irc.html > [4] PTGBot Documentation: > https://github.com/openstack/ptgbot/blob/master/README.rst > [5] Feedback Etherpad: > https://etherpad.opendev.org/p/June2020-PTG-Feedback > [6] PTG Best Practices: > https://etherpad.opendev.org/p/virtual-ptg-best-practices > [7] Code of Conduct: > https://www.openstack.org/legal/community-code-of-conduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Tue May 26 22:29:49 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 26 May 2020 15:29:49 -0700 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: References: Message-ID: On Tue, May 26, 2020 at 3:28 PM Jimmy McArthur wrote: > > > Goutham Pacha Ravi wrote on 5/26/20 5:21 PM: > > > > Great and thank you for setting this up! > > > > > > I am aware that we can't do this with our Jitsi Meet instance currently. > Zoom meetings won't let you record without the host unless you jump through > some hoops ( > https://support.zoom.us/hc/en-us/articles/204101699-Recording-without-the-Host). > Do you think we'll still have recordings? > > Hi. The way we're setting up zoom will allow any PTL to set the meeting > to record. Should be fairly seamless. > That's fantastic! Thank you for considering that. > > If not, is it a hard requirement to stick to Zoom/Jitsi? 
Since I'm going > to host the whole manila PTG, I can use my BlueJeans account and > record/broadcast the meeting easily. Do let me know :) > > > >> We will be setting passwords on the rooms to prevent >> vandalism/harassment, and we will send those out 24 hours before the start >> of the event. >> >> OpenDev's Jitsi Meet instance is also available at >> https://meetpad.opendev.org as an alternative tooling option to create >> team-specific meetings. This is more experimental and may not work as well >> for larger groups, but has the extra benefit of including etherpad >> integration (the videoconference doubles as an etherpad document, avoiding >> jumping between windows). PTGBot can be used to publish the chosen meeting >> URL if you decide to not use the official Zoom one. The OpenDev Sysadmins >> can be reached in the #opendev IRC channel if any problems arise. >> >> We recommend you try out Zoom and Meetpad ahead of the PTG to solve >> audio/video issues. >> >> =IRC= >> >> The main form of synchronous communication between attendees during the >> PTG is on IRC. If you are not on IRC, learn how to get started here [3]. >> >> The main PTG IRC channel is #openstack-ptg on Freenode, and it's used to >> interact with the PTGbot. The OpenStack Foundation (OSF) staff will be >> present to help answer questions. >> >> =PTGbot= >> >> The PTGbot[2] is an open source tool that PTG room moderators use to >> surface what's currently happening at the event. >> >> Room moderators will send messages to the bot via IRC, and from that >> information the bot publishes a webpage with several sections of >> information: >> >> - The discussion topics currently discussed in the room ("now") >> - An indicative set of discussion topics coming up next ("next") >> - The schedule for the day with available extra slots you can book >> >> Learn more about the PTGbot via the documentation here [4]. >> >> =Help Desk= >> >> We are here to help! 
If you have any questions during the event week, we >> will have a dedicated Zoom room where an OSF staff member will be available >> to answer your event related questions. If a staff member is not there to >> help, you can always reach someone at ptg at openstack.org or on IRC in the >> #openstack-ptg channel. >> >> =Feedback= >> >> We have created an etherpad[5] to collect all of your feedback throughout >> the event. Please add your thoughts so we can continue improving our >> virtual events! >> >> =Virtual Event Best Practices= >> >> Check out our list of virtual event best practices[6] prior to the event >> to help make the most of your time at the PTG. >> >> =Code of Conduct= >> >> The PTG is for everyone. We have zero tolerance for harassment and other >> violations of the OSF Community Code of Conduct. Before the PTG begins, >> please review the Code of Conduct[7] and know how to report an issue. >> >> -The Kendalls (diablo_rojo & wendallkaters) >> >> [1] Schedule: https://www.openstack.org/ptg#tab_schedule >> [2] Live PTGBot: http://ptg.openstack.org/ptg.html >> [3] IRC Setup: https://docs.openstack.org/contributors/common/irc.html >> [4] PTGBot Documentation: >> https://github.com/openstack/ptgbot/blob/master/README.rst >> [5] Feedback Etherpad: >> https://etherpad.opendev.org/p/June2020-PTG-Feedback >> [6] PTG Best Practices: >> https://etherpad.opendev.org/p/virtual-ptg-best-practices >> [7] Code of Conduct: >> https://www.openstack.org/legal/community-code-of-conduct/ >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed May 27 03:04:36 2020 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 26 May 2020 23:04:36 -0400 Subject: [tripleo] TripleO Ussuri has been released In-Reply-To: References: Message-ID: Nicely done everyone! 
Just a friendly reminder that at that time of the cycle, it's important that people keep track of their patches which land into master and appropriately backport them to stable/ussuri (and stable/train as usual if needed). I've proposed all the Ussuri backports for patches that landed up to today into the branched repos; but going forward it's up to the owners to do it themselves if the patches need backports. Thanks for your help and congrats again! On Tue, May 26, 2020 at 6:16 PM Wesley Hayutin wrote: > Greetings, > > I'm pleased to announce that TripleO Ussuri has been released today. > Special thanks to Chandan Kumar and Emilien Macchi for the extra time and > effort in getting everything in place for the release. Thanks to everyone > else as well for your contributions, bug reports, patches and participation! > > Some highlights from the TripleO release: > * The mistral to ansible conversion blueprint is complete > * The nova-less-deploy feature is complete > * The TripleO-Operator-Ansible feature is complete > * The RDO and CI teams completed work on component based DLRN rpm repos > and CI. > * TLS and TripleO deployments tested together in upstream CI. > * Support for CentOS-8 > * Many other improvements and updates > > Congratulations to the following individual contributors for their reviews > across ALL openstack projects!! > > Alex Schultz, the #1 reviewer for all projects in Ussuri > Herve Beraud, the #3 reviewer for all projects in Ussuri > Emilien Macchi, the #4 reviewer for all projects in Ussuri > > Followed closely behind in reviews by > Stephen Finucane > Dmitry Tantsur > Slawek Kaplonski > > Thanks to all our TripleO contributors!! 
> > https://www.stackalytics.com/?release=ussuri&metric=commits&project_type=openstack&module=tripleo-group > > Oh by the way TripleO fixed 558 bugs this release as well :) > > Additional statistics can be found here: > > https://docs.google.com/presentation/d/1RV30OVxmXv1y_z33LuXMVB56TA54Urp7oHIoTNwrtzA/ > > > > Release notes can be found per TripleO project: > https://docs.openstack.org/releasenotes/ > > At the time of writing this email not all the release notes have been > published. > > Thanks all and I'm looking forward to seeing you at the PTG! > > > > > > > > > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Wed May 27 06:04:39 2020 From: ramishra at redhat.com (Rabi Mishra) Date: Wed, 27 May 2020 11:34:39 +0530 Subject: [tripleo][operators] Removal of mistral from the TripleO Undercloud In-Reply-To: References: Message-ID: On Sat, May 23, 2020 at 8:37 PM John Fulton wrote: > Update: > > The underlying Ansible to support derived parameters within a TripleO > deployment has merged [1]. We now only need to call and write some new > Ansible modules to do the derivation. Then we can merge the THT change > [2] and this feature's migration from Mistral will be complete. Until > then anyone who wants this feature needs to enable and use Mistral on > the undercloud. > > I would have liked the feature to have been migrated completely (rather than leaving placeholders) before disabling mistral on the undercloud by default. If the NFV/HCI teams don't have any issues with the current plan, then it's probably ok :) At least we should document it somewhere so that it's not missed during upgrades.
> For anyone who wants to write new Ansible modules for derived params, > now that the merge is done, if you build an undercloud from TripleO > master and deploy with -p with the new plan-environment [2], the > derived_parameters will be in the deployment plan and whatever that > parameter contains will be applied as usual. The merged [1] patch has > a clear indication of where [3] to call the new modules. > > Molecule tests the feature and has mock data so Saravanan, Jaganathan, > and I should be able to develop and test [4] the new modules. I've > documented how I extracted the mock data from my real deployment > (based on Kevin's example) and shrank them in a personal repos [5] > just in case the NFV team finds that information useful for the NFV > modules and need to bring in more mock data. I'm personally planning > to start the HCI derive params Ansible module after the PTG. > > Let's keep this thread updated during the Victoria cycle so that we > can get the feature completely migrated from Mistral. 
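To make the derive-params idea above a bit more concrete, here is a rough, self-contained sketch of the kind of HCI calculation such a module could perform: reserve memory and CPU for the Ceph OSDs and leave the rest for Nova guests. To be clear, the function name, defaults, and formula below are all illustrative assumptions for this email, not the actual tripleo-ansible module; check the merged role [1] for the real logic:

```python
# Illustrative sketch only -- NOT the actual tripleo-ansible module.
# Numbers, parameter names, and the formula are assumptions for the
# example; the THT parameter names should be checked against your
# templates.

def derive_hci_parameters(total_memory_mb, total_cpus, num_osds,
                          average_guest_memory_mb=2048,
                          mem_per_osd_mb=5120,
                          cpus_per_osd=1.0):
    """Reserve host resources for Ceph OSDs on a hyperconverged node."""
    # Memory left for guests after each OSD takes its share.
    guest_memory_mb = total_memory_mb - num_osds * mem_per_osd_mb
    num_guests = int(guest_memory_mb // average_guest_memory_mb)
    # Nova should reserve whatever cannot be used by guests.
    reserved_mb = total_memory_mb - num_guests * average_guest_memory_mb
    # Scale the CPU allocation ratio down so the scheduler only packs
    # guests onto the CPU capacity not consumed by the OSDs.
    ratio = (total_cpus - num_osds * cpus_per_osd) / total_cpus
    return {
        "NovaReservedHostMemory": int(reserved_mb),
        "NovaCpuAllocationRatio": round(ratio, 2),
    }

# Example: a 256 GB, 56-thread node running 10 OSDs.
print(derive_hci_parameters(262144, 56, 10))
# {'NovaReservedHostMemory': 51200, 'NovaCpuAllocationRatio': 0.82}
```

If values like these need to land in the deployment plan, pushing the resulting dict through the plan-update machinery discussed in this thread would be the obvious route.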
> > Thanks, > John > > [1] https://review.opendev.org/#/c/719466 > [2] https://review.opendev.org/#/c/714217 > [3] > https://opendev.org/openstack/tripleo-ansible/src/commit/f99dea3b508f345e96a0e27e0250523e826eadbb/tripleo_ansible/roles/tripleo_derived_parameters/tasks/main.yml#L237 > [4] cd tripleo-ansible; ./scripts/run-local-test tripleo_derived_param > [5] https://github.com/fultonj/ussuri/tree/master/derive/data (I think > a personal repos is sufficient but let me know if you want me to put > it in the actual project; seems too ad hoc for TripleO itself to me) > > On Tue, Apr 28, 2020 at 8:10 PM John Fulton wrote: > > > > On Fri, Mar 20, 2020 at 6:07 PM John Fulton wrote: > > > > > > On Thu, Mar 19, 2020 at 5:37 AM Saravanan KR > wrote: > > > > > > > > On Thu, Mar 19, 2020 at 1:02 AM John Fulton > wrote: > > > > > > > > > > On Sat, Mar 14, 2020 at 8:06 AM Rabi Mishra > wrote: > > > > > > > > > > > > On Sat, Mar 14, 2020 at 2:10 AM John Fulton > wrote: > > > > > >> > > > > > >> On Fri, Mar 13, 2020 at 3:27 PM Kevin Carter < > kecarter at redhat.com> wrote: > > > > > >> > > > > > > >> > Hello stackers, > > > > > >> > > > > > > >> > In the pursuit to remove Mistral from the TripleO undercloud, > we've discovered an old capability that we need to figure out how best to > handle. Currently, we provide the ability for an end-user (operator / > deployer) to pass in "N" Mistral workflows as part of a given deployment > plan which is processed by python-tripleoclient at runtime [0][1]. From > what we have documented, and what we can find within the code-base, we're > not using this feature by default. That said, we do not remove something if > it is valuable in the field without an adequate replacement. The ability to > run arbitrary Mistral workflows at deployment time was first created in > 2017 [2] and while present all this time, its documented [3] and > intra-code-base uses are still limited to samples [4]. 
> > > > > >> > > > > > >> As it stands now, we're on track to making Mistral inert this > cycle and if our progress holds over the next couple of weeks the > capability to run arbitrary Mistral workflows will be the only thing left > within our codebase that relies on Mistral running on the Undercloud. > > > > > >> > > > > > >> > > > > > > >> > So the question is what do we do with functionality. Do we > remove this ability out right, do we convert the example workflow [5] into > a stand-alone Ansible playbook and change the workflow runner to an > arbitrary playbook runner, or do we simply leave everything as-is and > deprecate it to be removed within the next two releases? > > > > > > > > > > > > > > > > > > Yeah, as John mentioned, > tripleo.derive_params.v1.derive_parameters workflow is surely being used > for NFV (DPDK/SR-IOV) and HCI use cases and can't be deprecated or > dropped. Though we've a generic interface in tripleoclient to run any > workflow in plan-environment, I have not seen it being used for anything > other than the mentioned workflow. > > > > > > > > > > > > In the scope of 'mistral-ansible' work, we seem to have two > options. > > > > > > > > > > > > 1. Convert the workflow to ansible playbook 'as-is' i.e > calculate and merge the derived parameters in plan-environment and as > you've mentioned, change tripleoclient code to call any playbook in > plan-environment.yaml and the parameters/vars. > > > > > > > > > > Nice idea. I hadn't thought of that. > > > > > > > > > > If there's a "hello world" example of this (which results in a THT > > > > > param in the deployment plan being set to "hello world"), then I > could > > > > > try writing an ansible module to derive the HCI parameters and set > > > > > them in place of the "hello world". > > > > > > > > > I am fine with the approach, but the only concern is, we have plans > to > > > > remove Heat in the coming cycles. 
One of inputs for the Mistral > derive > > > > parameters is fetched from the heat stack. If we are going to retain > > > > it, then it has to be re-worked during the Heat removal. Mistral to > > > > ansible could be the first step towards it. > > > > > > Hey Saravanan, > > > > > > That works for me. I'm glad we were able to come up with a way to do > this. > > > > > > Kevin put some patches together today that will help a lot on this. > > > > > > 1. tht: https://review.opendev.org/#/c/714217/ > > > 2. tripleo-ansible: https://review.opendev.org/#/c/714232/ > > > 3. trilpeoclient: https://review.opendev.org/#/c/714198/ > > > > > > If I put these on my undercloud, then I think I can run: > > > > > > 'openstack overcloud deploy ... -p > plan-environment-derived-params.yaml' > > > > > > as usual and then the updated tripleoclient and tht patch should > > > trigger the new tripleo-ansible playbook in place of the Mistral > > > workbook. > > > > > > I think I can then just update that tripleo-ansible patch to have it > > > include a new derive_params_hci role and add a new derive_params_hci > > > module where I'll stick code from the original Python prototype I did > > > for it. I'll probably just shell to `openstack baremetal introspection > > > data save ID` from ansible to get the Ironic data. I'll give it a try > > > next week and update this thread. Even if Heat is not in the flow, at > > > least the Ansible role and module can be reused. > > > > > > Note that it uses the new tripleo_plan_parameters_update module that > > > Rabi wrote so that should make it easier to deal with the deployment > > > plan itself (https://review.opendev.org/712604). 
> > > > Kevin and Rabi have made a lot of progress and with their unmerged > > patches [1] 'openstack overcloud deploy -p > > plan-environment-derived-params.yaml' has Ansible running a > > playbook [2] with placeholders for us to use for derive params, which > > looks like it will push the changes back to the deployment plan as > > discussed above. > > > > As far as I can tell 'openstack overcloud deploy -p > > plan-environment-derived-params.yaml' from master won't work the old > > way as Mistral isn't in the picture anymore when 'openstack overcloud > > deploy -p' is run (someone please correct me if I'm wrong). Thus, > > derive params are not going to work with the U release unless we > > finish the above. I don't think we should undo the progress. We're now > > in the RC so time is short. As long as we still ship the Mistral > > container on the undercloud in U, a deployer could have it derive the > > params in theory if the workbook is run manually and the resultant > > params are applied as overrides. Do we GA with that as a known issue > > with a workaround and then circle back to fix the above and backport it? > > I don't think we should delay the release for it. I think we should > > instead push harder on getting the playbook and roles that Kevin > > started [2] (they could still land in U). > > > > John > > > > [1] https://review.opendev.org/#/q/topic:mistral_to_ansible+(status:open+OR+status:merged) > > [2] https://review.opendev.org/#/c/719466/13/tripleo_ansible/roles/tripleo_derived_parameters/tasks/main.yml at 170 > > > > > > > > John > > > > > > > Regards, > > > > Saravanan KR > > > > > > > > > John > > > > > > > > > > > 2. Move the functionality further down the component chain in > TripleO to have the required ansible host/group_vars set for them to be > used by config-download playbooks/ansible/puppet. > > > > > > > > > > > > I guess option 1 would be easier within the timelines.
I've done > some preliminary work to move some of the functionality in relevant mistral > actions to utils modules[1], so that they can be called from ansible > modules without depending on mistral/mistral-lib and use those in a > playbook that kinda replicate the tasks in the mistral workflow. > > > > > > > > > > > > Having said that, it would be good to know what the DFG:NFV > folks think, as they are the original authors/maintainers of that workflow. > > > > > > > > > > > > > > > > > > > > > > > >> The Mistral based workflow took advantage of the deployment > plan which > > > > > >> was stored in Swift on the undercloud. My understanding is that > too is > > > > > >> going away. > > > > > > > > > > > > > > > > > > I'm not sure that would be in the scope of 'mstral-to-ansible' > work. Dropping swift would probably be a bit more complex, as we use it to > store templates, plan-environment, plan backups (possibly missing few more) > etc and would require significant design rework (may be possible when we > get rid of heat in undercloud). In spite of heat using the templates from > swift and merging environments on the client side, we've had already bumped > heat's REST API json body size limit (max_json_body_size) on the undercloud > to 4MB[2] from the default 1MB and sending all required templates as part > of API request would not be a good idea from undercloud scalability pov. > > > > > > > > > > > > [1] https://review.opendev.org/#/c/709546/ > > > > > > [2] > https://github.com/openstack/tripleo-heat-templates/blob/master/environments/undercloud.yaml#L109 > > > > > > > > > > > > -- > > > > > > Regards, > > > > > > Rabi Mishra > > > > > > > > > > > > > > > > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ildiko.vancsa at gmail.com Wed May 27 07:42:37 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 27 May 2020 09:42:37 +0200 Subject: [cyborg] Edge sync up at the PTG Message-ID: <861D5AE5-B388-41E3-BEBD-BAB4B628BBDD@gmail.com> Hi Cyborg Team, I’m reaching out to check if you are available during next week to continue discussions on edge related topics that we started in Shanghai. The OSF Edge Computing Group is meeting next week and we have a slot reserved to sync up with OpenStack projects such as Cyborg on Wednesday (June 3) at 1300 UTC - 1400 UTC. Would the team be available to join at that time? Thanks, Ildikó From amoralej at redhat.com Wed May 27 08:06:44 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Wed, 27 May 2020 10:06:44 +0200 Subject: [rdo-users] [TripleO][rdo][train][centos7] fails at ovn_south_db_server: Start containers for step 4 using paunch In-Reply-To: References: Message-ID: On Tue, May 26, 2020 at 11:02 PM Ruslanas Gžibovskis wrote: > hi all again. > > I still and still and again fail on similar steps. > Using CentOS7 and eth# as net device names. > still running on tripleo-ansible version number 4.1.0 (instead of 5.0.0) > which should be fixed [0]. > You mentioned tripleo-ansible but the link is for a new release of paunch. I understand you need a build for paunch-5.3.2. We are in the process and building it and pushing to train repos in centos mirrors. In the meanwhile you can get the python2-paunch package from https://trunk.rdoproject.org/centos7-train/consistent/ > > 1) failing on OVN config, should I try not using OVN? just OVS? > > 2) first most annoying problem is that on a first deployment it cannot > connect to the controller/compute, but I can ssh using ssh -l heat-admin IP > BUT, on the second launch it is always able to connect from first attempt. 
> > 3) When using undercloud as repo for container images, it fails, as > undercloud ins not added to insecure registries, how should I add it? > should it be specified in undercloud.conf as insecure registry? > > could you please give me some hints, please? > > > external links: > [0] https://review.opendev.org/#/c/725783 > > > On Wed, 13 May 2020 at 20:14, Ruslanas Gžibovskis > wrote: > >> Hi all, >> >> I am running this deployment described here [1]. >> I am getting error on: Start containers for step 4 using paunch >> Error in link bellow [2] >> Running podman containers [3] >> >> >> links: >> 1 - https://github.com/qw3r3wq/homelab/tree/master/overcloud >> 2 - https://pastebin.com/HTUbz7Ry >> 3 - https://pastebin.com/1ApfiEyE >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > _______________________________________________ > users mailing list > users at lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/users > > To unsubscribe: users-unsubscribe at lists.rdoproject.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Wed May 27 09:40:23 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 27 May 2020 11:40:23 +0200 Subject: [rdo-users] [TripleO][rdo][train][centos7] fails at ovn_south_db_server: Start containers for step 4 using paunch In-Reply-To: References: Message-ID: @Alfredo > tripleo-ansible-5.0.0 use some function, which was not added to paunch lib. On Wed, 27 May 2020 at 10:07, Alfredo Moralejo Alonso wrote: > > > On Tue, May 26, 2020 at 11:02 PM Ruslanas Gžibovskis > wrote: > >> hi all again. >> >> I still and still and again fail on similar steps. >> Using CentOS7 and eth# as net device names. >> still running on tripleo-ansible version number 4.1.0 (instead of 5.0.0) >> which should be fixed [0]. >> > > You mentioned tripleo-ansible but the link is for a new release of paunch. 
> I understand you need a build for paunch-5.3.2. We are in the process of > building it and pushing it to the train repos in the centos mirrors. In the meantime > you can get the python2-paunch package from > https://trunk.rdoproject.org/centos7-train/consistent/ > > >> >> 1) failing on OVN config, should I try not using OVN? just OVS? >> >> 2) first most annoying problem is that on a first deployment it cannot >> connect to the controller/compute, but I can ssh using ssh -l heat-admin IP >> BUT, on the second launch it is always able to connect from first attempt. >> >> 3) When using undercloud as repo for container images, it fails, as >> undercloud is not added to insecure registries, how should I add it? >> should it be specified in undercloud.conf as insecure registry? >> >> could you please give me some hints? >> >> >> external links: >> [0] https://review.opendev.org/#/c/725783 >> >> >> On Wed, 13 May 2020 at 20:14, Ruslanas Gžibovskis >> wrote: >> >>> Hi all, >>> >>> I am running this deployment described here [1]. >>> I am getting error on: Start containers for step 4 using paunch >>> Error in link below [2] >>> Running podman containers [3] >>> >>> >>> links: >>> 1 - https://github.com/qw3r3wq/homelab/tree/master/overcloud >>> 2 - https://pastebin.com/HTUbz7Ry >>> 3 - https://pastebin.com/1ApfiEyE >>> -- >>> Ruslanas Gžibovskis >>> +370 6030 7030 >>> >> >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 >> _______________________________________________ >> users mailing list >> users at lists.rdoproject.org >> http://lists.rdoproject.org/mailman/listinfo/users >> >> To unsubscribe: users-unsubscribe at lists.rdoproject.org >> > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From berndbausch at gmail.com Wed May 27 11:26:30 2020 From: berndbausch at gmail.com (Bernd Bausch) Date: Wed, 27 May 2020 20:26:30 +0900 Subject: [glance] "true is not of type boolean" when creating protected image Message-ID: This problem occurs on a stable Ussuri Devstack. I issue this command to create a protected image: $ openstack image create myimage2 --file ~/myvm1.qcow2 --disk-format qcow2  --protected and get this error message back: BadRequestException: 400: Client Error for url: http://192.168.1.200/image/v2/images, 400 Bad Request: On instance['protected']:: 'type': 'boolean'}: {'description': 'If true, image will not be deletable.',: 'True': Provided object does not match schema 'image': 'True' is not of type 'boolean': Failed validating 'type' in schema['properties']['protected']: It seems that image creation fails because 'True' is not of type 'boolean' Am I looking at a bug or am I doing something wrong? I do have a great workaround: First, create the image without the protected flag. Then: $ openstack image set --protected myimage2 This works. From ruslanas at lpic.lt Wed May 27 11:37:57 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 27 May 2020 13:37:57 +0200 Subject: [rdo-users] [TripleO][rdo][train][centos7] fails at ovn_south_db_server: Start containers for step 4 using paunch In-Reply-To: References: Message-ID: latest execution (3rd openstack deploy command if to be a bit more precise): https://pastebin.com/nKbqZRX0 < this is paunch.log on controller https://pastebin.com/NP8DNgQ7 < this is ansible output from deploy command. I have issues with OVN deployment, Is there an option to use OVS (as I understand OVS component was replaced by OVN?). 
Deploy command: _THT="/usr/share/openstack-tripleo-heat-templates" _LTHT="$(pwd)" time openstack --verbose overcloud deploy \ --templates \ --stack rem0te \ -r ${_LTHT}/roles_data.yaml \ -n ${_LTHT}/network_data.yaml \ -e ${_LTHT}/containers-prepare-parameter.yaml \ -e ${_LTHT}/overcloud_images.yaml \ -e ${_THT}/environments/disable-telemetry.yaml \ -e ${_THT}/environments/host-config-and-reboot.yaml \ --ntp-server config files: 1 - https://github.com/qw3r3wq/homelab/tree/master/overcloud Thank you for reading up to here. ;) double thank you for reply -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed May 27 13:18:42 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 27 May 2020 15:18:42 +0200 Subject: [ironic] [ptg] PTG week schedule Message-ID: Hi ironicers! The PTG is next week, and I'd like to share our plans for it. We don't have anything planned on Monday, and the weekly meeting is cancelled as well. 2-hour slots are reserved Tuesday to Friday 14:00 UTC - 16:00 UTC in the Liberty room. The up-to-date schedule and notes are available on https://etherpad.opendev.org/p/Ironic-VictoriaPTG-Planning, the current version is as follows: Tue, Jun 2nd: 14:00-15:00: Releases, perceptions and our place in the world: governance and perceptions 15:00-16:00: Releases, perceptions and our place in the world: standalone future Wed, Jun 3rd: 14:00-15:00: Deployment Efficiency 15:00-16:00: Aborting non-waiting transient states Thu, Jun 4th: 14:00-15:00: Fishy future 15:00-16:00: IPAM integrations outside of Neutron Fri, Jun 5th: 14:00-15:00: Operator suggestions/feedback/pain points 15:00-16:00: Ideas and brainstorming I would like to highlight the operator feedback slot on Friday at 14:00 UTC. We would like to hear more opinions from people running ironic! Please join us. Hear from you soon, Dmitry -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alfredo.deluca at gmail.com Wed May 27 12:03:32 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Wed, 27 May 2020 14:03:32 +0200 Subject: [kolla-ansible] External Ceph and cinder-volumes In-Reply-To: References: <9aae5872-2037-18b3-7646-5ac4615b87ed@gmail.com> <916e71c9-0e44-68c2-7381-bfc5b01ae846@gmail.com> Message-ID: Hi all. It was my mistake to not check that the globals.yml was the right one on the deployment server. All ok now and it's working. Cheers On Tue, May 26, 2020 at 7:22 PM Alfredo De Luca wrote: > It fails at kolla-ansible prechecks and in particular > > [image: image.png] > > On Tue, May 26, 2020 at 7:15 PM Radosław Piliszek < > radoslaw.piliszek at gmail.com> wrote: >> On 2020-05-26 18:30, Alfredo De Luca wrote: >> > I have attached the globals.yml as you suggested. >> >> And the failing task? >> >> -yoctozepto >> >> > > -- > */Alfredo* > > -- */Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 69164 bytes Desc: not available URL: From tobias.urdin at binero.com Wed May 27 14:29:51 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Wed, 27 May 2020 14:29:51 +0000 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <1589966309759.42370@binero.com> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> , <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org>, <1589966309759.42370@binero.com> Message-ID: <1590589791908.4669@binero.com> Hello Thomas, Please see the suggestion below.
Best regards ________________________________________ From: Tobias Urdin Sent: Wednesday, May 20, 2020 11:18 AM To: openstack-discuss at lists.openstack.org Subject: Re: [puppet] Puppet 5 is officially unsupported in Victoria release Hello, I'm thinking that maybe we can compromise on certain points here. I would say it's pretty certain we would not break any Puppet 5 code in itself in the Victoria cycle since we are already behind most of the new features anyways. What if: * We officially state Puppet 5 is unsupported * Remove Puppet 5 as supported in metadata.json * Only run Puppet 6 integration testing for CentOS and Ubuntu But we: * Keep a promise to not break Puppet 5 usage in Victoria * Keep Puppet 5 unit/syntax testing (as well as Puppet 6 of course) * Debian can run integration testing with Puppet 5 if you fix those up The benefit here is that we would not expose new consumers to use Puppet 5 but there are also drawbacks in that: * You cannot use Puppet 5 and do a "puppet module install" since metadata.json would cause Puppet 5 to not be supported (a note here is that we don't even test that this is possible with the current state of the modules, i.e. checking for conflicts or faulty dependencies in the metadata.json files) Since the only issue here is that downstream Debian wants to use Puppet 5 I think this is a fair compromise, and since you package the Puppet modules in Debian I assume you don't need any support being stated in metadata.json for Puppet 5 explicitly. What do you think? Best regards Tobias ________________________________________ From: Thomas Goirand Sent: Monday, May 11, 2020 5:37 PM To: openstack-discuss at lists.openstack.org Subject: Re: [puppet] Puppet 5 is officially unsupported in Victoria release On 5/11/20 2:03 PM, Takashi Kajinami wrote: > Hi, > > > IIUC the most important reason behind puppet 5 removal is that puppet 5 > is EOLed soon, this month.
> https://puppet.com/docs/puppet/latest/about_agent.html > > As you know puppet-openstack has some external dependencies, this can > cause problems with our support for puppet 5. > For example if any dependencies remove their compatibility with puppet 5, > we should pin all of them to keep puppet 5 tests running. > This is the biggest concern I know about keeping puppet 5 support. > > While it makes sense to use puppet 5 for existing stable branches from a > stable > management perspective, I don't think it's actually reasonable to extend > support > for EOLed stuff in master development with possibly adding pins to old > modules. > IMO we can delay the actual removal a bit until puppet 6 gets ready in > Debian, > but I'd like to hear some actual plans to have puppet 6 available in Debian > so that we can expect a short gap around puppet 5 eol timing, between > puppet-openstack > and puppet itself. > > Thank you, > Takashi Thank you, a bit more time is the only thing I was asking for! About the plan for packaging Puppet 6 in Debian: I don't know yet, as one will have to do the work, and that's probably going to be me, since nobody is volunteering... :( Now, about dependencies: if supporting Puppet 5 gets in the way of using a newer dependency, then I suppose we can try to manage this when it happens. Worst case: forget about Puppet 5 if we get into such a bad situation. Until we're there, let's hope it doesn't happen too soon. I can tell you when I know more about the amount of work that there is to do. At the moment, it's still a bit blurry to me.
Cheers, Thomas Goirand (zigo) From nicolas.ghirlanda at everyware.ch Wed May 27 14:47:13 2020 From: nicolas.ghirlanda at everyware.ch (Nicolas Ghirlanda) Date: Wed, 27 May 2020 16:47:13 +0200 Subject: [Nova][CentOS] extrem slow first boot Centos 8.1 with failures Message-ID: <1fd7bf7a-140b-3bfa-5dde-614df92cdddb@everyware.ch> Hello all, I created a centos image with a pretty similar kickstart config like from that link https://github.com/CentOS/sig-cloud-instance-build/pull/159/commits/2c542bb2b1bc54c007fbf57a5da0a3213ce73fbb Packer is building the image fine, also the post installation tasks after a reboot in qemu (which is running fine and fast), so everything looks good so far. Then upload to openstack and on first boot, the instance is booting fine until that point of boot: [[0;32m OK [0m] Started D-Bus System Message Bus. [[0;32m OK [0m] Started Hardware RNG Entropy Gatherer Daemon. Starting OpenSSH rsa Server Key Generation... Starting System Security Services Daemon... [ 9.913211] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 [ 9.988763] input: PC Speaker as /devices/platform/pcspkr/input/input6 [ 10.137969] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 10.332234] cirrus 0000:00:02.0: vgaarb: deactivate vga console [ 10.419654] Console: switching to colour dummy device 80x25 [ 10.421488] [TTM] Zone kernel: Available graphics memory: 2017692 kiB [ 10.422946] [TTM] Initializing pool allocator [ 10.424036] [TTM] Initializing DMA pool allocator [ 10.425630] [drm] fb mappable at 0xFC000000 [ 10.426612] [drm] vram aper at 0xFC000000 [ 10.427436] [drm] size 33554432 [ 10.428108] [drm] fb depth is 16 [ 10.428884] [drm] pitch is 2048 [ 10.449202] fbcon: cirrusdrmfb (fb0) is primary device [ 10.465230] Console: switching to colour frame buffer device 128x48 [ 10.499394] cirrus 0000:00:02.0: fb0: cirrusdrmfb frame buffer device [ 10.524033] [drm] Initialized cirrus 1.0.0 20110418 for 0000:00:02.0 on minor 0 On the console there is no further 
update from here. In the boot log it slowly progresses, but with Failures of starting services e.g. Starting RPC Bind... Starting Security Auditing Service... [ 9.500362] audit: type=1400 audit(1590589814.846:4): avc: denied { read } for pid=784 comm="auditd" name="group" dev="sda1" ino=259654 scontext=system_u:system_r:auditd_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=0 [ 9.517988] audit: type=1400 audit(1590589814.863:5): avc: denied { read } for pid=783 comm="rpcbind" name="passwd" dev="sda1" ino=259634 scontext=system_u:system_r:rpcbind_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=0 [[0;1;31mFAILED[0m] Failed to start RPC Bind. See 'systemctl status rpcbind.service' for details. or [[0;32m OK [0m] Started OpenSSH ecdsa Server Key Generation. [[0;32m OK [0m] Started GSSAPI Proxy Daemon. [[0;32m OK [0m] Started OpenSSH ed25519 Server Key Generation. [[0;32m OK [0m] Started OpenSSH rsa Server Key Generation. [[0;1;31mFAILED[0m] Failed to start Authorization Manager. See 'systemctl status polkit.service' for details. [[0;1;33mDEPEND[0m] Dependency failed for Dynamic System Tuning Daemon. [[0;1;31mFAILED[0m] Failed to start NTP client/server. See 'systemctl status chronyd.service' for details. Also cloud-init is not able to assign an ip. [[0;32m OK [0m] Started D-Bus System Message Bus. Starting Network Manager... [ 550.962041] cloud-init[1070]: Cloud-init v. 18.5 running 'init' at Wed, 27 May 2020 14:37:46 +0000. Up 460.76 seconds. [ 550.964516] cloud-init[1070]: ci-info: +++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++ [ 550.967275] cloud-init[1070]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+ [ 550.970075] cloud-init[1070]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | [ 550.972524] cloud-init[1070]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+ [ 550.975221] cloud-init[1070]: ci-info: | ens3 | False | . 
| . | . | fa:16:3e:a3:7d:d5 | [ 550.977970] cloud-init[1070]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . | [ 550.980623] cloud-init[1070]: ci-info: | lo | True | ::1/128 | . | host | . | [ 550.983300] cloud-init[1070]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+ [ 550.986164] cloud-init[1070]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++ [ 550.988469] cloud-init[1070]: ci-info: +-------+-------------+---------+-----------+-------+ [ 550.991092] cloud-init[1070]: ci-info: | Route | Destination | Gateway | Interface | Flags | [ 550.993576] cloud-init[1070]: ci-info: +-------+-------------+---------+-----------+-------+ [ 550.996326] cloud-init[1070]: ci-info: +-------+-------------+---------+-----------+-------+ After that time, the system is up, but no hostname is set (due to cloud-init issues). If I now reboot, everything is working fine, cloud-init is working etc. If I set selinux from enforcing to permissive in the kickstart config, the issues are not showing up, but I need selinux to stay enforcing. Anyone with the same issues or any hints appreciated. regards Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2818 bytes Desc: not available URL: From nicolas.ghirlanda at everyware.ch Wed May 27 15:15:13 2020 From: nicolas.ghirlanda at everyware.ch (Nicolas Ghirlanda) Date: Wed, 27 May 2020 17:15:13 +0200 Subject: [Nova][CentOS] extrem slow first boot Centos 8.1 with failures In-Reply-To: <1fd7bf7a-140b-3bfa-5dde-614df92cdddb@everyware.ch> References: <1fd7bf7a-140b-3bfa-5dde-614df92cdddb@everyware.ch> Message-ID: Update - I'm running virt-sysprep  and virt-sparsify after the image is built. One of those must cause the issue.. I will update once identified. 
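One thing worth checking here (an assumption based on the unlabeled_t AVC denials above, not something confirmed in this thread): virt-sysprep can leave files without valid SELinux labels, and scheduling a relabel usually repairs exactly this kind of first-boot slowness. A sketch of the two usual approaches, with a hypothetical image name:

```shell
# Option 1: relabel the filesystem as part of sysprep
# (libguestfs provides a --selinux-relabel option for this)
virt-sysprep --selinux-relabel -a centos8.qcow2

# Option 2: force a full relabel on the instance's first boot
# by creating /.autorelabel inside the image
virt-customize -a centos8.qcow2 --run-command 'touch /.autorelabel'
```

Either way, the relabel happens before (or instead of) the avalanche of denials, so auditd, rpcbind, polkit, and cloud-init should start normally with enforcing mode kept on.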
cheers On 27.05.20 16:47, Nicolas Ghirlanda wrote: > > > Hello all, > > > I created a centos image with a pretty similar kickstart config like > from that link > > https://github.com/CentOS/sig-cloud-instance-build/pull/159/commits/2c542bb2b1bc54c007fbf57a5da0a3213ce73fbb > > > Packer is building the image fine, also the post installation tasks > after a reboot in qemu (which is running fine and fast), so everything > looks good so far. > > > Then upload to openstack and on first boot, the instance is booting > fine until that point of boot: > > > [[0;32m OK [0m] Started D-Bus System Message Bus. > [[0;32m OK [0m] Started Hardware RNG Entropy Gatherer Daemon. > Starting OpenSSH rsa Server Key Generation... > Starting System Security Services Daemon... > [ 9.913211] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 > [ 9.988763] input: PC Speaker as /devices/platform/pcspkr/input/input6 > [ 10.137969] sd 0:0:0:0: Attached scsi generic sg0 type 0 > [ 10.332234] cirrus 0000:00:02.0: vgaarb: deactivate vga console > [ 10.419654] Console: switching to colour dummy device 80x25 > [ 10.421488] [TTM] Zone kernel: Available graphics memory: 2017692 kiB > [ 10.422946] [TTM] Initializing pool allocator > [ 10.424036] [TTM] Initializing DMA pool allocator > [ 10.425630] [drm] fb mappable at 0xFC000000 > [ 10.426612] [drm] vram aper at 0xFC000000 > [ 10.427436] [drm] size 33554432 > [ 10.428108] [drm] fb depth is 16 > [ 10.428884] [drm] pitch is 2048 > [ 10.449202] fbcon: cirrusdrmfb (fb0) is primary device > [ 10.465230] Console: switching to colour frame buffer device 128x48 > [ 10.499394] cirrus 0000:00:02.0: fb0: cirrusdrmfb frame buffer device > [ 10.524033] [drm] Initialized cirrus 1.0.0 20110418 for 0000:00:02.0 on minor 0 > > > On the console there is no further update from here. > > In the boot log it slowly progresses, but with Failures of starting > services > > e.g. > > > Starting RPC Bind... > Starting Security Auditing Service... 
> [ 9.500362] audit: type=1400 audit(1590589814.846:4): avc: denied { read } for pid=784 comm="auditd" name="group" dev="sda1" ino=259654 scontext=system_u:system_r:auditd_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=0 > [ 9.517988] audit: type=1400 audit(1590589814.863:5): avc: denied { read } for pid=783 comm="rpcbind" name="passwd" dev="sda1" ino=259634 scontext=system_u:system_r:rpcbind_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=0 > [[0;1;31mFAILED[0m] Failed to start RPC Bind. > See 'systemctl status rpcbind.service' for details. > > or > > [[0;32m OK [0m] Started OpenSSH ecdsa Server Key Generation. > [[0;32m OK [0m] Started GSSAPI Proxy Daemon. > [[0;32m OK [0m] Started OpenSSH ed25519 Server Key Generation. > [[0;32m OK [0m] Started OpenSSH rsa Server Key Generation. > [[0;1;31mFAILED[0m] Failed to start Authorization Manager. > See 'systemctl status polkit.service' for details. > [[0;1;33mDEPEND[0m] Dependency failed for Dynamic System Tuning Daemon. > [[0;1;31mFAILED[0m] Failed to start NTP client/server. > See 'systemctl status chronyd.service' for details. > > > Also cloud-init is not able to assign an ip. > > [[0;32m OK [0m] Started D-Bus System Message Bus. > Starting Network Manager... > [ 550.962041] cloud-init[1070]: Cloud-init v. 18.5 running 'init' at Wed, 27 May 2020 14:37:46 +0000. Up 460.76 seconds. > [ 550.964516] cloud-init[1070]: ci-info: +++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++ > [ 550.967275] cloud-init[1070]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+ > [ 550.970075] cloud-init[1070]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | > [ 550.972524] cloud-init[1070]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+ > [ 550.975221] cloud-init[1070]: ci-info: | ens3 | False | . | . | . 
| fa:16:3e:a3:7d:d5 | > [ 550.977970] cloud-init[1070]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . | > [ 550.980623] cloud-init[1070]: ci-info: | lo | True | ::1/128 | . | host | . | > [ 550.983300] cloud-init[1070]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+ > [ 550.986164] cloud-init[1070]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++ > [ 550.988469] cloud-init[1070]: ci-info: +-------+-------------+---------+-----------+-------+ > [ 550.991092] cloud-init[1070]: ci-info: | Route | Destination | Gateway | Interface | Flags | > [ 550.993576] cloud-init[1070]: ci-info: +-------+-------------+---------+-----------+-------+ > [ 550.996326] cloud-init[1070]: ci-info: +-------+-------------+---------+-----------+-------+ > > > > After that time, the system is up, but no hostname is set (due to > cloud-init issues). > > > If I now reboot, everything is working fine, cloud-init is working etc. > > > If I set selinux from enforcing to permissive in the kickstart config, > the issues are not showing up, but I need selinux to stay enforcing. > > > Anyone with the same issues or any hints appreciated. > > > regards > > > Nicolas > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2818 bytes Desc: not available URL: From whayutin at redhat.com Wed May 27 15:36:33 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 27 May 2020 09:36:33 -0600 Subject: [tripleo] Ussuri containers Message-ID: Greetings, FYI we had a squatter in docker.io steal away the tripleoussuri namespace. We are investigating using quay but we need to do quite a bit of testing for it. For now please remind folks that the default container registry namespace is not tripleo$release for ussuri. We're using "tripleou" [1]. 
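For anyone updating their prepare configuration for the new namespace, the override being described might look roughly like this (a sketch assuming the standard ContainerImagePrepare format in containers-prepare-parameter.yaml; the name_prefix and tag shown are illustrative defaults, not values confirmed in this thread):

```yaml
parameter_defaults:
  ContainerImagePrepare:
  - set:
      # Ussuri containers now live under the "tripleou" namespace on docker.io
      namespace: docker.io/tripleou
      name_prefix: centos-binary-
      tag: current-tripleo
```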
If you want to pull from the rdo registry, you will find those containers in tripleoussuri for now [2], but they will be moved to use tripleou as well for consistency. Note the following update to tripleo-common as well [3]. The full story is that we attempted to contact docker.io support several times, as they have a policy around removing squatting namespaces. The support team never responded. Our goal is to drop docker.io and use quay.io, but we are taking a step-by-step, phased approach to that. Thanks all [1] https://hub.docker.com/u/tripleou [2] https://registry.rdoproject.org:8443/oapi/v1/namespaces/tripleoussuri/imagestreamtags/ [3] https://review.opendev.org/#/c/731244/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Wed May 27 15:40:28 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 27 May 2020 17:40:28 +0200 Subject: [sdk][osc][cli][ptg][project_cleanup] PTG Week planning Message-ID: <792968FC-D75F-47D3-BF98-97A7D6FF7182@gmail.com> Hi everyone interested :) The PTG planning for the OpenStackSDK and the CLI is there: https://etherpad.opendev.org/p/SDK-VictoriaPTG-Planning Please enter yourself there if interested and add topics you want to discuss. We have currently 2 slots: 01 June: 15:00 - 17:00 UTC Juno 03 June: 15:00 - 17:00 UTC Bexar And we plan to be using meetpad (fingers crossed). Interesting topics: - June 01 15:00-16:00 project cleanup - a long-standing topic across the projects - June 01 16:00-17:00 openstackclient-first - a one-time failed community goal, and now maybe a multi-release goal of joining all individual service clients into OSC to give users a seamless UX Hear from you soon, Artem -------------- next part -------------- An HTML attachment was scrubbed...
URL: From artem.goncharov at gmail.com Wed May 27 15:42:39 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 27 May 2020 17:42:39 +0200 Subject: [sdk][api-sig][ptg] Victoria PTG planning Message-ID: <1EC68B95-F20D-492E-B4E5-BEF09446788F@gmail.com> Hi everyone interested :) The PTG planning for the API-SIG is there: https://etherpad.opendev.org/p/API-SIG-VictoriaPTG-Planning Please enter yourself there if interested and add topics you want to discuss. We have currently 1 slot, so hurry up not to lose the chance: 02 June: 16:00 - 17:00 UTC Liberty And we plan to be using meetpad (fingers crossed). Hear from you soon, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed May 27 16:03:18 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 27 May 2020 12:03:18 -0400 Subject: [cinder][security] propose Rajat Dhasmana for cinder coresec Message-ID: <1ab081dc-ad7a-aa9a-ac49-652dc74888be@gmail.com> Jay Bryant is stepping down from the Cinder core security team due to time constraints.  I am proposing to replace him with Rajat Dhasmana (whoami-rajat on IRC).  Rajat has been a cinder core since January 2019 and is a thorough and respected reviewer and developer. Additionally, he'll extend our time zone coverage so that there will always be someone available to review/work on security patches around the clock. I intend to add Rajat to cinder coresec on Tuesday, 2 June; please communicate any concerns to me before then. cheers, brian From jungleboyj at gmail.com Wed May 27 16:13:14 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 27 May 2020 11:13:14 -0500 Subject: [cinder][security] propose Rajat Dhasmana for cinder coresec In-Reply-To: <1ab081dc-ad7a-aa9a-ac49-652dc74888be@gmail.com> References: <1ab081dc-ad7a-aa9a-ac49-652dc74888be@gmail.com> Message-ID: +1 Sorry I haven't had time to put into this lately.  Thank you Rajat for stepping up!
Jay On 5/27/2020 11:03 AM, Brian Rosmaita wrote: > Jay Bryant is stepping down from the Cinder core security team due to > time constraints.  I am proposing to replace him with Rajat Dhasmana > (whoami-rajat on IRC).  Rajat has been a cinder core since January > 2019 and is a thorough and respected reviewer and developer. > Additionally, he'll extend our time zone coverage so that there will > always be someone available to review/work on security patches around > the clock. > > I intend to add Rajat to cinder coresec on Tuesday, 2 June; please > communicate any concerns to me before then. > > > cheers, > brian > From balazs.gibizer at est.tech Wed May 27 17:05:07 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 27 May 2020 19:05:07 +0200 Subject: [nova][ptg] virtual PTG In-Reply-To: References: <6VQA9Q.TH47VQEIJBW43@est.tech> Message-ID: Hi, I've done some reorganization on the PTG etherpad [1] and assigned time slots for different discussions. Please check the etherpad and let me know about any issues with the timing or organization of the topics. Cheers, gibi [1] https://etherpad.opendev.org/p/nova-victoria-ptg On Tue, May 19, 2020 at 16:29, Balázs Gibizer wrote: > Hi, > > The PTG is two weeks from now. So I would like to encourage you to > look at the PTG etherpad [0]. If you have topics for the PTG then > please start discussing them now on the ML or in a spec. (See the > threads [1][2][3][4][5] I have already started.) Such preparation is > needed as we will only have limited time to conclude the topics > during the PTG.
> > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014916.html > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014917.html > [3] > http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014919.html > [4] > http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014921.html > [5] > http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014938.html > > On Wed, Apr 29, 2020 at 11:38, Balázs Gibizer > wrote: >> Hi, >> >> Based on the doodle I've booked the following slots for Nova, in the >> Rocky room[1]. >> >> * June 3 Wednesday 13:00 UTC - 15:00 UTC >> * June 4 Thursday 13:00 UTC - 15:00 UTC >> * June 5 Friday 13:00 UTC - 15:00 UTC >> >> I have synced with Slaweq and we agreed to have the Neutron - Nova >> cross project discussion on Friday form 13:00 UTC. >> >> If it turns out that we need more time then we can arrange extra >> slots per topic on the week after. >> >> Cheers, >> gibi >> >> [1] https://ethercalc.openstack.org/126u8ek25noy >> >> >> On Fri, Apr 24, 2020 at 16:28, Balázs Gibizer >>  wrote: >>> >>> >>> On Wed, Apr 15, 2020 at 10:26, Balázs Gibizer >>>  wrote: >>>> Hi, >>>> >>>> I need to book slots for nova discussions on the official >>>> schedule[2] of the virtual PTG[1]. I need two things to do >>>> that: >>>> >>>> 1) What topics we have that needs real-time discussion. Please add >>>> those to the etherpad [3] >>>> 2) Who wants to join such real-time discussion and what time slots >>>> works for you. Please fill the doodle[4] >>>> >>>> Based on the current etherpad content we need 2-3 slots for nova >>>> discussion and a half slot for cross project (current >>>> neutron) discussion. >>> >>> We refined our schedule during the Nova meeting [5]. Based on that >>> my current plan is to book one 2 hours slot for 3 consecutive >>> days (Wed-Fri) during the PTG week. 
I talked to Slaweq about a >>> neutron-nova cross project session. We agreed to book a one hour >>> slot for that. >>> >>> If you haven't filled the doodle[4] then please do so by early >>> next week. >>> >>> Thanks >>> gibi >>> >>>> >>>> Please try to provide your topics and options before >>>> Thursday's nova meeting. >>>> >>>> Cheers, >>>> gibi >>>> >>>> >>>> [1] >>>> http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014126.html >>>> [2] https://ethercalc.openstack.org/126u8ek25noy >>>> [3] https://etherpad.opendev.org/p/nova-victoria-ptg >>>> [4] https://doodle.com/poll/ermn3vxy9v53aayy >>>> >>> [5] >>> http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-04-23-16.00.log.html#l-119 >>> >>> >>> >> >> >> > > > From pramchan at yahoo.com Wed May 27 18:34:09 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Wed, 27 May 2020 18:34:09 +0000 (UTC) Subject: [all][interop WG] Call for Participationin for Virtual PTG interop WG and seek ambassadors References: <1183055408.566292.1590604449554.ref@mail.yahoo.com> Message-ID: <1183055408.566292.1590604449554@mail.yahoo.com> Hi all, We are looking for participation & presenters, contributors from different OpenStack and adjacent projects in Open Infrastructure, k8s, CNCF, OPNFV, ONAP, OCN, etc. Please register in the etherpad and review it for anything you would like to share wrt user-level APIs for Multi Cloud, Hybrid Cloud and Edge cloud Interop capability or compatibility. The bridge will be tested this Friday, May 29th, before we finalize the location or bridge. Add your name, topic, and your inputs for rapid delivery slots of, say, 5-10 minutes each on June 1st.
https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 Schedule June 1 (UTC 13-15 hrs = 6AM-8AM PDT = 9AM-11AM EST = 9PM-11PM BJT) The first hour will focus on project teams like Nova, Neutron, Cinder, Keystone, Glance & Swift. The second hour is open for new ideas. Additionally, we would like ambassadors who can register for and attend other Virtual PTG project sessions and give us feedback afterwards on what those projects can provide to the Interop WG. Thanks, For the Interop WG: Chair Prakash, Vice Chair Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed May 27 19:29:53 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 27 May 2020 21:29:53 +0200 Subject: [sdk][api-sig][ptg] Victoria PTG planning In-Reply-To: <1EC68B95-F20D-492E-B4E5-BEF09446788F@gmail.com> References: <1EC68B95-F20D-492E-B4E5-BEF09446788F@gmail.com> Message-ID: Hey On Wed, May 27, 2020 at 5:44 PM Artem Goncharov wrote: > Hi everyone interested :) > > The PTG planning for the API-SIG is there: > > https://etherpad.opendev.org/p/API-SIG-VictoriaPTG-Planning > > > Please enter yourself there if interested and add topics you want to > discuss. > Microversions support in OSC is a good potential topic, but I'm not sure I'll be able to chair it. Dmitry > > We have currently 1 slot, so hurry up not to lose the chance: > 02 June: 16:00 - 17:00 UTC Liberty > > And we plan to be using meetpad (fingers crossed). > > > > Hear from you soon, > Artem -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zigo at debian.org Wed May 27 20:36:17 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 27 May 2020 22:36:17 +0200 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: References: Message-ID: <885d5553-958c-0253-3b19-3b3c1ef6fa54@debian.org> On 5/26/20 8:06 PM, Kendall Nelson wrote: > For each virtual PTG meeting room, we will provide an official Zoom > videoconference room, which will be reused by various groups across the > day. Sorry for ranting, but I can't help it on this topic... I really think it's a shame that it's been decided on a non-free video conference service. Why aren't we self-hosting it, and using free software instead of Zoom, which on top of this had major security problems recently? Cheers, Thomas Goirand (zigo) From zigo at debian.org Wed May 27 20:45:59 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 27 May 2020 22:45:59 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <1589966309759.42370@binero.com> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> <1589966309759.42370@binero.com> Message-ID: <51ed8f75-fc3a-5539-daef-5693758c3d68@debian.org> On 5/20/20 11:18 AM, Tobias Urdin wrote: > Hello, > I'm thinking that maybe we can compromise on certain points here. > > I would say it's pretty certain we would not break any Puppet 5 code in itself in the > Victoria cycle since we are already behind most of the new features anyways.
> > What if: > > * We officially state Puppet 5 is unsupported > * Remove Puppet 5 as supported in metadata.json > * Only run Puppet 6 integration testing for CentOS and Ubuntu > > But we: > > * Keep a promise to not break Puppet 5 usage in Victoria > * Keep Puppet 5 unit/syntax testing (as well as Puppet 6 of course) > * Debian can run integration testing with Puppet 5 if you fix those up > > The benefit here is that we would not expose new consumers to use Puppet 5 > but there are also drawbacks in that: > > * You cannot use Puppet 5 and do a "puppet module install" since metadata.json > would cause Puppet 5 to not be supported (a note here is that we don't even test > that this is possible with the current state of the modules, i.e. checking for conflicts or faulty dependencies > in the metadata.json files) > > Since the only issue here is that downstream Debian wants to use Puppet 5 I think this is a fair > compromise and since you package the Puppet modules in Debian I assume you don't need any > support being stated in metadata.json for Puppet 5 explicitly. > > What do you think? > > Best regards > Tobias Hi Tobias, Thanks a lot for all of the above, with which I agree, except that I don't think we should remove Puppet 5 from metadata.json on all projects. I saw that today your patch to remove Puppet 5 testing was merged. That's a good thing. Today, I've been able to make scenario001 run successfully on a Debian VM, up to the second puppet run which shows idempotency. It later failed when running puppet, but that's because the tempest binary was set to /usr/bin/python3-tempest instead of /usr/bin/tempest. In other words, I'm on a very good track to get Debian support gating again like 2 years ago. This time, I won't give up on it and let it go, I promise! I'll also try to get a colleague to be a backup for me, so there can be two of us watching the gate if Debian fails.
FYI, to have Debian to work, we need these 2 patches: https://review.opendev.org/#/c/730881/ https://review.opendev.org/#/c/730851/ Until I get scenario001 fully working locally, these 2 are still WIP, I'll let you know on IRC when I'm done. Of course, gating in Debian is done with what I'm using, which is puppet from Debian, as you suggested. As for the packaging of Puppet in Debian Sid, the main issue is what I told you about: support for Ruby 2.7 in upstream puppet. Until this happens, it is very difficult for us to package Puppet 6. But I know upstream is actively working on it, so when it happens, I'll try to work on it myself. And when that's done, hopefully, I'll be able to backport the package to Buster, so we can gate on it. Thanks for your understanding and proposal, Cheers, Thomas Goirand (zigo) From cboylan at sapwetik.org Wed May 27 21:46:59 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 27 May 2020 14:46:59 -0700 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: <885d5553-958c-0253-3b19-3b3c1ef6fa54@debian.org> References: <885d5553-958c-0253-3b19-3b3c1ef6fa54@debian.org> Message-ID: On Wed, May 27, 2020, at 1:36 PM, Thomas Goirand wrote: > On 5/26/20 8:06 PM, Kendall Nelson wrote: > > For each virtual PTG meeting room, we will provide an official Zoom > > videoconference room, which will be reused by various groups across the > > day. This is what you snipped out of the previous email: > > OpenDev's Jitsi Meet instance is also available at > > https://meetpad.opendev.org as an alternative tooling option to create > > team-specific meetings. This is more experimental and may not work as > > well for larger groups, but has the extra benefit of including etherpad > > integration (the videoconference doubles as an etherpad document, > > avoiding jumping between windows). PTGBot can be used to publish the > > chosen meeting URL if you decide to not use the official Zoom one. 
The > > OpenDev Sysadmins can be reached in the #opendev IRC channel if any > > problems arise. > Sorry for ranting, but I can't help it on this topic... > > I really think it's a shame that it's been decided on a non-free video > conference service. Why aren't we self-hosting it, and using free > software instead of Zoom, which on top of this had major security > problems recently? As noted in the portion of the email you removed: we are. The OpenDev team has been working to ensure that Jitsi Meet is ready to go (just today we added in the ability to scale up jvb processes to balance load), but it is a very new service to us and alternatives are a good thing. We're going to try and estimate usage ahead of time and from that be ready for Monday. If you're testing it this week or having trouble next week please let us know. We're going to do our best, but it will likely be a learning experience. As another option my understanding is that Zoom supports quite large rooms which may be necessary for some groups that are meeting. And it is good to have a backup, particularly since we haven't done this before (and that applies in both directions, it is entirely possible some people using Zoom may find meetpad is a better fit and those trying meetpad may want to switch to Zoom for reasons). Zoom provides dial in numbers for much of the world (though not all of it), as well as providing a web client that doesn't require you to install any additional software beyond your web browser. 
Clark From tobias.urdin at binero.com Wed May 27 21:54:52 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Wed, 27 May 2020 21:54:52 +0000 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <51ed8f75-fc3a-5539-daef-5693758c3d68@debian.org> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> <1589966309759.42370@binero.com>, <51ed8f75-fc3a-5539-daef-5693758c3d68@debian.org> Message-ID: <1590616492646.63126@binero.com> Hello Thomas, Any reason why you think we should keep Puppet 5 as supported in metadata.json? I would like to prepare as much as possible this release so everything is done for the next one; while the changes are minimal, it would be nice to have it done already. I don't think changing that would block any Debian usage with Puppet 5. Best regards ________________________________________ From: Thomas Goirand Sent: Wednesday, May 27, 2020 10:45 PM To: openstack-discuss at lists.openstack.org Subject: Re: [puppet] Puppet 5 is officially unsupported in Victoria release On 5/20/20 11:18 AM, Tobias Urdin wrote: > Hello, > I'm thinking that maybe we can compromise on certain points here. > > I would say it's pretty certain we would not break any Puppet 5 code in itself in the > Victoria cycle since we are already behind most of the new features anyways.
> > What if: > > * We officially state Puppet 5 is unsupported > * Remove Puppet 5 as supported in metadata.json > * Only run Puppet 6 integration testing for CentOS and Ubuntu > > But we: > > * Keep a promise to not break Puppet 5 usage in Victoria > * Keep Puppet 5 unit/syntax testing (as well as Puppet 6 of course) > * Debian can run integration testing with Puppet 5 if you fix those up > > The benefit here is that we would not expose new consumers to use Puppet 5 > but there are also drawbacks in that: > > * You cannot use Puppet 5 and do a "puppet module install" since metadata.json > would cause Puppet 5 to not be supported (a note here is that we don't even test > that this is possible with the current state of the modules, i.e. checking for conflicts or faulty dependencies > in the metadata.json files) > > Since the only issue here is that downstream Debian wants to use Puppet 5 I think this is a fair > compromise and since you package the Puppet modules in Debian I assume you don't need any > support being stated in metadata.json for Puppet 5 explicitly. > > What do you think? > > Best regards > Tobias Hi Tobias, Thanks a lot for all of the above, with which I agree, except that I don't think we should remove Puppet 5 from metadata.json on all projects. I saw that today your patch to remove Puppet 5 testing was merged. That's a good thing. Today, I've been able to make scenario001 run successfully on a Debian VM, up to the second puppet run which shows idempotency. It later failed when running puppet, but that's because the tempest binary was set to /usr/bin/python3-tempest instead of /usr/bin/tempest. In other words, I'm on a very good track to get Debian support gating again like 2 years ago. This time, I won't give up on it and let it go, I promise! I'll also try to get a colleague to be a backup for me, so there can be two of us watching the gate if Debian fails.
FYI, to have Debian to work, we need these 2 patches: https://review.opendev.org/#/c/730881/ https://review.opendev.org/#/c/730851/ Until I get scenario001 fully working locally, these 2 are still WIP, I'll let you know on IRC when I'm done. Of course, gating in Debian is done with what I'm using, which is puppet from Debian, as you suggested. As for the packaging of Puppet in Debian Sid, the main issue is what I told you about: support for Ruby 2.7 in upstream puppet. Until this happens, it is very difficult for us to package Puppet 6. But I know upstream is actively working on it, so when it happens, I'll try to work on it myself. And when that's done, hopefully, I'll be able to backport the package to Buster, so we can gate on it. Thanks for your understanding and proposal, Cheers, Thomas Goirand (zigo) From Arkady.Kanevsky at dell.com Thu May 28 00:50:38 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Thu, 28 May 2020 00:50:38 +0000 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: References: <885d5553-958c-0253-3b19-3b3c1ef6fa54@debian.org> Message-ID: <905c6954d49a4f11a72781fe7699035c@AUSX13MPS308.AMER.DELL.COM> As long as we have something that works and all of us can use - who cares. -----Original Message----- From: Clark Boylan Sent: Wednesday, May 27, 2020 4:47 PM To: openstack-discuss at lists.openstack.org Subject: Re: [all] Week Out PTG Details & Registration Reminder [EXTERNAL EMAIL] On Wed, May 27, 2020, at 1:36 PM, Thomas Goirand wrote: > On 5/26/20 8:06 PM, Kendall Nelson wrote: > > For each virtual PTG meeting room, we will provide an official Zoom > > videoconference room, which will be reused by various groups across > > the day. This is what you snipped out of the previous email: > > OpenDev's Jitsi Meet instance is also available at > > https://meetpad.opendev.org as an alternative tooling option to > > create team-specific meetings. 
This is more experimental and may not > > work as well for larger groups, but has the extra benefit of > > including etherpad integration (the videoconference doubles as an > > etherpad document, avoiding jumping between windows). PTGBot can be > > used to publish the chosen meeting URL if you decide to not use the > > official Zoom one. The OpenDev Sysadmins can be reached in the > > #opendev IRC channel if any problems arise. > Sorry for ranting, but I can't help it on this topic... > > I really think it's a shame that it's been decided on a non-free video > conference service. Why aren't we self-hosting it, and using free > software instead of Zoom, which on top of this had major security > problems recently? As noted in the portion of the email you removed: we are. The OpenDev team has been working to ensure that Jitsi Meet is ready to go (just today we added in the ability to scale up jvb processes to balance load), but it is a very new service to us and alternatives are a good thing. We're going to try and estimate usage ahead of time and from that be ready for Monday. If you're testing it this week or having trouble next week please let us know. We're going to do our best, but it will likely be a learning experience. As another option my understanding is that Zoom supports quite large rooms which may be necessary for some groups that are meeting. And it is good to have a backup, particularly since we haven't done this before (and that applies in both directions, it is entirely possible some people using Zoom may find meetpad is a better fit and those trying meetpad may want to switch to Zoom for reasons). Zoom provides dial in numbers for much of the world (though not all of it), as well as providing a web client that doesn't require you to install any additional software beyond your web browser. 
Clark From gmann at ghanshyammann.com Thu May 28 03:37:15 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 27 May 2020 22:37:15 -0500 Subject: [nova][ptg] Healthchecks API Message-ID: <172595b00d3.b613e1c25381.1035685189391512038@ghanshyammann.com> Hello Everyone, This is one of the PTG topics[1], but it will be good if we can get some discussion to happen before the PTG and utilize the PTG time. I am sorry for starting this thread late. To provide the healthchecks API for nova, I have done a PoC which basically extends the oslo healthcheck middleware plugins to add the real checks. - https://review.opendev.org/#/c/731396/ I am writing the details on this etherpad[2] where you can see the response and the flow of the new plugins. Also describing briefly here: oslo provides a healthcheck middleware with a plugin framework. oslo provides two basic plugins which do file/port-based health checks: - disable_by_file - disable_by_files_ports The new idea is to extend those checks with new plugins on the nova side. Nova can provide three plugins which are configurable[3]: 1. Nova_DB_healthcheck: Checks if the API, cell0 and at least one cell DB are up. If so then return Healthy, otherwise Unhealthy 2. Nova_MQ_healthcheck: Checks if at least one cell's MQ is up. If so then return Healthy, otherwise Unhealthy 3. Nova_services_healthcheck: Checks if at least one cell has at least one conductor and one compute service running. If so then return Healthy, otherwise Unhealthy All plugins will return a dict of results, for example the DB plugin returns a dict of the API and all cell DBs with their status. Please refer to the example response in the later part.
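To make the plugin shape concrete, here is a minimal standalone sketch of how such a check could roll up into a 200/503 answer. The class, the check-status input and the aggregation helper are all illustrative stand-ins that only mimic the healthcheck plugin interface; they are not the actual PoC code in the review above:

```python
# Illustrative sketch only: mimics the shape of a healthcheck plugin
# (a check that reports "available" plus a reason) without depending
# on oslo.middleware. Names here are hypothetical, not the PoC's.
from collections import namedtuple

HealthcheckResult = namedtuple("HealthcheckResult", ["available", "reason"])


class NovaDBHealthcheck:
    """Healthy if the API DB, cell0 DB and at least one cell DB are up."""

    def __init__(self, db_status):
        # db_status: mapping like {"api": True, "cell0": True, "cell1": False}
        self.db_status = db_status

    def healthcheck(self):
        cells = {name: up for name, up in self.db_status.items()
                 if name not in ("api", "cell0")}
        ok = (self.db_status.get("api")
              and self.db_status.get("cell0")
              and any(cells.values()))
        # Return the full per-DB status as the reason, as the proposal
        # suggests each plugin returns a dict of results.
        return HealthcheckResult(bool(ok), self.db_status)


def overall_status(plugins):
    """200 OK if all enabled plugins report available, otherwise 503."""
    return 200 if all(p.healthcheck().available for p in plugins) else 503
```

A real plugin would of course probe the databases (and MQ, services) rather than read a precomputed status mapping; the sketch only shows how per-target results could aggregate into the overall 200 OK / 503 Service Unavailable response.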
TODO: Auth part for various plugins Flow diagram of the new plugins: - Nova_DB_healthcheck: --> API DB is up | --> cell0 DB and at least one cell DB is up | --> Return OK - Nova_MQ_healthcheck: --> API MQ is up | --> At least one cell MQ is up | --> Return OK - Nova_services_healthcheck: --> At least one cell has at least one conductor and one compute running --> TODO: need to check other services, for example the scheduler, at least | --> Return OK Result: 200 OK if all enabled plugins return OK, otherwise 503 Service Unavailable. Opinions/thoughts? [1] https://etherpad.opendev.org/p/nova-victoria-ptg [2] https://etherpad.opendev.org/p/nova-healthchecks [3] https://review.opendev.org/#/c/731396/1/etc/nova/api-paste.ini at 107 From sundar.nadathur at intel.com Thu May 28 04:36:54 2020 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Thu, 28 May 2020 04:36:54 +0000 Subject: [all] [kolla] Kolla Builder - next generation In-Reply-To: References: Message-ID: > From: Radosław Piliszek > Sent: Wednesday, April 22, 2020 6:11 AM > To: Mark Goddard > Cc: openstack-discuss > Subject: Re: [all] [kolla] Kolla Builder - next generation > [...] > A very particular example for the moment would be cyborg-agent, which > became unbuildable because OPAE SDK was not available for the supported > platforms, while it looks like it is an optional component but there was no > way to specify that reasoning (except for a comment but it did not make it, it > seems). We are aware of this issue. A proposal to remove this OPAE dependency has been in the works. We will bring this up in the Victoria PTG and aim to make it a goal for the V release.
Regards, Sundar From gouthampravi at gmail.com Thu May 28 05:12:10 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 27 May 2020 22:12:10 -0700 Subject: [manila][ptg] Victoria Virtual PTG Items Message-ID: Hello Zorillas and other interested Stackers, If you haven't already registered to be part of the virtual PTG, please do so as soon as possible [1]. There's now an etherpad posted to gather topics for the Ussuri Team Retrospective [2]. Please take a look and add your thoughts before the event. Also take a look at our Virtual PTG planning etherpad [3], and let me know if you have a problem with the draft schedule posted. As a reminder, these are the time slots for our meeting: == Monday, 1st June 2020 == 1400 UTC - 1700 UTC == Tuesday, 2nd June 2020 == 1500 UTC - 1700 UTC 2100 UTC - 2300 UTC == Friday, 5th June 2020 == 1400 UTC - 1700 UTC Our meetings are on Zoom and will be recorded. The Zoom Room is called "Cactus" and is linked out of the PTG Schedule page [4]. So make sure to bookmark it :) Looking forward to a happy week of designing with you all! -- Goutham Ravi [1] https://virtualptgjune2020.eventbrite.com/ [2] https://etherpad.opendev.org/p/manila-ussuri-retrospective [3] https://etherpad.opendev.org/p/vancouver-ptg-manila-planning [4] http://ptg.openstack.org/ptg.html From akekane at redhat.com Thu May 28 06:48:50 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 28 May 2020 12:18:50 +0530 Subject: [glance][ptg] Victoria Virtual PTG updates Message-ID: Hello All, Greetings!!! The first virtual PTG is around the corner and if you haven't already registered, please do so as soon as possible [1]. I have created a Virtual PTG planning etherpad [2] and also added day-wise topics along with the timings we are going to discuss. Kindly let me know if you have any concerns with the allotted time slots. As a reminder, these are the time slots for our discussion.
Monday 01 June 2020 1400 UTC to 1700 UTC Folsom Tuesday 02 June 2020 2100 UTC to 0000 UTC Austin Wednesday 03 June 2020 2100 UTC to 0000 UTC Austin Thursday 04 June 2020 2100 UTC to 0000 UTC Folsom Friday 05 June 2020 1400 UTC to 1700 UTC Folsom We will be using meetpad [3] for our discussion, so kindly try it once before the actual discussion. I will publish the meeting URL before the PTG, and it will be the same throughout the PTG. [1] https://virtualptgjune2020.eventbrite.com/ [2] https://etherpad.opendev.org/p/Glance-Victoria-PTG-planning [3] https://meetpad.opendev.org Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From yumeng_bao at yahoo.com Thu May 28 06:49:29 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Thu, 28 May 2020 14:49:29 +0800 Subject: [Cyborg]Edge sync up at the PTG References: <9FE3AE2A-C685-41B8-A749-2555BCB282A7.ref@yahoo.com> Message-ID: <9FE3AE2A-C685-41B8-A749-2555BCB282A7@yahoo.com> Hi Ildikó, Thanks for the post! The Cyborg team would be glad to join! I have added this edge group meeting session to our PTG schedule: https://etherpad.opendev.org/p/cyborg-victoria-goals Please help to add links for the discussion if there are any. Regards, Yumeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Thu May 28 07:12:40 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 28 May 2020 09:12:40 +0200 Subject: [Cyborg]Edge sync up at the PTG In-Reply-To: <9FE3AE2A-C685-41B8-A749-2555BCB282A7@yahoo.com> References: <9FE3AE2A-C685-41B8-A749-2555BCB282A7.ref@yahoo.com> <9FE3AE2A-C685-41B8-A749-2555BCB282A7@yahoo.com> Message-ID: <66500EA2-7D91-4295-AD89-5A05ACC0AC9D@gmail.com> Hi Yumeng, Great, thank you!
All our PTG planning information is on this etherpad: https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020 We are planning to start the hour at 1300 UTC with a quick Neutron sync and then we can move over to the Cyborg related topics. I will make sure to keep the PTG bot up to date with progress in case you would join a little later than 1300 UTC. If you have any topics please add them to the above etherpad. Thanks, Ildikó > On May 28, 2020, at 08:49, yumeng bao wrote: > > Hi Ildikó, > > Thanks for the post! > Cyborg team would be glad to join ! I have added this edge group meeting session to our PTG schedule : https://etherpad.opendev.org/p/cyborg-victoria-goals Please help to add links for the discussion if there is any. > > Regards, > Yumeng From skaplons at redhat.com Thu May 28 07:35:12 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 28 May 2020 09:35:12 +0200 Subject: [sdk][osc][cli][ptg][project_cleanup] PTG Week planning In-Reply-To: <792968FC-D75F-47D3-BF98-97A7D6FF7182@gmail.com> References: <792968FC-D75F-47D3-BF98-97A7D6FF7182@gmail.com> Message-ID: <20200528073512.zoiwow3kaaovfcg2@skaplons-mac> Hi, Thanks for the heads up. I will try to be there, especially on Monday. On Wednesday I have an overlapping Neutron session but will do my best to be there at least for some time too. On Wed, May 27, 2020 at 05:40:28PM +0200, Artem Goncharov wrote: > Hi everyone interested :) > > The PTG planning for the OpenStackSDK and the CLI is there: > > https://etherpad.opendev.org/p/SDK-VictoriaPTG-Planning > > Please enter yourself there if interested and add topics you want to discuss. > > We have currently 2 slots: > 01 June: 15:00 - 17:00 UTC Juno > 03 June: 15:00 - 17:00 UTC Bexar > And we plan to be using meetpad (fingers crossed).
> > Interesting topics: > > - June 01 15:00-16:00 project cleanup - long standing topic across the projects > - June 01 16:00-17:00 openstackclient-first - a one time failed community goal, and now maybe multi-release goal for joining all individual service clients into OSC to give users seamless UX > > > Hear from you soon, > Artem -- Slawek Kaplonski Senior software engineer Red Hat From zigo at debian.org Thu May 28 08:31:12 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 28 May 2020 10:31:12 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <1590616492646.63126@binero.com> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> <1589966309759.42370@binero.com> <51ed8f75-fc3a-5539-daef-5693758c3d68@debian.org> <1590616492646.63126@binero.com> Message-ID: <287ca10a-2467-dc09-546a-7326b0358c20@debian.org> On 5/27/20 11:54 PM, Tobias Urdin wrote: > Hello Thomas, > Any reason why you think we should keep Puppet 5 as supported in metadata.json? > > I would like to prepare as much as possible this release so everything is done for the next one > while it's minimal changes it would be nice to have it done already. > > I don't think changing that would block any Debian usage with Puppet 5. > > Best regards Hi Tobias, Well, I am scared it breaks something indeed. Why don't you think it won't break anything with Puppet 5? If we are to continue gating on Puppet 5 (in Debian at least), what is the problem in keeping Puppet 5 in metadata.json files?
Cheers, Thomas Goirand (zigo) From mark at stackhpc.com Thu May 28 08:52:19 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 28 May 2020 09:52:19 +0100 Subject: [kolla] Virtual PTG Message-ID: Hi, Just a reminder that it's the virtual PTG next week. We have an Etherpad [1] which has lots of information, and remember to register (free) [2] if you haven't already. Kendall sent some useful information [3] recently also. If you plan to attend, please add your name to the Etherpad so we know who to expect for each session (although you are welcome to turn up if you have not). The Etherpad is currently quite light on proposed discussion topics. Please do spend some time to consider what we should be using this time to discuss. It doesn't have to be a shiny new feature, it could be a long standing bug that we should agree on how to address, or a discussion on how to improve our processes. If we are left with time to spare however, I think it will be well spent on some reflection, maintenance, and planning. Cheers, Mark [1] https://etherpad.opendev.org/p/kolla-victoria-ptg [2] https://virtualptgjune2020.eventbrite.com/ [3] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015054.html From yumeng_bao at yahoo.com Thu May 28 09:10:36 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Thu, 28 May 2020 17:10:36 +0800 Subject: [Cyborg]Edge sync up at the PTG In-Reply-To: <66500EA2-7D91-4295-AD89-5A05ACC0AC9D@gmail.com> References: <66500EA2-7D91-4295-AD89-5A05ACC0AC9D@gmail.com> Message-ID: <96F3FFC6-945B-406E-9654-D3FE99307592@yahoo.com> Got it, thanks! Regards, Yumeng > On May 28, 2020, at 3:12 PM, Ildiko Vancsa wrote: > > Hi Yumeng, > > Great, thank you! > > All our PTG planning information is on this etherpad: https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020 > > We are planning to start the hour at 1300 UTC with a quick Neutron sync and then we can move over to the Cyborg related topics. 
I will make sure to keep the PTG bot up to date with progress in case you would join a little later than 1300 UTC. > > If you have any topics please add it to the above etherpad. > > Thanks, > Ildikó > > >> On May 28, 2020, at 08:49, yumeng bao wrote: >> >> Hi Ildikó, >> >> Thanks for the post! >> Cyborg team would be glad to join ! I have added this edge group meeting session to our PTG schedule : https://etherpad.opendev.org/p/cyborg-victoria-goals Please help to add links for the discussion if there is any. >> >> Regards, >> Yumeng >
From mark at stackhpc.com Thu May 28 10:11:01 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 28 May 2020 11:11:01 +0100 Subject: [all] [kolla] Kolla Builder - next generation In-Reply-To: References: Message-ID: On Thu, 28 May 2020 at 05:36, Nadathur, Sundar wrote: > > > > From: Radosław Piliszek > > Sent: Wednesday, April 22, 2020 6:11 AM > > To: Mark Goddard > > Cc: openstack-discuss > > Subject: Re: [all] [kolla] Kolla Builder - next generation > > [...] > > A very particular example for the moment would be cyborg-agent, which > > became unbuildable because OPAE SDK was not available for the supported > > platforms, while it looks like it is an optional component but there was no > > way to specify that reasoning (except for a comment but it did not make it, it > > seems). > > We are aware of this issue. A proposal to remove this OPAE dependency has been in the works. We will bring this up in the Victoria PTG and aim to make it a goal for the V release. Hi Sundar, thanks for the reply. Can you confirm whether this is a hard dependency on OPAE? If so, we will be unable to support Cyborg on CentOS in Ussuri.
> > Regards, > Sundar From tobias.urdin at binero.com Thu May 28 10:12:41 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Thu, 28 May 2020 10:12:41 +0000 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <287ca10a-2467-dc09-546a-7326b0358c20@debian.org> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> <1589966309759.42370@binero.com> <51ed8f75-fc3a-5539-daef-5693758c3d68@debian.org> <1590616492646.63126@binero.com>, <287ca10a-2467-dc09-546a-7326b0358c20@debian.org> Message-ID: <1590660761517.77469@binero.com> Hello Thomas, The version stated in metadata.json is pretty much only used to show support on PuppetForge when a module is uploaded, it could mean you cannot do "puppet module install openstack-" if your puppet binary is version 5. We'd want any new users to use Puppet 6, the metadata which shows on PuppetForge would be the only place saying Puppet 5 which I don't like. Best regards ________________________________________ From: Thomas Goirand Sent: Thursday, May 28, 2020 10:31 AM To: openstack-discuss at lists.openstack.org Subject: Re: [puppet] Puppet 5 is officially unsupported in Victoria release On 5/27/20 11:54 PM, Tobias Urdin wrote: > Hello Thomas, > Any reason why you think we should keep Puppet 5 as supported in metadata.json? > > I would like to prepare as much as possible this release so everything is done for the next one > while it's minimal changes it would be nice to have it done already. > > I don't think changing that would block any Debian usage with Puppet 5. > > Best regards Hi Tobias, Well, I am scared it breaks something indeed. 
Why don't you think it won't break anything with Puppet 5? If we are to continue gating on Puppet 5 (in Debian at least), what is the problem in keeping Puppet 5 in metadata.json files? Cheers, Thomas Goirand (zigo) From yumeng_bao at yahoo.com Thu May 28 12:50:19 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Thu, 28 May 2020 20:50:19 +0800 Subject: [cyborg][nova][neutron]Summaries of Smartnic support integration References: Message-ID: Hi all, In the cyborg pre-PTG meeting conducted last week[0], shaohe from Intel introduced the SmartNIC support integration, and we've reached some initial agreements: The workflow for a user to create a server with a network accelerator (the accelerator is managed by Cyborg) is: 1. create a port with the accelerator request specified in the binding_profile field NOTE: Putting the accelerator request (device_profile) into binding_profile is one possible solution, implemented in our PoC. Another possible solution, adding a new attribute to the port object for Cyborg-specific use instead of using binding_profile, was discussed at the Shanghai Summit[1]. This needs to be checked with the neutron team: which one would the neutron team suggest? 2. create a server with the created port The Cyborg-nova-neutron integration workflow can be found on page 3 of the slides[2] presented at the pre-PTG. We also recorded the introduction! Please find the pre-PTG meeting video recording in [3] and [4]; they are the same, just for different region access.
[0]http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014987.html [1]https://etherpad.opendev.org/p/Shanghai-Neutron-Cyborg-xproj [2]pre-PTG slides:https://docs.qq.com/slide/DVm5Jakx5ZlJXY3lw [3]pre-PTG video recording on YouTube:https://www.youtube.com/watch?v=IN4haOK7sQg&feature=youtu.be [4]pre-PTG video recording on Youku:http://v.youku.com/v_show/id_XNDY5MDA4NjM2NA==.html?x&sharefrom=iphone&sharekey=51459cbd599407990dd09940061b374d4 Regards, Yumeng From rosmaita.fossdev at gmail.com Thu May 28 12:59:03 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 28 May 2020 08:59:03 -0400 Subject: [glance] "true is not of type boolean" when creating protected image In-Reply-To: References: Message-ID: On 5/27/20 7:26 AM, Bernd Bausch wrote: > This problem occurs on a stable Ussuri Devstack. I issue this command to > create a protected image: > > $ openstack image create myimage2 --file ~/myvm1.qcow2 --disk-format > qcow2  --protected > > and get this error message back: > > BadRequestException: 400: Client Error for url: > http://192.168.1.200/image/v2/images, 400 Bad Request: On > instance['protected']:: 'type': > 'boolean'}: {'description': 'If true, image > will not be deletable.',: 'True': Provided object does > not match schema 'image': 'True' is not of type > 'boolean': Failed validating 'type' in > schema['properties']['protected']: > > It seems that image creation fails because > > 'True' is not of type 'boolean' 'True' (capital T) is not a valid JSON boolean value, so if that's what's in the request JSON, it should fail schema validation. > > Am I looking at a bug or am I doing something wrong? Looks like a bug. > > I do have a great workaround: First, create the image without the > protected flag. Then: > > $ openstack image set --protected myimage2 > > This works. I suggest making both calls with the --debug option so you can compare the JSON being passed in the unsuccessful vs. the successful request.
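For reference, the schema failure is easy to reproduce outside of Glance with the jsonschema library; the schema fragment below is a simplified stand-in for Glance's image schema, not the real one:

```python
from jsonschema import ValidationError, validate

# Simplified stand-in for the relevant part of Glance's image schema.
schema = {
    "type": "object",
    "properties": {
        "protected": {
            "type": "boolean",
            "description": "If true, image will not be deletable.",
        }
    },
}

# A real JSON boolean passes validation.
validate({"protected": True}, schema)

# The string 'True' (capital T, quoted) is rejected, which matches the
# error in the report above.
try:
    validate({"protected": "True"}, schema)
except ValidationError as err:
    print(err.message)  # 'True' is not of type 'boolean'
```

This supports the diagnosis that somewhere between the client and the API, the boolean is being serialized as the string 'True' rather than the JSON literal true.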
This will give you a better idea of whether the problem is client-side or server-side, and then you can file a bug with the appropriate project. From pierre at stackhpc.com Thu May 28 13:09:26 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 28 May 2020 15:09:26 +0200 Subject: [blazar][ptg] IRC meetings cancelled due to PTG Message-ID: Hello, The Blazar community will meet several times during the PTG event next week. Let's cancel the IRC meetings originally scheduled on June 2 and June 4. Talk to you soon, Pierre Riteau (priteau) From m2elsakha at gmail.com Thu May 28 13:46:42 2020 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Thu, 28 May 2020 09:46:42 -0400 Subject: Uniting TC and UC In-Reply-To: References: <6f8571bf-0d23-8a3b-1225-8c2bdd95285d@openstack.org> Message-ID: Thanks all. Regarding the user-committee mailing list, we had a discussion and there's no reason to keep it alive, so it can be merged once the UC/TC merger is done. Thanks, Thierry, for adding it to the list. Do we have a specific order in which items will be discussed in the TC PTG sessions? On Sun, May 10, 2020 at 1:54 PM Jay S. Bryant wrote: > > On 5/7/2020 11:00 AM, Thierry Carrez wrote: > > Mohamed Elsakhawy wrote: > >> As you may know already, there has been an ongoing discussion to > >> merge UC and TC under a single body. Three options were presented, > >> along with their impact on the bylaws. > >> > >> > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012806.html > >> > >> > >> We had several discussions in the UC on the three options as well as > >> the UC items that need to be well-represented under the common > >> committee, and here’s what we propose for the merger: > >> [...] > > > > Thanks Mohamed for the detailed plan. With Jeremy's caveats, it sounds > > good to me. I'm happy to assist by filing the necessary changes in the > > various governance repositories. > > > >> [...]
> >> In addition to discussions over the mailing list, we also have the > >> opportunity of "face-to-face" discussions at the upcoming PTG. > > > > I added this topic to the etherpad at: > > https://etherpad.opendev.org/p/tc-victoria-ptg > > I agree. Thanks Mohamed for putting this together. It is consistent > with the plan we discussed earlier in the year. > > Thanks for adding it to the Etherpad ttx. > > Jay > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu May 28 14:08:49 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 28 May 2020 15:08:49 +0100 Subject: [cyborg][nova][neutron]Summaries of Smartnic support integration In-Reply-To: References: Message-ID: On Thu, 2020-05-28 at 20:50 +0800, yumeng bao wrote: >  > Hi all, > > > In the cyborg pre-PTG meeting conducted last week[0], shaohe from Intel introduced the SmartNIC support integration, and we've > reached some initial agreements: > > The workflow for a user to create a server with a network accelerator (the accelerator is managed by Cyborg) is: > > 1. create a port with the accelerator request specified in the binding_profile field > NOTE: Putting the accelerator request (device_profile) into binding_profile is one possible solution implemented in > our POC. The binding profile field is not really intended for this. https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/portbindings.py#L31-L34 It's intended to pass info from nova to neutron, but not the other way around. It was originally introduced so that nova could pass info to the neutron plugin, specifically the SR-IOV PCI address. It was not intended for two-way communication to present info from neutron to nova. We kind of broke that with the trusted VF feature, but since that was intended to be admin-only, as it's a security risk in a multi-tenant cloud, it's a slightly different case.
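To make the contrast concrete, here is a minimal sketch of the two port shapes being discussed; the top-level device_profile attribute and the profile name are illustrative assumptions, not an agreed neutron API:

```python
# Sketch only: the field names below (especially a top-level
# "device_profile" port attribute) are illustrative, not an agreed API.

# PoC approach: tuck the cyborg device profile into binding:profile.
# binding:profile is admin-only by default and is meant for passing
# info (e.g. the SR-IOV PCI address) between nova and the network backend.
port_poc = {
    "port": {
        "network_id": "<NET_ID>",
        "binding:profile": {"device_profile": "my-smartnic-profile"},
    }
}

# Proposed alternative: a dedicated, user-visible extension attribute,
# leaving binding:profile untouched for the virt driver's use.
port_ext = {
    "port": {
        "network_id": "<NET_ID>",
        "device_profile": "my-smartnic-profile",  # hypothetical extension
    }
}

# With the dedicated attribute, a non-admin user never needs write access
# to binding:profile (and so cannot tamper with, e.g., the PCI address).
assert "binding:profile" not in port_ext["port"]
```

The design question in the thread is exactly which of these two shapes neutron should expose to non-admin users.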
I think we should avoid using the binding profile for passing info from neutron to nova and keep it for its original use of passing info from the virt driver to the network backend. > Another possible solution, adding a new attribute to the port object for cyborg-specific use instead of using > binding_profile, was discussed at the Shanghai Summit[1]. > This needs to be checked with the neutron team: which option would they suggest? From a nova perspective, I would prefer if this was a new extension. The binding profile is admin-only by default, so it's not really a good way to request that features be enabled. You can use neutron RBAC policies to alter that, I believe, but in general I don't think we should advocate for non-admins to be able to modify the binding profile, as they can break nova, e.g. by modifying the PCI address. If we want to support cyborg/smartnic integration, we should add a new device-profile extension that introduces the ability for a non-admin user to specify a cyborg device profile name as a new attribute on the port. The neutron server could then either retrieve the request groups from cyborg and pass them as part of the port resource request, using the mechanism added for minimum bandwidth, or it can leave that to nova to manage. I would kind of prefer neutron to do this, but both could work. > > 2. create a server with the port created > > The cyborg-nova-neutron integration workflow can be found on page 3 of the slides[2] presented in the pre-PTG. > > We also recorded the introduction! Please find the pre-PTG meeting video recording in [3] and [4]; they are the same, just for different region access.
> > [0]http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014987.html > [1]https://etherpad.opendev.org/p/Shanghai-Neutron-Cyborg-xproj > [2]pre-PTG slides:https://docs.qq.com/slide/DVm5Jakx5ZlJXY3lw > [3]pre-PTG video recording on YouTube:https://www.youtube.com/watch?v=IN4haOK7sQg&feature=youtu.be > [4]pre-PTG video recording on Youku: > http://v.youku.com/v_show/id_XNDY5MDA4NjM2NA==.html?x&sharefrom=iphone&sharekey=51459cbd599407990dd09940061b374d4 > > Regards, > Yumeng > From mnaser at vexxhost.com Thu May 28 14:25:44 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 28 May 2020 10:25:44 -0400 Subject: [tc] Monthly update (to become Weekly) Message-ID: Hello everyone, Here’s an update of what’s happened in the OpenStack TC during the last month. Moving forward, I’ll be sending an update on a weekly basis. You can get more information by checking for changes in the openstack/governance repository. # CHANGES PENDING - Rename ansible-role-lunasa-hsm deliverable: https://review.opendev.org/#/c/731313/ - Add whitebox-tempest-plugin under QA: https://review.opendev.org/#/c/714480/ - Select migrate-to-focal goal for Victoria cycle: https://review.opendev.org/#/c/731213/ - Clarify the support for linux distro: https://review.opendev.org/#/c/727238/ - Add QA branchless projects also in py3.5 support list: https://review.opendev.org/#/c/729325/ - Remove congress and tricircle project teams: https://review.opendev.org/#/c/728818/ - [draft] Add assert:supports-standalone: https://review.opendev.org/#/c/722399/ # PROJECT UPDATES - Adding Storlets in the list of keeping py2: https://review.opendev.org/#/c/720714/ - Transfer Policy team co-lead role: https://review.opendev.org/#/c/722101/ - Add keystoneauth to the python3.5 list: https://review.opendev.org/#/c/721093/ - Add note that OpenStackSDK kept 3.5 support: https://review.opendev.org/#/c/721083/ - Retire i18n-specs: https://review.opendev.org/#/c/721723/ - Migrate from mock to built-in unittest.mock:
https://review.opendev.org/#/c/722924/ - Migrate from oslo.rootwrap to oslo.privsep: https://review.opendev.org/#/c/718177/ - Appoint Yasufumi Ogawa as Tacker PTL: https://review.opendev.org/#/c/719091/ # GOAL UPDATES - Infrastructure project team is now the TaCT SIG: https://review.opendev.org/#/c/721621/ - Add ansible role for managing Luna SA HSM: https://review.opendev.org/#/c/721348/ - Drop deprecated release-management flag for xstatic repos: https://review.opendev.org/#/c/721046/ # ABANDONED CHANGES - Remove CentOS 8 from tested runtimes for Ussuri: https://review.opendev.org/#/c/720005/ - Goals: add container-images: https://review.opendev.org/#/c/720107/ # GENERAL UPDATES - Add goal to migrate CI/CD jobs to Ubuntu Focal: https://review.opendev.org/#/c/728158/ - Fix hacking min version to 3.0.1: https://review.opendev.org/#/c/730161/ - Appoint Javier Peña as rpm-packaging PTL: https://review.opendev.org/#/c/728088/ - Add repository for oslo.metrics: https://review.opendev.org/#/c/725848/ - Require PTL signoff for project-update changes: https://review.opendev.org/#/c/727704/ - Allow for faster addition of projects: https://review.opendev.org/#/c/726932/ - Retire syntribos: https://review.opendev.org/#/c/726507/ - Loosen voting for community goal proposals: https://review.opendev.org/#/c/724142/ - Add TrilioVault Charms to OpenStack Charms: https://review.opendev.org/#/c/720533/ - Cleanup retired repositories: https://review.opendev.org/#/c/724334/ - Move i18n into a SIG: https://review.opendev.org/#/c/721605/ # OTHER - Cap jsonschema 3.2.0 as the minimal version: https://review.opendev.org/#/c/730948/ - Fix bullet list formatting for drop-py2 goal: https://review.opendev.org/#/c/729314/ Thanks and sorry for the delay in getting those out. -- Mohammed Naser VEXXHOST, Inc.
From fungi at yuggoth.org Thu May 28 14:47:28 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 28 May 2020 14:47:28 +0000 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: <905c6954d49a4f11a72781fe7699035c@AUSX13MPS308.AMER.DELL.COM> References: <885d5553-958c-0253-3b19-3b3c1ef6fa54@debian.org> <905c6954d49a4f11a72781fe7699035c@AUSX13MPS308.AMER.DELL.COM> Message-ID: <20200528144728.kdbuvp5qqferlkld@yuggoth.org> On 2020-05-28 00:50:38 +0000 (+0000), Arkady.Kanevsky at dell.com wrote: > As long as we have something that works and all of us can use - > who cares. [...] Software freedom is one of the underlying tenets of the OpenStack project. I understand that it may not be important to everyone, but many of us value free/libre open source software and prefer to use it when possible over proprietary alternatives. Otherwise you might say the same about OpenStack itself: AWS, vSphere, and so on work and all of us *can* use those, so why bother making OpenStack? Why put effort into touting the freedom our software affords its users if we don't in turn value the same freedom in software we choose for ourselves? So I guess my answer to your question is, "me... I care." -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zigo at debian.org Thu May 28 14:55:02 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 28 May 2020 16:55:02 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <1590660761517.77469@binero.com> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> <1589966309759.42370@binero.com> <51ed8f75-fc3a-5539-daef-5693758c3d68@debian.org> <1590616492646.63126@binero.com> <287ca10a-2467-dc09-546a-7326b0358c20@debian.org> <1590660761517.77469@binero.com> Message-ID: <85be2041-5ab0-a659-3a70-bd9d0d877412@debian.org> Hi Tobias, On 5/28/20 12:12 PM, Tobias Urdin wrote: > The version stated in metadata.json is pretty much only used to show support > on PuppetForge when a module is uploaded, it could mean you cannot do > "puppet module install openstack-" if your puppet binary is version 5. Which is probably what we don't want to forbid (yet), no? Cheers, Thomas Goirand (zigo) From amy at demarco.com Thu May 28 14:36:30 2020 From: amy at demarco.com (Amy Marrich) Date: Thu, 28 May 2020 09:36:30 -0500 Subject: Ussuri RDO Release Announcement Message-ID: If you're having trouble with the formatting, this release announcement is available online at https://blogs.rdoproject.org/2020/05/rdo-ussuri-released/ --- *RDO Ussuri Released* The RDO community is pleased to announce the general availability of the RDO build for OpenStack Ussuri for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds.
Ussuri is the 21st release from the OpenStack project, which is the work of more than 1,000 contributors from around the world. The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/. The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds. All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first. PLEASE NOTE: At this time, RDO Ussuri provides packages for CentOS 8 only. Please use the previous release, Train, for CentOS 7 and Python 2.7. *Interesting things in the Ussuri release include:* - Within the Ironic project, a bare metal service that is capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner, UEFI and device selection is now available for Software RAID. - The Kolla project, the containerised deployment of OpenStack used to provide production-ready containers and deployment tools for operating OpenStack clouds, streamlined the configuration of external Ceph (https://ceph.io/) integration, making it easy to go from a Ceph-Ansible-deployed Ceph cluster to enabling it in OpenStack. *Other improvements include:* - Support for IPv6 is available within the Kuryr project, the bridge between container framework networking models and OpenStack networking abstractions. - Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/ussuri/highlights.html. - A new Neutron driver, networking-omnipath, has been included in the RDO distribution; it enables the Omni-Path switching fabric in an OpenStack cloud.
- The OVN Neutron driver has been merged into the main neutron repository from networking-ovn. *Contributors* During the Ussuri cycle, we saw the following new RDO contributors: - Amol Kahat - Artom Lifshitz - Bhagyashri Shewale - Brian Haley - Dan Pawlik - Dmitry Tantsur - Dougal Matthews - Eyal - Harald Jensås - Kevin Carter - Lance Albertson - Martin Schuppert - Mathieu Bultel - Matthias Runge - Miguel Garcia - Riccardo Pittau - Sagi Shnaidman - Sandeep Yadav - SurajP - Toure Dunnon Welcome to all of you and Thank You So Much for participating! But we wouldn’t want to overlook anyone. A super massive Thank You to all 54 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories: - Adam Kimball - Alan Bishop - Alan Pevec - Alex Schultz - Alfredo Moralejo - Amol Kahat - Artom Lifshitz - Arx Cruz - Bhagyashri Shewale - Brian Haley - Cédric Jeanneret - Chandan Kumar - Dan Pawlik - David Moreau Simard - Dmitry Tantsur - Dougal Matthews - Emilien Macchi - Eric Harney - Eyal - Fabien Boucher - Gabriele Cerami - Gael Chamoulaud - Giulio Fidente - Harald Jensås - Jakub Libosvar - Javier Peña - Joel Capitao - Jon Schlueter - Kevin Carter - Lance Albertson - Lee Yarwood - Marc Dequènes (Duck) - Marios Andreou - Martin Mágr - Martin Schuppert - Mathieu Bultel - Matthias Runge - Miguel Garcia - Mike Turek - Nicolas Hicher - Rafael Folco - Riccardo Pittau - Ronelle Landy - Sagi Shnaidman - Sandeep Yadav - Soniya Vyas - Sorin Sbarnea - SurajP - Toure Dunnon - Tristan de Cacqueray - Victoria Martinez de la Cruz - Wes Hayutin - Yatin Karel - Zoltan Caplovic *The Next Release Cycle* At the end of one release, focus shifts immediately to the next, Victoria, which has an estimated GA the week of 12-16 October 2020. The full schedule is available at https://releases.openstack.org/victoria/schedule.html.
Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 25-26 June 2020 for Milestone One and 17-18 September 2020 for Milestone Three. *Get Started* There are three ways to get started with RDO. To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works. For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order. Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world. *Get Help* The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have the users at lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev at lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org. The #rdo channel on Freenode IRC is also an excellent place to find and give help. We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net); however, we have a more focused audience within the RDO venues.
*Get Involved* To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation. Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube. Amy Marrich (spotz) https://www.rdoproject.org http://community.redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Thu May 28 15:07:20 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Thu, 28 May 2020 15:07:20 +0000 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <85be2041-5ab0-a659-3a70-bd9d0d877412@debian.org> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> <1589966309759.42370@binero.com> <51ed8f75-fc3a-5539-daef-5693758c3d68@debian.org> <1590616492646.63126@binero.com> <287ca10a-2467-dc09-546a-7326b0358c20@debian.org> <1590660761517.77469@binero.com>, <85be2041-5ab0-a659-3a70-bd9d0d877412@debian.org> Message-ID: <1590678440941.1665@binero.com> Hello Thomas, My proposal was that we officially state that Puppet 5 is unsupported, but that we don't break it during the Victoria cycle, so we can make sure nobody is actually still using Puppet 5.
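For context, the metadata.json field under discussion is the module's requirements block; the fragment below is only an illustrative sketch (the module name and version bounds are made-up examples, not the project's agreed values):

```python
import json

# Illustrative metadata.json fragment for a puppet-openstack module;
# the module name and version bounds here are examples only.
metadata = json.loads("""
{
  "name": "openstack-nova",
  "requirements": [
    {"name": "puppet", "version_requirement": ">= 6.0.0 < 8.0.0"}
  ]
}
""")

# PuppetForge reads this range to decide whether `puppet module install`
# should succeed for a given puppet binary version, which is why changing
# the lower bound mainly affects Forge installs rather than deployments
# that already have the modules in place.
for req in metadata["requirements"]:
    print(req["name"], req["version_requirement"])  # puppet >= 6.0.0 < 8.0.0
```

This is the trade-off in the thread: the range is mostly advisory metadata, but it would block a fresh `puppet module install` under a Puppet 5 binary.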
Best regards ________________________________________ From: Thomas Goirand Sent: Thursday, May 28, 2020 4:55 PM To: openstack-discuss at lists.openstack.org Subject: Re: [puppet] Puppet 5 is officially unsupported in Victoria release Hi Tobias, On 5/28/20 12:12 PM, Tobias Urdin wrote: > The version stated in metadata.json is pretty much only used to show support > on PuppetForge when a module is uploaded, it could mean you cannot do > "puppet module install openstack-" if your puppet binary is version 5. Which is probably what we don't want to forbid (yet), no? Cheers, Thomas Goirand (zigo) From emilien at redhat.com Thu May 28 15:29:10 2020 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 28 May 2020 11:29:10 -0400 Subject: [tripleo] Deprecating Paunch in Ussuri In-Reply-To: References: <4f4f8145-bc95-cee9-6ce8-28371cd1e261@suse.com> Message-ID: Hi folks, Now that we've released Ussuri, we're looking at retiring Paunch from TripleO. We'll do it step by step to avoid any CI issues. The first step is to remove it from THT: https://review.opendev.org/#/c/731565 The changes should be fully transparent for our users (deployment, updates, upgrades) except for day 2 operations like container lifecycle management (the paunch CLI is being retired). The new role is documented here: https://docs.openstack.org/tripleo-ansible/latest/roles/role-tripleo_container_manage.html If you have any concerns or need any help with that transition, feel free to ask here, on #tripleo, or even ping me directly. Thanks, On Thu, Apr 23, 2020 at 9:00 AM Emilien Macchi wrote: > > > On Thu, Apr 23, 2020 at 8:59 AM Andreas Jaeger wrote: > >> On 23.04.20 14:33, Emilien Macchi wrote: >> > Hi folks, >> > >> > In Ussuri we managed to successfully replace Paunch by Ansible (role + >> > modules) to manage the container lifecycle in TripleO.
>> > As a reminder, this effort was part of the Simplification work that is >> > being done in TripleO to reduce our number of tools and align it with >> > our deployment framework based on Ansible. >> > >> > >> https://docs.openstack.org/tripleo-ansible/latest/roles/role-tripleo_container_manage.html >> > >> > Paunch has been turned off for some time now and isn't tested anymore in >> > upstream CI (master only). >> > I would like to propose its deprecation in Ussuri and a removal in >> > Victoria (or later if any valid pushback). >> > >> > https://review.opendev.org/#/c/720598/ >> >> I propose to do it like django_openstack_auth: Remove all content and >> just leave a README in there so that nobody commits changes to master, >> > > Yes, we will do that in the Victoria cycle if the deprecation is granted. > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu May 28 17:03:03 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 28 May 2020 12:03:03 -0500 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL Message-ID: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> Hello Everyone, Ocata is in the 'Extended Maintenance' phase[1] but its gate is broken and difficult to fix now. The main gate issue started with the question of whether Tempest and its dependency constraints are used correctly on Ocata. We fixed most of those issues when things started failing during the py2 drop[2]. But now things are broken because a newer stestr is installed in Ocata jobs[3]. As Sean also mentioned in this backport[4], stestr is not constrained in Ocata[5], as it was not in g-r at that time, so it cannot be added to u-c. The old Tempest tag cannot be fixed to cap that on the Tempest side. The only option left to unblock the gate is to remove integration testing from the Ocata gate.
But it will be untested after that, as we cannot say unit testing is enough to mark it tested. At least from a QA perspective, I would not say it is tested if there is no integration job running there. Keeping it untested (no integration testing) while saying it is Maintained in the community would be very confusing. I think it's time to move it to the 'Unmaintained' phase and then EOL. Thoughts? [1] https://releases.openstack.org/ [2] https://review.opendev.org/#/q/topic:fix-stable-gate+(status:open+OR+status:merged) [3] https://zuul.opendev.org/t/openstack/build/e477d0919ff14eea9cd8a6235ea00c3d/log/logs/devstacklog.txt#18575 [4] https://review.opendev.org/#/c/726931/ [5] https://opendev.org/openstack/requirements/src/branch/stable/ocata/upper-constraints.txt -gmann From mnaser at vexxhost.com Thu May 28 18:02:23 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 28 May 2020 14:02:23 -0400 Subject: [tc] Monthly meeting Message-ID: Hi everyone, Our monthly TC meeting is scheduled for next Thursday, June 4, at 1400 UTC. If you would like to add topics for discussion, please go to https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting and fill out your suggestions by Wednesday, June 3, at 1900 UTC. Thank you. -- Mohammed Naser VEXXHOST, Inc. From rosmaita.fossdev at gmail.com Thu May 28 18:06:08 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 28 May 2020 14:06:08 -0400 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> Message-ID: <61fb455b-7a99-f621-0e19-4dde525a35f9@gmail.com> On 5/28/20 1:03 PM, Ghanshyam Mann wrote: > Hello Everyone, > > Ocata is in the 'Extended Maintenance' phase[1] but its gate is broken and difficult to fix now.
> > The main gate issue started with the question of whether Tempest and its dependency constraints are used correctly on Ocata. > We fixed most of those issues when things started failing during the py2 drop[2]. > > But now things are broken because a newer stestr is installed in Ocata jobs[3]. As Sean also mentioned in this > backport[4], stestr is not constrained in Ocata[5], as it was not in g-r at that time, so it cannot be added to u-c. > > The old Tempest tag cannot be fixed to cap that on the Tempest side. > > The only option left to unblock the gate is to remove integration testing from the Ocata gate. But it will be untested after that, > as we cannot say unit testing is enough to mark it tested. At least from a QA perspective, I would not say it is tested > if there is no integration job running there. > > Keeping it untested (no integration testing) while saying it is Maintained in the community would be very confusing. > I think it's time to move it to the 'Unmaintained' phase and then EOL. Thoughts? +1000 It would be really helpful if we could make this transition as soon as possible. I don't see any reason to delay, particularly since the policy [a] allows a transition back to EM should maintainers step up in the next six months. How about putting Ocata into the 'Unmaintained' phase on 1 June 2020?
[a] https://docs.openstack.org/project-team-guide/stable-branches.html#unmaintained > > [1] https://releases.openstack.org/ > [2] https://review.opendev.org/#/q/topic:fix-stable-gate+(status:open+OR+status:merged) > [3] https://zuul.opendev.org/t/openstack/build/e477d0919ff14eea9cd8a6235ea00c3d/log/logs/devstacklog.txt#18575 > [4] https://review.opendev.org/#/c/726931/ > [5] https://opendev.org/openstack/requirements/src/branch/stable/ocata/upper-constraints.txt > > -gmann > From sean.mcginnis at gmx.com Thu May 28 18:30:29 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 28 May 2020 13:30:29 -0500 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> Message-ID: <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> On 5/28/20 12:03 PM, Ghanshyam Mann wrote: > Hello Everyone, > > Ocata is in the 'Extended Maintenance' phase[1] but its gate is broken and difficult to fix now. > > The main gate issue started with the question of whether Tempest and its dependency constraints are used correctly on Ocata. > We fixed most of those issues when things started failing during the py2 drop[2]. > > But now things are broken because a newer stestr is installed in Ocata jobs[3]. As Sean also mentioned in this > backport[4], stestr is not constrained in Ocata[5], as it was not in g-r at that time, so it cannot be added to u-c. We actually did just recently merge a patch to add it: https://review.opendev.org/#/c/725213/ There were some other breakages that had to be fixed before we could get it to land, so that took a little longer than expected. That said... > > The old Tempest tag cannot be fixed to cap that on the Tempest side. > > The only option left to unblock the gate is to remove integration testing from the Ocata gate.
But it will be untested after that, > as we cannot say unit testing is enough to mark it tested. At least from a QA perspective, I would not say it is tested > if there is no integration job running there. > > Keeping it untested (no integration testing) while saying it is Maintained in the community would be very confusing. > I think it's time to move it to the 'Unmaintained' phase and then EOL. Thoughts? I do think it is getting to the point where it is becoming increasingly difficult to keep stable/ocata happy, and time is better spent on more recent stable branches. When we discussed changing the stable policy to allow for Extended Maintenance, part of that was that we would support EM as long as someone was able to keep things working and tested. I think we've gotten to the point where that is no longer the case with ocata. This is the first branch that has made it this long before going EOL, so it has been useful to see what other issues came up that were not discussed. One thing that it's made me realize is there is a lot more codependency than we maybe realized in those early discussions. The policy and discussions that I remember were really about whether each project would continue to put in the effort to keep things running. But it really requires a lot more than just an individual project team to do that. If any one of the "core" projects starts to have an issue, that really means all of the projects have an issue. So it would not be possible for, just as an example, Cinder to decide to transition its stable/ocata branch to EOL, but Nova to keep maintaining their stable/ocata branch. There are also the cross-cutting concerns of requirements management and tempest. I think these two areas have been where the most issues have cropped up, and they therefore require the most work to keep things running. Kudos to gmann and the tempest team for getting things to work after we've dropped py27.
But requirements is a very small team (as is tempest/QA), and things are breaking that require some thought and time to work through the right way to handle issues in a stable branch, so a lot of the effort is falling to these smaller groups to try to prop things up. I know I personally don't have extra time to be working on these, and no real motivation other than I like to see Zuul comments be green. Now that we've gotten the stestr issue hopefully resolved, I don't think I can spend any more time on stable/ocata issues. Others are free to pitch in, but I really think it is in the community's best interest if we just decide we've done enough (Ocata would have gone EOL over 2 years ago under the old policy) and it's time to call it and mark those branches EOL. Sean From fungi at yuggoth.org Thu May 28 19:24:56 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 28 May 2020 19:24:56 +0000 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> Message-ID: <20200528192455.3kseedj6oq3que64@yuggoth.org> On 2020-05-28 13:30:29 -0500 (-0500), Sean McGinnis wrote: [...] > This is the first branch that has made it this long before going > EOL, so it has been useful to see what other issues came up that > were not discussed. [...] I'm actually shocked it has survived for so long. Tagged on 2017-02-22, the Ocata coordinated release has been in some active state of maintenance for well over 3 years already. That's triple the duration we used to keep stable branches open before the EM process was implemented. Three cheers for everyone who's kept stable/ocata branches alive and viable all that time! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From whayutin at redhat.com Thu May 28 20:32:37 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 28 May 2020 14:32:37 -0600 Subject: [tripleo] PTG next week Message-ID: Greetings, PTG is next week. Please update the schedule if you have posted a topic. Please register if you have not yet: https://virtualptgjune2020.eventbrite.com/ All the information you should need is posted at the following link. https://etherpad.opendev.org/p/tripleo-ptg-victoria See ya next week :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Thu May 28 21:04:04 2020 From: amy at demarco.com (Amy Marrich) Date: Thu, 28 May 2020 16:04:04 -0500 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay In-Reply-To: References: Message-ID: Hey Tom, Forwarding to the OpenStack discuss list where you might get more assistance. Thanks, Amy (spotz) On Thu, May 28, 2020 at 3:32 PM Thomas King wrote: > Good day, > > We have Ironic running and connected via VLANs to nearby machines. We want > to extend this to other parts of our product development lab without > extending VLANs. > > Using DHCP relay, we would point to a single IP address to serve DHCP > requests, but I'm not entirely sure of the Neutron network/subnet > configuration, nor which IP address should be used for the relay agent on > the switch. > > Is DHCP relay supported by Neutron?
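[Editorial sketch, not part of the thread: one common shape for the setup Tom describes is an extra subnet on the provisioning network for the remote segment, with the switch relaying DHCP to the address that answers it. All names, CIDRs, and the switch syntax below are illustrative; the relay target reuses the 10.10.0.1 address from Tom's message.]

```shell
# Hypothetical commands only -- a subnet for the remote segment on the
# existing provisioning network (name and range invented):
openstack subnet create provisioning-remote \
  --network provisioning \
  --subnet-range 10.20.0.0/24 \
  --gateway 10.20.0.1 \
  --dhcp

# On the remote switch/router, point the DHCP relay at the address that
# serves DHCP on the provisioning network; vendor syntax varies, e.g.:
#   interface Vlan120
#     ip helper-address 10.10.0.1
```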
> > My guess is to add a subnet in the provisioning network and point the > relay agent to the linuxbridge interface's IP: > 14: brq467f6775-be: mtu 1500 qdisc > noqueue state UP group default qlen 1000 > link/ether e2:e9:09:7f:89:0b brd ff:ff:ff:ff:ff:ff > inet 10.10.0.1/16 scope global brq467f6775-be > valid_lft forever preferred_lft forever > inet6 fe80::5400:52ff:fe85:d33d/64 scope link > valid_lft forever preferred_lft forever > > Thank you, > Tom King > _______________________________________________ > openstack-mentoring mailing list > openstack-mentoring at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-mentoring > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu May 28 21:22:23 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 28 May 2020 16:22:23 -0500 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> Message-ID: <1725d2a2a33.d0fb966e52205.5429307849654804208@ghanshyammann.com> ---- On Thu, 28 May 2020 13:30:29 -0500 Sean McGinnis wrote ---- > On 5/28/20 12:03 PM, Ghanshyam Mann wrote: > > Hello Everyone, > > > > Ocata is in the 'Extended Maintenance' phase[1] but its gate is broken and difficult to fix now. > > > > The main issue started for gate is about Tempest and its deps constraint is correctly used on Ocata or not. > > We fixed most of those issues when things started failing during py2 drop[2]. > > > > But now things are broken due to newer stestr is installed in Ocata jobs[2]. As Sean also mentioned in this > > backport[4] that stestr is not constraint in Ocata[5] as it was not in g-r that time so cannot be added in u-c. 
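[Editorial aside, not part of the thread: for readers unfamiliar with "u-c", an upper-constraints.txt pin is a single exact-version line; the version below is purely illustrative.]

```text
# stable/ocata upper-constraints.txt (illustrative entry only)
stestr===1.1.0
```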
> > We actually did just recently merge a patch to add it: > > https://review.opendev.org/#/c/725213/ > > There were some other breakages that had to be fixed before we could get > it to land, so that took a little longer than expected. That said... > > > > > Tempest old tag cannot be fixed to have that with cap on the tempest side. > > > > The only option left to unblock the gate is to remove the integration testing from Ocata gate. But it will untested after that > > as we cannot say unit testing is enough to mark it tested. At least from QA perspective, I would not say it is tested > > if there is no integration job running there. > > > > Keeping it untested (no integration testing) and saying it is Maintained in the community can be very confusing things. > > I think its time to move it to the 'Unmaintained' phase and then EOL. Thought? > > I do think it is getting to the point where it is becoming increasingly > difficult to keep stable/ocata happy, and time is better spent on more > recent stable branches. > > When we discussed changing the stable policy to allow for Extended > Maintenance, part of that was that we would support EM as long as > someone was able to keep things working and tested. I think we've gotten > to the point where that is no longer the case with ocata. > > This is the first branch that has made it this long before going EOL, so > it has been useful to see what other issues came up that were not > discussed. One thing that it's made me realize is there is a lot more > codependency than we maybe realized in those early discussions. The > policy and discussions that I remember were really about whether each > project would continue to put in the effort to keep things running. But > it really requires a lot more than just an individual project team to do > that. If any one of the "core" projects starts to have an issue, that > really means all of the projects have an issue. 
So it would not be > possible for, just as an example, Cinder to decide to transition its > stable/ocata branch to EOL, but Nova to keep maintaining their > stable/ocata branch. > > There are also the cross-cutting concerns of requirements management and > tempest. I think these two areas have been where the most issues have > cropped up, and therefore required the most work to keep things > running. Kudos to gmann and the tempest team for getting things to work > after we've dropped py27. But requirements is a very small team (as is > tempest/QA), and things are breaking that require some thought and time > to work through the right way to handle issues in a stable branch, so a > lot of the effort is falling to these smaller groups to try to prop > things up. +1, Even QA has the policy and that is what we all agreed on about not supporting the > stable branches once they are in Extended Maintenance, but you know, when we see things are > broken we fix those. I think that is the usual nature :) But I agree that it's time to > move forward for Ocata. > > Also, once Ocata is gone, it will be easy to freeze devstack-gate. Ocata is the last stable branch where > we do not have zuulv3 native base jobs, and from Pike onwards we can backport everything converted > to zuulv3. > > > > I know I personally don't have extra time to be working on these, and no > real motivation other than I like to see Zuul comments be green. Now > that we've gotten the stestr issue hopefully resolved, I don't think I > can spend any more time on stable/ocata issues. Others are free to pitch > in, but I really think it is in the community's best interest if we just > decide we've done enough (Ocata would have gone EOL over 2 years ago > under the old policy) and it's time to call it and mark those branches EOL. Yeah, I think we have extended it too much, maybe because it is the first one to go to > 'Unmaintained' since the Extended Maintenance process started.
-gmann > > Sean > > > From skaplons at redhat.com Thu May 28 21:31:46 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 28 May 2020 23:31:46 +0200 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: <61fb455b-7a99-f621-0e19-4dde525a35f9@gmail.com> References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> <61fb455b-7a99-f621-0e19-4dde525a35f9@gmail.com> Message-ID: <20200528213146.j5uagsv7obng7hjf@skaplons-mac> Hi, On Thu, May 28, 2020 at 02:06:08PM -0400, Brian Rosmaita wrote: > On 5/28/20 1:03 PM, Ghanshyam Mann wrote: > > Hello Everyone, > > > > Ocata is in the 'Extended Maintenance' phase[1] but its gate is broken and difficult to fix now. > > > > The main issue started for gate is about Tempest and its deps constraint is correctly used on Ocata or not. > > We fixed most of those issues when things started failing during py2 drop[2]. > > > > But now things are broken due to newer stestr is installed in Ocata jobs[2]. As Sean also mentioned in this > > backport[4] that stestr is not constraint in Ocata[5] as it was not in g-r that time so cannot be added in u-c. > > > > Tempest old tag cannot be fixed to have that with cap on the tempest side. > > > > The only option left to unblock the gate is to remove the integration testing from Ocata gate. But it will untested after that > > as we cannot say unit testing is enough to mark it tested. At least from QA perspective, I would not say it is tested > > if there is no integration job running there. > > > > Keeping it untested (no integration testing) and saying it is Maintained in the community can be very confusing things. > > I think its time to move it to the 'Unmaintained' phase and then EOL. Thought? > > +1000 +1 from the Neutron too. I checked recently that we don't have any new patches merged to Ocata since around October 2019. > > It would be really helpful if we could make this transition as soon as > possible. 
I don't see any reason to delay, particularly since the policy > [a] allows a transition back to EM should maintainers step up in the next > six months. > > How about putting Ocata into the 'Unmaintained' phase on 1 June 2020? > > > [a] https://docs.openstack.org/project-team-guide/stable-branches.html#unmaintained > > > > > > [1] https://releases.openstack.org/ > > [2] https://review.opendev.org/#/q/topic:fix-stable-gate+(status:open+OR+status:merged) > > [3] https://zuul.opendev.org/t/openstack/build/e477d0919ff14eea9cd8a6235ea00c3d/log/logs/devstacklog.txt#18575 > > [4] https://review.opendev.org/#/c/726931/ > > [5] https://opendev.org/openstack/requirements/src/branch/stable/ocata/upper-constraints.txt > > > > -gmann > > > > -- Slawek Kaplonski Senior software engineer Red Hat From johnsomor at gmail.com Thu May 28 21:37:00 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 28 May 2020 14:37:00 -0700 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: <1725d2a2a33.d0fb966e52205.5429307849654804208@ghanshyammann.com> References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> <1725d2a2a33.d0fb966e52205.5429307849654804208@ghanshyammann.com> Message-ID: Considering recent OSSA issues did not release fixes for Ocata[1][2], I think we should really consider making the maintenance status more clear and/or per project. 
I know that Adam proposed this a while ago for Octavia, but it seems to be stuck pending reviews.[3] Michael [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014717.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013251.html [3] https://review.opendev.org/#/c/719097/ On Thu, May 28, 2020 at 2:26 PM Ghanshyam Mann wrote: > > ---- On Thu, 28 May 2020 13:30:29 -0500 Sean McGinnis wrote ---- > > On 5/28/20 12:03 PM, Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > Ocata is in the 'Extended Maintenance' phase[1] but its gate is broken and difficult to fix now. > > > > > > The main issue started for gate is about Tempest and its deps constraint is correctly used on Ocata or not. > > > We fixed most of those issues when things started failing during py2 drop[2]. > > > > > > But now things are broken due to newer stestr is installed in Ocata jobs[2]. As Sean also mentioned in this > > > backport[4] that stestr is not constraint in Ocata[5] as it was not in g-r that time so cannot be added in u-c. > > > > We actually did just recently merge a patch to add it: > > > > https://review.opendev.org/#/c/725213/ > > > > There were some other breakages that had to be fixed before we could get > > it to land, so that took a little longer than expected. That said... > > > > > > > > Tempest old tag cannot be fixed to have that with cap on the tempest side. > > > > > > The only option left to unblock the gate is to remove the integration testing from Ocata gate. But it will untested after that > > > as we cannot say unit testing is enough to mark it tested. At least from QA perspective, I would not say it is tested > > > if there is no integration job running there. > > > > > > Keeping it untested (no integration testing) and saying it is Maintained in the community can be very confusing things. > > > I think its time to move it to the 'Unmaintained' phase and then EOL. Thought? 
> > > > I do think it is getting to the point where it is becoming increasingly > > difficult to keep stable/ocata happy, and time is better spent on more > > recent stable branches. > > > > When we discussed changing the stable policy to allow for Extended > > Maintenance, part of that was that we would support EM as long as > > someone was able to keep things working and tested. I think we've gotten > > to the point where that is no longer the case with ocata. > > > > This is the first branch that has made it this long before going EOL, so > > it has been useful to see what other issues came up that were not > > discussed. One thing that it's made me realize is there is a lot more > > codependency than we maybe realized in those early discussions. The > > policy and discussions that I remember were really about whether each > > project would continue to put in the effort to keep things running. But > > it really requires a lot more than just an individual project team to do > > that. If any one of the "core" projects starts to have an issue, that > > really means all of the projects have an issue. So it would not be > > possible for, just as an example, Cinder to decide to transition its > > stable/ocata branch to EOL, but Nova to keep maintaining their > > stable/ocata branch. > > > > There is also the cross-cutting concerns of requirements management and > > tempest. I think these two areas have been where the most issues have > > cropped up, and therefore requiring the most work to keep things > > running. Kudos to gmann and the tempest team for getting things to work > > after we've dropped py27. But requirements is a very small team (as is > > tempest/QA), and things are breaking that require some thought and time > > to work through the right way to handle issues in a stable branch, so a > > lot of the effort is falling to these smaller groups to try to prop > > things up. 
> > +1, Even QA has the policy and that is what we all agreed on about not supporting the > stable branches once it is Extended Maintainance but you know when we see things are > broken we fix those. I think that is the usual nature I think :) But I agree that its time to > move forward for Ocata. > > Also once Ocata is gone, it will easy to freeze the devstack-gate. Ocata is last stable where > we do not have zuulv3 native base jobs and Pike onwards we can backport everything converted > in zuulv3. > > > > > I know I personally don't have extra time to be working on these, and no > > real motivation other than I like to see Zuul comments be green. Now > > that we've gotten the stestr issue hopefully resolved, I don't think I > > can spend any more time on stable/ocata issues. Others are free to pitch > > in, but I really think it is in the community's best interest if we just > > decide we've done enough (Ocata would have gone EOL over 2 years ago > > under the old policy) and it's time to call it and mark those branches EOL. > > Yeah, we have extended it too much I think maybe due to the first one to go to > 'Unmaintained' after Extended maintenance things started. 
> > -gmann > > > > > Sean > > > > > > > From aschultz at redhat.com Thu May 28 22:58:02 2020 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 28 May 2020 16:58:02 -0600 Subject: [tripleo] Deprecating tripleo-heat-templates firstboot In-Reply-To: <2ada6e70-d3dc-899d-3fdc-401a4fea56be@redhat.com> References: <1e7132dd-599c-8af6-57ce-ca1a44126b15@redhat.com> <769e266ed580747fcfa39a71efc6489ee5af91f3.camel@redhat.com> <2ada6e70-d3dc-899d-3fdc-401a4fea56be@redhat.com> Message-ID: On Tue, May 26, 2020 at 3:34 PM Steve Baker wrote: > > > On 27/05/20 5:30 am, Harald Jensås wrote: > > On Tue, 2020-05-26 at 11:32 +1200, Steve Baker wrote: > >> Soon nova will be switched off by default on the undercloud and all > >> overcloud deployments will effectively be deployed-server based > >> (either provisioned manually or via the baremetal provision command) > >> > >> This means that the docs for running firstboot scripts[1] will no > >> longer work, and neither will our collection of firstboot scripts[2]. > >> In this email I'm going to propose what we could do about this > >> situation and if there are still unresolved issues by the PTG it > >> might be worth having a short session on it. > >> > > If we had gone the other way around, and done the Heat Stack with > > "dummy" server resources before deploying baremetal we could have done > > this seamless, i.e passed these cloud-configs based on the stack to the > > baremetal provisioning yaml's extention you mention below. But that > > train departed, long ago ... > Also this still wouldn't have handled the manual provisioning deployed > server case. > > Do we need to add some deprecation and/or validation? 
Something that > ensures we stop the deployment in case one of the resources > OS::TripleO::NodeAdminUserData, OS::TripleO::NodeTimesyncUserData, > OS::TripleO::NodeUserData or OS::TripleO::{{role.name}}::NodeUserData > is defined in the resource registry, with a pointer to docs on how to > move it to the baremetal provisioning yaml, or extraconfig. > > I think if OS::TripleO::*Server: is not mapped to OS::Nova::Server then > the deployment should halt with a message if OS::TripleO::NodeUserData > or OS::TripleO::{{role.name}}::NodeUserData are mapped to something > other than userdata_default.yaml > > As for OS::TripleO::NodeTimesyncUserData, it looks like this functionality is duplicated by deployment/timesync/chrony-baremetal-ansible.yaml which is mapped to OS::TripleO::Services::Timesync and included in every role, but NodeTimesyncUserData was added recently to handle some early config timestamp issues: > > https://opendev.org/openstack/tripleo-heat-templates/commit/eafe3908535ec766866efb74110e057ea2509c45 > https://bugs.launchpad.net/tripleo/+bug/1776869 > > Maybe this becomes less of an issue with no other config tasks happening at first boot, I've tagged in Alex for his thoughts. One option could be to enable and configure chrony during overcloud-full image build, then document how to disable it or change the ntp servers in cloud-config? > Since we configure NTP/chrony during the host_prep_tasks phase, that should be sufficient now. The original issue that we were attempting to fix with that was the read-only errors out of docker. We later learned that that issue was likely caused by a docker-puppet.py issue where we copied files that we read-only mounted. It's likely safe to delete this; however, I would say it might be beneficial to include basic ntp or hwclock functionality in the new provision system on the off chance a user needs to do those prior to any configurations on the host.
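[Editorial illustration, not part of the thread: the kind of cloud-config fragment being discussed — first-boot NTP via cloud-init's ntp module — is only a few lines. The client choice and server names here are placeholders.]

```yaml
#cloud-config
# Hypothetical example: have cloud-init configure chrony at first boot;
# replace the pool/server names with real ones.
ntp:
  enabled: true
  ntp_client: chrony
  pools:
    - 0.pool.ntp.example.org
  servers:
    - ntp1.example.org
```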
I believe we've cleaned up the ordering of things within the regular deployment now such that this firstboot is no longer required. > >> The baremetal provisioning implementation already uses cloud-init > >> cloud-config internally for user creation and key injection[3] so I'm > >> going to propose an enhancement to the baremetal provisioning yaml > >> format so that custom cloud-config instructions can be included > >> either inline or as a file path. > >> > > ++ > > > >> I think it is worth going through each firstboot script[2] and > >> deciding what its fate should be (other than being deprecated in > >> Ussuri): > >> > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/conntectx3_streering.yaml > >> > >> This has no parameters, so it could be converted to a standalone > >> cloud-config file, but should it? Can this be achieved with kernel > >> args? Does it require a reboot anyway, and so can be done with > >> extraconfig? > >> > > Maybe this could be done using this module in ansible instead: > > https://docs.ansible.com/ansible/latest/modules/modprobe_module.html#modprobe-module > > > >> > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/os-net-config-mappings.yaml > >> > >> I'm not sure why this is implemented as first boot, it seems to > >> consume the parameter NetConfigDataLookup and transform it to the > >> format os-net-config needs for the file > >> /etc/os-net-config/mapping.yaml. It looks like this functionality should be moved > >> to where os-net-config is actually invoked, and the > >> NetConfigDataLookup parameter should be officially supported. > >> > > I agree, I have been thinking about moving this for a while actually. > > > >> > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_dev_rsync.yaml > >> > >> I suggest deleting this and including a cloud-config version in the > >> baremetal provisioning docs.
> >> > > +1 > > > >> > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_heat_admin.yaml > >> > >> Delete this, there is already an abstraction for this built into the > >> baremetal provisioning format[4] > >> > >> > > +1 > > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_root_password.yaml > >> > >> Delete this and include it as an example in the baremetal > >> provisioning docs. > >> > >> > > +1 > > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_timesync.yaml > >> > >> Maybe this could be converted to an extraconfig/all_nodes script[5], > >> but it would be better if this sort of thing could be implemented as > >> an ansible role or playbook, are there any plans for an extraconfig > >> mechanism which uses plain ansible semantics? > >> > >> > >> > >> cheers > >> > >> [1] > >> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/extra_config.html#firstboot-extra-configuration > >> > >> [2] > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot > >> > >> [3] > >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L286 > >> > >> [4] > >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L132 > >> > >> > >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L145 > >> > >> [5] > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/all_nodes > From aschultz at redhat.com Thu May 28 23:13:12 2020 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 28 May 2020 17:13:12 -0600 Subject: [tripleo] Deprecating tripleo-heat-templates firstboot In-Reply-To: References: <1e7132dd-599c-8af6-57ce-ca1a44126b15@redhat.com> 
<769e266ed580747fcfa39a71efc6489ee5af91f3.camel@redhat.com> <2ada6e70-d3dc-899d-3fdc-401a4fea56be@redhat.com> Message-ID: On Thu, May 28, 2020 at 4:58 PM Alex Schultz wrote: > > On Tue, May 26, 2020 at 3:34 PM Steve Baker wrote: > > > > > > On 27/05/20 5:30 am, Harald Jensås wrote: > > > On Tue, 2020-05-26 at 11:32 +1200, Steve Baker wrote: > > >> Soon nova will be switched off by default on the undercloud and all > > >> overcloud deployments will effectively be deployed-server based > > >> (either provisioned manually or via the baremetal provision command) > > >> > > >> This means that the docs for running firstboot scripts[1] will no > > >> longer work, and neither will our collection of firstboot scripts[2]. > > >> In this email I'm going to propose what we could do about this > > >> situation and if there are still unresolved issues by the PTG it > > >> might be worth having a short session on it. > > >> > > > If we had gone the other way around, and done the Heat Stack with > > > "dummy" server resources before deploying baremetal we could have done > > > this seamless, i.e passed these cloud-configs based on the stack to the > > > baremetal provisioning yaml's extention you mention below. But that > > > train departed, long ago ... > > Also this still wouldn't have handled the manual provisioning deployed > > server case. > > > Do we need to add some deprecation and/or validation? Something that > > > ensure we stop the deployment in case one of the resources > > > OS::TripleO::NodeAdminUserData, OS::TripleO::NodeTimesyncUserData, > > > OS::TripleO::NodeUserData or OS::TripleO::{{role.name}}::NodeUserData > > > is defined in the resource registry, with a pointer to docs on how to > > > move it to the baremetal provisioning yaml, or extraconfig. 
> > > > I think if OS::TripleO::*Server: is not mapped to OS::Nova::Server then > > the deployment should halt with a message if OS::TripleO::NodeUserData > > or OS::TripleO::{{role.name}}::NodeUserData are mapped to something > > other than userdata_default.yaml > > > > As for OS::TripleO::NodeTimesyncUserData, it looks like this functionality is duplicated by deployment/timesync/chrony-baremetal-ansible.yaml which is mapped to OS::TripleO::Services::Timesync and included in every role, but NodeTimesyncUserData was added recently to handle some early config timestamp issues: > > > > https://opendev.org/openstack/tripleo-heat-templates/commit/eafe3908535ec766866efb74110e057ea2509c45 > > https://bugs.launchpad.net/tripleo/+bug/1776869 > > > > Maybe this becomes less of an issue with no other config tasks happening at first boot, I've tagged in Alex for his thoughts. One option could be to enable and configure chrony during overcloud-full image build, then document how to disable it or change the ntp servers in cloud-config? > > > > Since we configure NTP/chrony during the host_prep_tasks phase so that > should be sufficient now. The original issue that we were attempting > to fix with that was the read-only errors out of docker. We later > learned that that issue was likely caused by a docker-puppet.py issue > where we copied files that we read-only mounted. It's likely safe to > delete this however, I would say it might be beneficial to include > basic ntp or hwclock functionality in the new provision system on the > off chance a user needs to do those prior to any configurations on the > host. I believe we've cleaned up the ordering of things within the > regular deployment now such that this firstboot is no longer required. > Slight correction in that we need to be able to have the cloud-init's ntp servers configurable during provisioning. 
The issue arises when a host's time is in the future and a file is written out that will later be used as a container (or mounted). The container engines tend to fail horribly. I would recommend including ntp servers (or pools) in the initial provisioning and highly recommend users configure them. We'll sync up later as part of the deployment but we want the hardware's time corrected as early as possible. > > >> The baremetal provisioning implementation already uses cloud-init > > >> cloud-config internally for user creation and key injection[3] so I'm > > >> going to propose an enhancement the the baremetal provisioning yaml > > >> format so that custom cloud-config instructions can be included > > >> either inline or as a file path. > > >> > > > ++ > > > > > >> I think it is worth going through each firstboot script[2] and > > >> deciding what its fate should be (other than being deprecated in > > >> Ussuri): > > >> > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/conntectx3_streering.yaml > > >> > > >> This has no parameters, so it could be converted to a standalone > > >> cloud-config file, but should it? Can this be achieved with kernel > > >> args? Does it require a reboot anyway, and so can be done with > > >> extraconfig? > > >> > > > Maybe this could be done using this module in ansible instead: > > > https://docs.ansible.com/ansible/latest/modules/modprobe_module.html#modprobe-module > > > > > >> > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/os-net-config-mappings.yaml > > >> > > >> I'm not sure why this is implemented as first boot, it seems to > > >> consume the parameter NetConfigDataLookup and transforms it to the > > >> format os-net-config needs for the file /etc/os-net- > > >> config/mapping.yaml. 
It looks like this functionality should be moved > > >> to where os-net-config is actually invoked, and the > > >> NetConfigDataLookup parameter should be officially supported. > > >> > > > I agree, I have been thinking about moving this for a while actually. > > > > > >> > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_dev_rsync.yaml > > >> > > >> I suggest deleting this and including a cloud-config version in the > > >> baremetal provisioning docs. > > >> > > > +1 > > > > > >> > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_heat_admin.yaml > > >> > > >> Delete this, there is already an abstraction for this built into the > > >> baremetal provisioning format[4] > > >> > > >> > > > +1 > > > > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_root_password.yaml > > >> > > >> Delete this and include it as an example in the baremetal > > >> provisioning docs. > > >> > > >> > > > +1 > > > > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_timesync.yaml > > >> > > >> Maybe this could be converted to an extraconfig/all_nodes script[5], > > >> but it would be better if this sort of thing could be implemented as > > >> an ansible role or playbook, are there any plans for an extraconfig > > >> mechanism which uses plain ansible semantics? 
> > >> > > >> > > >> > > >> cheers > > >> > > >> [1] > > >> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/extra_config.html#firstboot-extra-configuration > > >> > > >> [2] > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot > > >> > > >> [3] > > >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L286 > > >> > > >> [4] > > >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L132 > > >> > > >> > > >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L145 > > >> > > >> [5] > > >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/all_nodes > > From sbaker at redhat.com Thu May 28 23:21:21 2020 From: sbaker at redhat.com (Steve Baker) Date: Fri, 29 May 2020 11:21:21 +1200 Subject: [tripleo] Deprecating tripleo-heat-templates firstboot In-Reply-To: References: <1e7132dd-599c-8af6-57ce-ca1a44126b15@redhat.com> <769e266ed580747fcfa39a71efc6489ee5af91f3.camel@redhat.com> <2ada6e70-d3dc-899d-3fdc-401a4fea56be@redhat.com> Message-ID: <2fd21db2-deb9-a21d-f5d2-c2a5fe784eb5@redhat.com> On 29/05/20 11:13 am, Alex Schultz wrote: > On Thu, May 28, 2020 at 4:58 PM Alex Schultz wrote: >> On Tue, May 26, 2020 at 3:34 PM Steve Baker wrote: >>> >>> On 27/05/20 5:30 am, Harald Jensås wrote: >>>> On Tue, 2020-05-26 at 11:32 +1200, Steve Baker wrote: >>>>> Soon nova will be switched off by default on the undercloud and all >>>>> overcloud deployments will effectively be deployed-server based >>>>> (either provisioned manually or via the baremetal provision command) >>>>> >>>>> This means that the docs for running firstboot scripts[1] will no >>>>> longer work, and neither will our collection of firstboot scripts[2]. 
>>>>> In this email I'm going to propose what we could do about this >>>>> situation and if there are still unresolved issues by the PTG it >>>>> might be worth having a short session on it. >>>>> >>>> If we had gone the other way around, and done the Heat Stack with >>>> "dummy" server resources before deploying baremetal we could have done >>>> this seamless, i.e passed these cloud-configs based on the stack to the >>>> baremetal provisioning yaml's extention you mention below. But that >>>> train departed, long ago ... >>> Also this still wouldn't have handled the manual provisioning deployed >>> server case. >>>> Do we need to add some deprecation and/or validation? Something that >>>> ensure we stop the deployment in case one of the resources >>>> OS::TripleO::NodeAdminUserData, OS::TripleO::NodeTimesyncUserData, >>>> OS::TripleO::NodeUserData or OS::TripleO::{{role.name}}::NodeUserData >>>> is defined in the resource registry, with a pointer to docs on how to >>>> move it to the baremetal provisioning yaml, or extraconfig. >>> I think if OS::TripleO::*Server: is not mapped to OS::Nova::Server then >>> the deployment should halt with a message if OS::TripleO::NodeUserData >>> or OS::TripleO::{{role.name}}::NodeUserData are mapped to something >>> other than userdata_default.yaml >>> >>> As for OS::TripleO::NodeTimesyncUserData, it looks like this functionality is duplicated by deployment/timesync/chrony-baremetal-ansible.yaml which is mapped to OS::TripleO::Services::Timesync and included in every role, but NodeTimesyncUserData was added recently to handle some early config timestamp issues: >>> >>> https://opendev.org/openstack/tripleo-heat-templates/commit/eafe3908535ec766866efb74110e057ea2509c45 >>> https://bugs.launchpad.net/tripleo/+bug/1776869 >>> >>> Maybe this becomes less of an issue with no other config tasks happening at first boot, I've tagged in Alex for his thoughts. 
One option could be to enable and configure chrony during overcloud-full image build, then document how to disable it or change the ntp servers in cloud-config? >>> >> We configure NTP/chrony during the host_prep_tasks phase, so that >> should be sufficient now. The original issue that we were attempting >> to fix with that was the read-only errors out of docker. We later >> learned that that issue was likely caused by a docker-puppet.py issue >> where we copied files that we read-only mounted. It's likely safe to >> delete this; however, I would say it might be beneficial to include >> basic ntp or hwclock functionality in the new provision system on the >> off chance a user needs to do those prior to any configuration on the >> host. I believe we've cleaned up the ordering of things within the >> regular deployment now such that this firstboot is no longer required. >> > Slight correction in that we need to be able to have the cloud-init's > ntp servers configurable during provisioning. The issue arises when a > host's time is in the future and a file is written out that will later > be used as a container (or mounted). The container engines tend to > fail horribly. I would recommend including ntp servers (or pools) in > the initial provisioning and highly recommend users configure them. > We'll sync up later as part of the deployment but we want the > hardware's time corrected as early as possible. OK, it sounds like we should configure a running timesync in the overcloud-full image with a default pool, and document what cloud-config is required to customize the pool servers or switch it off.
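[Editor's note: the kind of pool customization being discussed could be sketched as the following cloud-config fragment. This is only an illustration, not the actual TripleO implementation; the pool hostnames are placeholders, and the exact keys accepted depend on the cloud-init version shipped in the image.]

```yaml
#cloud-config
# Illustrative sketch only: have cloud-init configure chrony with custom
# pools at first boot, so the hardware clock is corrected before any
# files that may later be mounted into containers are written out.
# Pool names below are placeholders.
ntp:
  enabled: true
  ntp_client: chrony
  pools:
    - 0.pool.ntp.example.org
    - 1.pool.ntp.example.org
```

Disabling timesync entirely would then just be a matter of setting `enabled: false` in the same stanza.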
This cloud-config can be invoked in the baremetal provisioning yaml >>>>> The baremetal provisioning implementation already uses cloud-init >>>>> cloud-config internally for user creation and key injection[3] so I'm >>>>> going to propose an enhancement the the baremetal provisioning yaml >>>>> format so that custom cloud-config instructions can be included >>>>> either inline or as a file path. >>>>> >>>> ++ >>>> >>>>> I think it is worth going through each firstboot script[2] and >>>>> deciding what its fate should be (other than being deprecated in >>>>> Ussuri): >>>>> >>>>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/conntectx3_streering.yaml >>>>> >>>>> This has no parameters, so it could be converted to a standalone >>>>> cloud-config file, but should it? Can this be achieved with kernel >>>>> args? Does it require a reboot anyway, and so can be done with >>>>> extraconfig? >>>>> >>>> Maybe this could be done using this module in ansible instead: >>>> https://docs.ansible.com/ansible/latest/modules/modprobe_module.html#modprobe-module >>>> >>>>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/os-net-config-mappings.yaml >>>>> >>>>> I'm not sure why this is implemented as first boot, it seems to >>>>> consume the parameter NetConfigDataLookup and transforms it to the >>>>> format os-net-config needs for the file /etc/os-net- >>>>> config/mapping.yaml. It looks like this functionality should be moved >>>>> to where os-net-config is actually invoked, and the >>>>> NetConfigDataLookup parameter should be officially supported. >>>>> >>>> I agree, I have been thinking about moving this for a while actually. >>>> >>>>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_dev_rsync.yaml >>>>> >>>>> I suggest deleting this and including a cloud-config version in the >>>>> baremetal provisioning docs. 
>>>>> >>>> +1 >>>> >>>>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_heat_admin.yaml >>>>> >>>>> Delete this, there is already an abstraction for this built into the >>>>> baremetal provisioning format[4] >>>>> >>>>> >>>> +1 >>>> >>>>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_root_password.yaml >>>>> >>>>> Delete this and include it as an example in the baremetal >>>>> provisioning docs. >>>>> >>>>> >>>> +1 >>>> >>>>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot/userdata_timesync.yaml >>>>> >>>>> Maybe this could be converted to an extraconfig/all_nodes script[5], >>>>> but it would be better if this sort of thing could be implemented as >>>>> an ansible role or playbook, are there any plans for an extraconfig >>>>> mechanism which uses plain ansible semantics? >>>>> >>>>> >>>>> >>>>> cheers >>>>> >>>>> [1] >>>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/extra_config.html#firstboot-extra-configuration >>>>> >>>>> [2] >>>>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/firstboot >>>>> >>>>> [3] >>>>> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L286 >>>>> >>>>> [4] >>>>> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L132 >>>>> >>>>> >>>>> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/modules/metalsmith_instances.py#L145 >>>>> >>>>> [5] >>>>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/all_nodes From fungi at yuggoth.org Thu May 28 23:32:32 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 28 May 2020 23:32:32 +0000 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: 
References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> <1725d2a2a33.d0fb966e52205.5429307849654804208@ghanshyammann.com> Message-ID: <20200528233232.xddnwxai6flh5u5m@yuggoth.org> On 2020-05-28 14:37:00 -0700 (-0700), Michael Johnson wrote: > Considering recent OSSA issues did not release fixes for Ocata[1][2], > I think we should really consider making the maintenance status more > clear and/or per project. [...] In some cases this can be because the vulnerability was only introduced after the Ocata release and so the stable/ocata branch was not affected (I don't recall nor can I immediately spot whether that was the scenario with any recent advisories we've issued). In general though, I agree, if nobody steps forward to at least backport security fixes and keep jobs working sufficiently to merge those, then the branch is already in a de facto unmaintained state. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pramchan at yahoo.com Fri May 29 02:19:36 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 29 May 2020 02:19:36 +0000 (UTC) Subject: [heat] Deprecations in heat stack / apis In-Reply-To: References: Message-ID: <1669159331.1454022.1590718776872@mail.yahoo.com> Hi all, Can heat team send us representation for Heat Interop testing. 
Based on the TripleO thread deprecating tripleo-heat-templates firstboot, it appears there are deprecations, and we have been requesting help to review the add-ons to interop. Refer to: https://github.com/openstack/interop/blob/master/next.json and https://github.com/openstack/interop/blob/master/2020.06.json and https://github.com/openstack/heat-tempest-plugin/blob/master/heat_tempest_plugin/tests/api/test_heat_api.py Add your topics to support Heat orchestration for interop: https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 Thanks, Prakash - Chain Interop WG -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Fri May 29 07:20:55 2020 From: katonalala at gmail.com (Lajos Katona) Date: Fri, 29 May 2020 09:20:55 +0200 Subject: [cyborg][nova][neutron]Summaries of Smartnic support integration In-Reply-To: References: Message-ID: Hi, Port-resource-request (see: https://docs.openstack.org/api-ref/network/v2/index.html#port-resource-request ) is a read-only (and admin-only) field of ports, which is filled based on the agent heartbeats. So currently there is no polling of agents or similar. Adding extra "overload" to this mechanism, like polling cyborg or similar, looks like something outside the original design to me, not to speak of the performance cost of adding - API requests towards cyborg (or anything else) to every port GET operation - storage of cyborg-related information in the neutron db, fetched from cyborg (periodically, I suppose), to make neutron able to fill port-resource-request. Regards, Lajos Sean Mooney wrote (at 16:13 on Thu, 28 May 2020): > On Thu, 2020-05-28 at 20:50 +0800, yumeng bao wrote: > > > > Hi all, > > > > > > In the cyborg pre-PTG meeting conducted last week[0], shaohe from Intel > introduced SmartNIC support integrations, and we've > > reached some initial agreements: > > > > The workflow for a user to create a server with a network > accelerator (the accelerator is managed by Cyborg) is: > > > > 1.
create a port with the accelerator request specified in the > binding_profile field > > NOTE: Putting the accelerator request (device_profile) into > binding_profile is one possible solution implemented in > > our POC. > The binding profile field is not really intended for this. > > > https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/portbindings.py#L31-L34 > It is intended to pass info from nova to neutron but not the other way > around. > It was originally introduced so that nova could pass info to the neutron > plug-in, > specifically the SR-IOV PCI address. It was not intended for two-way > communication to present info from neutron > to nova. > > We kind of broke that with the trusted VF feature, but since that was > intended to be admin-only, as it is a security risk > in a multi-tenant cloud, it is a slightly different case. > I think we should avoid using the binding profile for passing info from > neutron to nova and keep it for its original > use of passing info from the virt driver to the network backend. > > > > Another possible solution, adding a new attribute to the port object > for cyborg-specific use instead of using > > binding_profile, was discussed at the Shanghai Summit[1]. > This needs checking with the neutron team; which would the neutron team suggest? > From a nova perspective I would prefer if this was a new extension. > The binding profile is admin-only by default, so it is not really a good way > to request features be enabled. > You can use neutron RBAC policies to alter that, I believe, but in general I > don't think we should advocate for non-admins > to be able to modify the binding profile, as they can break nova, e.g. by > modifying the PCI address. > If we want to support cyborg/smartnic integration we should add a new > device-profile extension that introduces the ability > for a non-admin user to specify a cyborg device profile name as a new > attribute on the port.
> > The neutron server could then either retrieve the request groups from > cyborg and pass them as part of the port resource > request using the mechanism added for minimum bandwidth, or it can leave > that to nova to manage. > > I would kind of prefer neutron to do this, but both could work. > > > 2. create a server with the port created > > > > The Cyborg-nova-neutron integration workflow can be found on page 3 of the > slides[2] presented in the pre-PTG. > > > > And we also recorded the introduction! Please find the pre-PTG meeting > video record in [3] and [4]; they are the same, > > just for different region access. > > > > > > [0] > http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014987.html > > [1] https://etherpad.opendev.org/p/Shanghai-Neutron-Cyborg-xproj > > [2] pre-PTG slides: https://docs.qq.com/slide/DVm5Jakx5ZlJXY3lw > > [3] pre-PTG video records on YouTube: > https://www.youtube.com/watch?v=IN4haOK7sQg&feature=youtu.be > > [4] pre-PTG video records on Youku: > > > http://v.youku.com/v_show/id_XNDY5MDA4NjM2NA==.html?x&sharefrom=iphone&sharekey=51459cbd599407990dd09940061b374d4 > > > > Regards, > > Yumeng > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri May 29 07:49:30 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 29 May 2020 09:49:30 +0200 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay In-Reply-To: References: Message-ID: Hi Tom, I know for sure that people are using DHCP relay with ironic, I think the TripleO documentation may give you some hints (adjusted to your presumably non-TripleO environment): http://tripleo.org/install/advanced_deployment/routed_spine_leaf_network.html#dhcp-relay-configuration Dmitry On Thu, May 28, 2020 at 11:06 PM Amy Marrich wrote: > Hey Tom, > > Forwarding to the OpenStack discuss list where you might get more > assistance.
> > Thanks, > > Amy (spotz) > > On Thu, May 28, 2020 at 3:32 PM Thomas King wrote: > >> Good day, >> >> We have Ironic running and connected via VLANs to nearby machines. We >> want to extend this to other parts of our product development lab without >> extending VLANs. >> >> Using DHCP relay, we would point to a single IP address to serve DHCP >> requests but I'm not entirely sure of the Neutron network/subnet >> configuration, nor which IP address should be used for the relay agent on >> the switch. >> >> Is DHCP relay supported by Neutron? >> >> My guess is to add a subnet in the provisioning network and point the >> relay agent to the linuxbridge interface's IP: >> 14: brq467f6775-be: mtu 1500 qdisc >> noqueue state UP group default qlen 1000 >> link/ether e2:e9:09:7f:89:0b brd ff:ff:ff:ff:ff:ff >> inet 10.10.0.1/16 scope global brq467f6775-be >> valid_lft forever preferred_lft forever >> inet6 fe80::5400:52ff:fe85:d33d/64 scope link >> valid_lft forever preferred_lft forever >> >> Thank you, >> Tom King >> _______________________________________________ >> openstack-mentoring mailing list >> openstack-mentoring at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-mentoring >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Fri May 29 08:59:24 2020 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 29 May 2020 10:59:24 +0200 Subject: [tripleo] better observability into containers management in the post-paunch world Message-ID: <69f05237-3bc7-808f-d2ca-16a5855e9366@redhat.com> Hello, as you may know, Paunch [0] was originally developed for simple containers management in TripleO. Today we have Paunch deprecated for the preference of ansible module [1]. It uses ansible async executions and performs management actions in batches. 
While that's great and novel and works faster, Paunch's logs were simple to follow and its code was easy to debug with pdb breakpoints. However, the debugging experience and overall observability (please excuse me that buzzword) changed [2] when we switched to ansible. In order to improve that situation, I propose a data aggregator [3] for post-processing of the containers data under the module's management. It is based on setting facts, using already developed custom filters [4], and ansible's set_stats [5]. That data is just a (big) dictionary that contains: * input containers configs fetched from json files for all deploy steps, * snapshotted runtime info as it was consumed by the containers management tasks, * internal processing results (identified logical state, like 'changed' or 'broken') That is for all of the batches of managed containers in a deployment, on a per-host basis. And that's basically it. How to use that data is out of scope at this point of the discussion. Please share your thoughts on the topic. Note that the aggregator itself is simple (~140 LOC). Please don't call it over-engineered :) It *only* allows users and tripleo devs to build whatever complex and over-engineered data-analysing pipelines on top of it. Those may be colored graphs of dependencies and states, or simple json parsing, like that prototype [3] (WIP, only for standalone deployments). Wrapping it up, it's up to devs and users how to use that data aggregator. Let's just simplify their lives when debugging the containers management framework, please.
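[Editor's note: the set_fact/set_stats combination described above could look roughly like the following playbook fragment. The variable names (container_configs, container_runtime_info, container_results) are made up for this sketch and are not the module's real internals.]

```yaml
# Illustrative sketch only: collect the three kinds of data mentioned
# above into one per-host dictionary and export it with set_stats so a
# post-processing pipeline can consume it later.
- name: Aggregate containers management data
  hosts: all
  tasks:
    - name: Build the per-host aggregate
      set_fact:
        containers_data:
          configs: "{{ container_configs | default({}) }}"        # input configs per deploy step
          runtime: "{{ container_runtime_info | default({}) }}"   # snapshotted runtime info
          results: "{{ container_results | default({}) }}"        # 'changed' / 'broken' states

    - name: Export the aggregate for post-processing
      set_stats:
        data:
          containers_summary: "{{ containers_data }}"
        per_host: true
```

With per_host set to true, the exported stats stay keyed by host, which is what makes deployment-wide analysis of each node's container states possible afterwards.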
[0] https://opendev.org/openstack/paunch [1] https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_container_manage [2] https://launchpad.net/bugs/1879455 [3] https://review.opendev.org/#/q/topic:bug/1879455 [4] https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/ansible_plugins/filter/helpers.py [5] https://docs.ansible.com/ansible/latest/modules/set_stats_module.html -- Best regards, Bogdan Dobrelya, Irc #bogdando From dsneddon at redhat.com Fri May 29 09:14:51 2020 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 29 May 2020 02:14:51 -0700 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay In-Reply-To: References: Message-ID: You probably want to enable Neutron segments and use the Neutron routed networks feature so you can use different subnets on different segments (layer 2 domains AKA VLANs) of the same network. You specify different values such as IP allocation pools and router address(es) for each subnet, and Ironic and Neutron will do the right thing. You need to enable segments in the Neutron configuration and restart the Neutron server. I don’t think you will have to recreate the network. Behind the scenes, dnsmasq will be configured with multiple subnets and address scopes within the Neutron DHCP agent and the Ironic Inspector agent. Each segment/subnet will be given a different VLAN ID. As Dmitry mentioned, TripleO uses that method for the provisioning network, so you can use that as an example. The provisioning network in TripleO is the one referred to as the “control plane” network. 
-Dan On Fri, May 29, 2020 at 12:51 AM Dmitry Tantsur wrote: > Hi Tom, > > I know for sure that people are using DHCP relay with ironic, I think the > TripleO documentation may give you some hints (adjusted to your presumably > non-TripleO environment): > http://tripleo.org/install/advanced_deployment/routed_spine_leaf_network.html#dhcp-relay-configuration > > Dmitry > > On Thu, May 28, 2020 at 11:06 PM Amy Marrich wrote: > >> Hey Tom, >> >> Forwarding to the OpenStack discuss list where you might get more >> assistance. >> >> Thanks, >> >> Amy (spotz) >> >> On Thu, May 28, 2020 at 3:32 PM Thomas King >> wrote: >> >>> Good day, >>> >>> We have Ironic running and connected via VLANs to nearby machines. We >>> want to extend this to other parts of our product development lab without >>> extending VLANs. >>> >>> Using DHCP relay, we would point to a single IP address to serve DHCP >>> requests but I'm not entirely sure of the Neutron network/subnet >>> configuration, nor which IP address should be used for the relay agent on >>> the switch. >>> >>> Is DHCP relay supported by Neutron? 
>>> >>> My guess is to add a subnet in the provisioning network and point the >>> relay agent to the linuxbridge interface's IP: >>> 14: brq467f6775-be: mtu 1500 qdisc >>> noqueue state UP group default qlen 1000 >>> link/ether e2:e9:09:7f:89:0b brd ff:ff:ff:ff:ff:ff >>> inet 10.10.0.1/16 scope global brq467f6775-be >>> valid_lft forever preferred_lft forever >>> inet6 fe80::5400:52ff:fe85:d33d/64 scope link >>> valid_lft forever preferred_lft forever >>> >>> Thank you, >>> Tom King >>> _______________________________________________ >>> openstack-mentoring mailing list >>> openstack-mentoring at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-mentoring >>> >> -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri May 29 10:29:20 2020 From: zigo at debian.org (Thomas Goirand) Date: Fri, 29 May 2020 12:29:20 +0200 Subject: [puppet] Puppet 5 is officially unsupported in Victoria release In-Reply-To: <1590678440941.1665@binero.com> References: <1588928163362.88879@binero.com> <20200508175819.v5fpdw2xxewyuqb3@yuggoth.org> <87785b66-3b32-ceef-da00-4a4c4331d230@debian.org> <20200508202449.lw6yslw5yzkpy42o@yuggoth.org> <555fb8c7-60e9-b23f-78b1-81aa901ad3a5@debian.org> <1589050337848.3939@binero.com> <92450b45-f011-afe0-3e4f-2f59e520f1a5@debian.org> <96e6af4d-d9df-4210-6f9d-9f83164d7499@debian.org> <1589966309759.42370@binero.com> <51ed8f75-fc3a-5539-daef-5693758c3d68@debian.org> <1590616492646.63126@binero.com> <287ca10a-2467-dc09-546a-7326b0358c20@debian.org> <1590660761517.77469@binero.com> <85be2041-5ab0-a659-3a70-bd9d0d877412@debian.org> <1590678440941.1665@binero.com> Message-ID: <48168732-0eca-2564-e33c-5745ac8800a8@debian.org> On 5/28/20 5:07 PM, Tobias Urdin wrote: > Hello Thomas, > > My proposal was that we officially state that Puppet 5 is unsupported 
but we don't > break it for the Victoria cycle to make sure nobody actually uses Puppet 5. > > Best regards That's fine then, let's go ahead. I use a packaged version of the modules as you know, so that shouldn't be breaking me as you wrote. Cheers, Thomas Goirand (zigo) From elod.illes at est.tech Fri May 29 11:34:46 2020 From: elod.illes at est.tech (Előd Illés) Date: Fri, 29 May 2020 13:34:46 +0200 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> Message-ID: Hi, First of all, let's just state that Matt's resolution about Extended Maintenance [1] is a good and detailed description of what and how the community decided to handle the extended support of old stable branches. Even the testing difficulties part [2] is well described and actually explains the very same situation that Gmann's mail is about. According to that section it is possible that branches use reduced sets of testing, even without tempest jobs. I think we understood that it is more risky to accept fixes that might not be fully tested, but that is a risk that we accepted when the Extended Maintenance process was proposed. Furthermore, I could imagine that some patches are not accepted to get merged in such branches because stable cores consider it too risky without testing it properly, while other patches can be easily merged as those are not considered dangerous. On the other hand, of course, if a project's stable maintainers do not have the capacity to deal with an old branch, then it is a possibility to propose End of Life for that branch on this mailing list (according to process [3]). If no one volunteers to help them with the reviews and backports, then it's time to EOL.
And again, according to the resolution, a per-project EOL is envisioned, not an overall community-wide EOL as before. What I see, though, is that there has not been that high a number of incoming patches against Ocata in the last months, so it shouldn't take that much extra work compared to newer stable branches; still, it needs work of course, and dedicated reviewers, dedicated time. So those who are interested in keeping old branches alive should step up and spend time on reviewing. We shouldn't forget that this process is to help developers, vendors and operators. But we need them to show interest. What Sean wrote about the requirements team's burden is much more real, as that is the biggest pain point of the old branches. (Especially now, when not only py27 is dropped, but random modules decide to drop py35, py36 and py37, too - not to mention that sometimes they do it improperly, causing extra chaos). This could really push us toward EOL'ing a complete branch. So maybe this team needs more help from interested parties. Finally, my last comment: it's a great success that two years after the originally planned EOL date, Ocata could still accept fixes. That was/is the goal of Extended Maintenance. TL;DR: If it's not feasible to fix a general issue of a job, then drop that job. And I think we should not EOL Ocata in general, rather let projects EOL their ocata branch if they cannot invest more time on fixing them. [1] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html [2] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#testing [3] https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life Előd On 2020. 05. 28. 20:30, Sean McGinnis wrote: > On 5/28/20 12:03 PM, Ghanshyam Mann wrote: >> Hello Everyone, >> >> Ocata is in the 'Extended Maintenance' phase[1] but its gate is >> broken and difficult to fix now.
>> >> The main issue started for gate is about Tempest and its deps >> constraint is correctly used on Ocata or not. >> We fixed most of those issues when things started failing during py2 >> drop[2]. >> >> But now things are broken due to newer stestr is installed in Ocata >> jobs[2]. As Sean also mentioned in this >> backport[4] that stestr is not constraint in Ocata[5] as it was not >> in g-r that time so cannot be added in u-c. > > We actually did just recently merge a patch to add it: > > https://review.opendev.org/#/c/725213/ > > There were some other breakages that had to be fixed before we could get > it to land, so that took a little longer than expected. That said... > >> >> Tempest old tag cannot be fixed to have that with cap on the tempest >> side. >> >> The only option left to unblock the gate is to remove the integration >> testing from Ocata gate. But it will untested after that >> as we cannot say unit testing is enough to mark it tested.  At least >> from QA perspective, I would not say it is tested >> if there is no integration job running there. >> >> Keeping it untested (no integration testing) and saying it is >> Maintained in the community can be very confusing things. >> I think its time to move it to the 'Unmaintained' phase and then EOL. >> Thought? > > I do think it is getting to the point where it is becoming increasingly > difficult to keep stable/ocata happy, and time is better spent on more > recent stable branches. > > When we discussed changing the stable policy to allow for Extended > Maintenance, part of that was that we would support EM as long as > someone was able to keep things working and tested. I think we've gotten > to the point where that is no longer the case with ocata. > > This is the first branch that has made it this long before going EOL, so > it has been useful to see what other issues came up that were not > discussed. 
One thing that it's made me realize is there is a lot more > codependency than we maybe realized in those early discussions. The > policy and discussions that I remember were really about whether each > project would continue to put in the effort to keep things running. But > it really requires a lot more than just an individual project team to do > that. If any one of the "core" projects starts to have an issue, that > really means all of the projects have an issue. So it would not be > possible for, just as an example, Cinder to decide to transition its > stable/ocata branch to EOL, but Nova to keep maintaining their > stable/ocata branch. > > There is also the cross-cutting concerns of requirements management and > tempest. I think these two areas have been where the most issues have > cropped up, and therefore requiring the most work to keep things > running. Kudos to gmann and the tempest team for getting things to work > after we've dropped py27. But requirements is a very small team (as is > tempest/QA), and things are breaking that require some thought and time > to work through the right way to handle issues in a stable branch, so a > lot of the effort is falling to these smaller groups to try to prop > things up. > > I know I personally don't have extra time to be working on these, and no > real motivation other than I like to see Zuul comments be green. Now > that we've gotten the stestr issue hopefully resolved, I don't think I > can spend any more time on stable/ocata issues. Others are free to pitch > in, but I really think it is in the community's best interest if we just > decide we've done enough (Ocata would have gone EOL over 2 years ago > under the old policy) and it's time to call it and mark those branches > EOL. 
> > Sean > > From sean.mcginnis at gmx.com Fri May 29 12:54:05 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 29 May 2020 07:54:05 -0500 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> Message-ID: On 5/29/20 6:34 AM, Előd Illés wrote: > [snip] > > TL;DR: If it's not feasible to fix a general issue of a job, then drop > that job. And I think we should not EOL Ocata in general, rather let > projects EOL their ocata branch if they cannot invest more time on > fixing them. The interdependency is the trick here. Some projects can easily EOL on their own and it's isolated enough that it doesn't cause issues. But for other projects, like Cinder and Nova that I mentioned, it's kind of an all-or-nothing situation. I suppose it is feasible that we drop testing to only running unit tests. If we don't run any kind of integration testing, then it does make these projects a little more independent. We still have the requirements issues though. Unless someone addresses any rot in the stable requirements, even unit tests become hard to run. Just thinking out loud on some of the issues I see. We can try to follow the original EM plan and leave it up to each project to declare their intent to go EOL, then tag ocata-eol to close it out. Or we can collectively decide Ocata is done and pull the big switch. 
Sean From gmann at ghanshyammann.com Fri May 29 13:17:16 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 May 2020 08:17:16 -0500 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> Message-ID: <17260946293.bf44dcaa81001.6800161932877911216@ghanshyammann.com> ---- On Fri, 29 May 2020 07:54:05 -0500 Sean McGinnis wrote ---- > On 5/29/20 6:34 AM, Előd Illés wrote: > > [snip] > > > > TL;DR: If it's not feasible to fix a general issue of a job, then drop > > that job. And I think we should not EOL Ocata in general, rather let > > projects EOL their ocata branch if they cannot invest more time on > > fixing them. > > The interdependency is the trick here. Some projects can easily EOL on > their own and it's isolated enough that it doesn't cause issues. But for > other projects, like Cinder and Nova that I mentioned, it's kind of an > all-or-nothing situation. > > I suppose it is feasible that we drop testing to only running unit > tests. If we don't run any kind of integration testing, then it does > make these projects a little more independent. > > We still have the requirements issues though. Unless someone addresses > any rot in the stable requirements, even unit tests become hard to run. > > Just thinking out loud on some of the issues I see. We can try to follow > the original EM plan and leave it up to each project to declare their > intent to go EOL, then tag ocata-eol to close it out. Or we can > collectively decide Ocata is done and pull the big switch. From the stable policy, if CI is broken and there is no maintainer, then we can move it to Unmaintained. And there is always the option to revert back to EM if a maintainer shows up. IMO, maintaining only with unit tests is not a good idea.
I have not heard from any projects that they are interested in maintaining it; if any are, then we can see how to proceed, otherwise collectively marking Ocata as Unmaintained is the right thing. [1] https://docs.openstack.org/project-team-guide/stable-branches.html#unmaintained -gmann > > Sean > > > From radoslaw.piliszek at gmail.com Fri May 29 13:18:19 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=c5=82aw_Piliszek?=) Date: Fri, 29 May 2020 15:18:19 +0200 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core Message-ID: Hi Folks! This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, CC'ed) as Kolla Ansible core. Doug coauthored the Nova cells support and helps greatly with the monitoring and logging facilities available in Kolla. Please give your feedback in this thread. If there are no objections, I will add Doug after a week from now (that is roughly when the PTG is over). -yoctozepto From aschultz at redhat.com Fri May 29 13:32:54 2020 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 29 May 2020 07:32:54 -0600 Subject: [heat] Deprecations in heat stack / apis In-Reply-To: <1669159331.1454022.1590718776872@mail.yahoo.com> References: <1669159331.1454022.1590718776872@mail.yahoo.com> Message-ID: On Thu, May 28, 2020 at 8:26 PM prakash RAMCHANDRAN wrote: > > Hi all, > > Can the Heat team send us a representative for Heat interop testing? Based on the triplo deprecating tripleo-heat-template, it appears there are deprecations, and we have been requesting help to review the add-ons for interop > What do you mean 'triplo deprecating tripleo-heat-template'? Can you point to where this is coming from? > refer : https://github.com/openstack/interop/blob/master/next.json > & https://github.com/openstack/interop/blob/master/2020.06.json > & https://github.com/openstack/heat-tempest-plugin/blob/master/heat_tempest_plugin/tests/api/test_heat_api.py > > > Add your topics to support Heat orchestration for interop.
> https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 > > > Thanks > Prakash - Chain Interop WG > > From radoslaw.piliszek at gmail.com Fri May 29 13:46:20 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=c5=82aw_Piliszek?=) Date: Fri, 29 May 2020 15:46:20 +0200 Subject: [all][dev][qa] cirros 0.5.1 In-Reply-To: <0a471df1-6e9b-f2e3-6340-7465f328a5a1@gmail.com> References: <0a471df1-6e9b-f2e3-6340-7465f328a5a1@gmail.com> Message-ID: <230b7c5b-f5b1-8895-d3e2-27dbd89646b3@gmail.com> Hi again! The patches in DevStack and Ironic have already merged. Thanks to everyone involved. :-) The Neutron patch is awaiting approval now [1]. Do note there are some issues with IPv6-only environments (afaik nothing new compared to 0.4.0). Please see [2] for details. [1] https://review.opendev.org/711425 [2] https://github.com/cirros-dev/cirros/issues/58 -yoctozepto On 2020-05-26 17:22, Radosław Piliszek wrote: > Hello OpenStack Folks! > > Since it's early in the Victoria cycle, it's a good time to revisit the > cirros 0.5.1 migration in common CI. The patch [1] is getting a cleanup > due to nova fix increasing instance memory limits upfront but I propose > to merge it as soon as we clean it up properly. It has been tested with > Ussuri in Kolla CI and had positive results from Tempest jobs for > Neutron and Ironic. > It would be great if we made sure projects no longer use the old cirros > 0.4.0 nor some old Ubuntus. > > [1] https://review.opendev.org/711492 > > -yoctozepto From elod.illes at est.tech Fri May 29 14:24:36 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 29 May 2020 16:24:36 +0200 Subject: [all][stable] Moving the stable/ocata to 'Unmaintained' phase and then EOL In-Reply-To: References: <1725c3cbbd0.11d04fc8645393.9035729090460383424@ghanshyammann.com> <762e58c8-44f6-79d0-d674-43becf3eb42a@gmx.com> Message-ID: <8bbcc334-d59c-f99c-078e-a221a8d0a3d9@est.tech> On 2020. 05. 29. 
14:54, Sean McGinnis wrote: > On 5/29/20 6:34 AM, Előd Illés wrote: >> [snip] >> >> TL;DR: If it's not feasible to fix a general issue of a job, then drop >> that job. And I think we should not EOL Ocata in general, rather let >> projects EOL their ocata branch if they cannot invest more time on >> fixing them. > > The interdependency is the trick here. Some projects can easily EOL on > their own and it's isolated enough that it doesn't cause issues. But for > other projects, like Cinder and Nova that I mentioned, it's kind of an > all-or-nothing situation. From the resolution: "Note that it is possible for our CI infrastructure to function based on EOL tags - https://review.opendev.org/#/c/520095/" [1] Does this cover the problem you mention? Or at least should we try whether this is enough? > > I suppose it is feasible that we drop testing to only running unit > tests. If we don't run any kind of integration testing, then it does > make these projects a little more independent. > > We still have the requirements issues though. Unless someone addresses > any rot in the stable requirements, even unit tests become hard to run. Yes, I experienced the same: from time to time, new constraints need to be added to projects (especially those that are not using the global upper-constraints.txt), and it is not always clear from the errors what the exact root cause is. Usually someone who takes the time to backport a patch also tries to fix the problem (not always, though, and not always with success). > > Just thinking out loud on some of the issues I see. We can try to follow > the original EM plan and leave it up to each project to declare their > intent to go EOL, then tag ocata-eol to close it out. Or we can > collectively decide Ocata is done and pull the big switch. > > Sean > Indeed, it depends on what the community wants: whether there are others who are interested in keeping a branch open, keeping Ocata open for bugfixes.
Thanks, Sean and everyone, for thinking this through, and moreover for fixing the ocata branch issues so far. :) Előd [1] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life From balazs.gibizer at est.tech Fri May 29 14:29:24 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 29 May 2020 16:29:24 +0200 Subject: [nova][neutron][ptg] How to increase the minimum bandwidth guarantee of a running instance In-Reply-To: References: <20200519195510.chfuo5byodcrooaj@skaplons-mac> Message-ID: <09K3BQ.U60PNNRYHFS31@est.tech> On Wed, May 20, 2020 at 13:50, Balázs Gibizer wrote: > > > On Tue, May 19, 2020 at 23:48, Sean Mooney wrote: >> On Tue, 2020-05-19 at 21:55 +0200, Slawek Kaplonski wrote: >> [snip] >>> > >>> > >>> > Option B - QoS rule update >>> > -------------------------- >>> > >>> > Allow changing the minimum bandwidth guarantee of a port that is >>> already >>> > bound to the instance. >>> > >>> > Today Neutron rejects such a QoS rule update. If we want to >>> support such an >>> > update then: >>> > * either Neutron should call the placement allocation_candidates API >>> and then >>> > update the instance's allocation, similarly to what Nova does in >>> Option A; >>> > * or Neutron should tell Nova that the resource request of the >>> port has been >>> > changed and then Nova needs to call Placement and update the >>> instance's >>> > allocation. >>> >>> In this case, if You update a QoS rule, don't forget that a policy >>> with this rule >>> can be used by many ports already. So we will need to find all of >>> them and >>> call placement for each. >>> What if that will be fine for some ports but not for all? >> I think if we went with a qos rule update we would not actually >> modify the rule itself; >> that would break too many things. Instead we would change the qos >> rule that is applied to the port. >> >> e.g.
if you have a 1GBps rule and a 10GBps rule then we could support >> swapping between the rules, >> but we should not support changing the 1GBps rule to a 2GBps rule. >> >> neutron should ideally do the placement check and allocation update >> as part of the qos rule update >> api action and raise an exception if it could not. >>> >>> > >>> > >>> > Option A and Option B are not mutually exclusive, but still I >>> would like >>> > to see what the preference of the community is. Which direction >>> should we >>> > move forward? >>> >>> There is also a 3rd possible option, very similar to Option B, which >>> is changing >>> the QoS policy for the port. It's basically almost the same as >>> Option B, but >>> that way You always have only one port to update (unless it's a >>> policy >>> associated with the network). So because of that reason, maybe a bit >>> easier to do. >> >> yes, that is what I was suggesting above, and it's one of the options we >> discussed when first >> designing the minimum bandwidth policy. This I think is the optimal >> solution, and I don't think we should do >> option A or B, although A could be done as a separate feature, just not >> as the way we recommend to update qos policies. > > My mistake. I don't want to allow changing a rule; I want to allow > changing which rule is assigned to a bound port. As Sean described, > this direction might require neutron to call GET > /allocation_candidates and then update the instance allocation as a > result in placement. However, it would create a situation where the > instance's allocation is managed both from nova and neutron. >>> I've thought more about Option B (e.g. allowing the rule assigned to a bound port to be _replaced_) and probably found one more general limitation. From the placement perspective there could be two resource providers (RP1, RP2) on the same host that are connected to the same physnet (e.g. having the same CUSTOM_PHYSNET_XXX trait).
Both can have independent bandwidth inventories, and both can have different bandwidth usages. Let's assume that the port is currently allocating from RP1 and then the user requests an increase of the bandwidth allocation of the port via a qos min bw rule replacement. Let's assume that RP1 does not have enough free bandwidth resource to accommodate the change but RP2 does. From the placement perspective we could remove the existing bw allocation from RP1 and add the new, increased bw allocation to RP2. *BUT* we cannot simply do that from the networking perspective, as RP1 and RP2 represent two different PFs (or OVS bridges), so the allocation move would require the vif to be moved too in the networking backend. Do I understand correctly that this is a valid limitation from the networking perspective?

Also I would like to tease out the Neutron team's opinion about the option of implementing Option B on the neutron side. E.g.:
* The user requests a min bw rule replacement
* Neutron reads the current allocation of the port.device_id (i.e. the instance_uuid) from placement
* Neutron calculates the difference between the bw resource request of the old min bw rule and the new min bw rule
* Neutron adds this difference to the bw allocation of the RP indicated by the value of port.binding_profile['allocation'] (which is an RP uuid) and then PUTs the new instance allocation back to placement. If the PUT /allocations call succeeds, the rule replacement is accepted; if it fails, the rule replacement is rejected to the end user.

I'm asking this because moving the instance allocation management part of the above algorithm to the nova side would require additional logic:
* A new port-resource-request-changed event for os-server-external-event to notify nova about the change. This is a small inconvenience. BUT we also need
* a way for neutron to provide both the old and the new resource request of the port (or the diff) so that nova can use that towards placement.
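[Editorial aside: to make the Neutron-side flow sketched above concrete, here is a rough Python illustration of the allocation arithmetic. This is not existing Neutron code: the helper name and the simplified allocation structure are assumptions (the structure mirrors what placement's GET /allocations/{consumer_uuid} returns), and a real implementation would also have to send the consumer_generation with the PUT and handle a conflict/capacity failure from placement.]

```python
# Hypothetical sketch of the Neutron-side Option B flow described above.
import copy

def apply_rule_diff(allocations, rp_uuid, old_request, new_request):
    """Return a new allocations dict with the min-bw rule diff applied.

    allocations: {rp_uuid: {"resources": {resource_class: amount}}}
    old_request / new_request: {resource_class: amount} taken from the
    old and new min bw rules.
    """
    updated = copy.deepcopy(allocations)
    resources = updated.setdefault(rp_uuid, {"resources": {}})["resources"]
    for rc in set(old_request) | set(new_request):
        diff = new_request.get(rc, 0) - old_request.get(rc, 0)
        new_amount = resources.get(rc, 0) + diff
        if new_amount < 0:
            raise ValueError(f"allocation for {rc} would go negative")
        resources[rc] = new_amount
    # A real implementation would now PUT the updated allocations back to
    # placement; placement enforces inventory capacity, so a failing PUT
    # means the rule replacement must be rejected to the end user.
    return updated

alloc = {"rp1": {"resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000}}}
new_alloc = apply_rule_diff(
    alloc, "rp1",
    old_request={"NET_BW_EGR_KILOBIT_PER_SEC": 1000},
    new_request={"NET_BW_EGR_KILOBIT_PER_SEC": 2500},
)
print(new_alloc["rp1"]["resources"]["NET_BW_EGR_KILOBIT_PER_SEC"])  # 2500
```

Note that this only covers the easy case where the port stays on the same RP; as discussed above, moving the allocation to a different RP would also require moving the vif in the networking backend.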
Please note that the current tag field in the os-server-external-event request is only a string used to communicate the port_id, so it is not really useful for carrying structured data. Cheers, gibi From mark at stackhpc.com Fri May 29 14:31:33 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 29 May 2020 15:31:33 +0100 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core In-Reply-To: References: Message-ID: On Fri, 29 May 2020 at 14:18, Radosław Piliszek wrote: > > Hi Folks! > > This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, > CC'ed) as Kolla Ansible core. > > Doug coauthored the Nova cells support and helps greatly with monitoring > and logging facilities available in Kolla. > > Please give your feedback in this thread. +2! Doug would be a great addition to the team. > > If there are no objections, I will add Doug after a week from now (that > is roughly when PTG is over). > > -yoctozepto > From marcin.juszkiewicz at linaro.org Fri May 29 14:51:40 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Fri, 29 May 2020 16:51:40 +0200 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core In-Reply-To: References: Message-ID: <40c18751-dc2e-ea55-a2c0-10d0a7c7a78e@linaro.org> On 29.05.2020 at 15:18, Radosław Piliszek wrote: > Hi Folks! > > This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, > CC'ed) as Kolla Ansible core. +1 From openstack at nemebean.com Fri May 29 15:56:19 2020 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 29 May 2020 10:56:19 -0500 Subject: [oslo] PTG Plans Message-ID: <982ae8fc-4d2a-b91f-8fb1-b4c6c4b84e6a@nemebean.com> Hi, Sorry for the late notice on this. Many firedrills this week. I've made some updates to the etherpad[0] to reflect more concrete plans for both the location and timing of topics. It worked out to just give everything 15 minutes for the first hour, and I reserved the second hour for cross-project discussions with Nova.
As always, we can be flexible with the scheduling if something needs more or less discussion. The only timing constraint I saw mentioned was that Thierry needed the oslo.metrics topic to be early, so he gets to go first. :-) I also added a meetpad URL there. We used Jitsi pretty successfully for our previous virtual PTG so I don't see any reason to move off the open platform. I suggest trying to join ahead of time to make sure everything works for you. We do have a Zoom reserved for that time as well in case something catastrophic happens. If you have any concerns about the plan, let me know ASAP. Otherwise I'll see you all on Monday! -Ben 0: https://etherpad.opendev.org/p/oslo-victoria-topics From hello at dincercelik.com Fri May 29 16:55:27 2020 From: hello at dincercelik.com (Dincer Celik) Date: Fri, 29 May 2020 19:55:27 +0300 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core In-Reply-To: References: Message-ID: <6C61C7DA-9A96-40D3-9B89-9E188A0B34CC@dincercelik.com> +1 Doug is doing valuable work. His addition will be good. > On 29 May 2020, at 16:18, Radosław Piliszek wrote: > > Hi Folks! > > This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, CC'ed) as Kolla Ansible core. > > Doug coauthored the Nova cells support and helps greatly with monitoring and logging facilities available in Kolla. > > Please give your feedback in this thread. > > If there are no objections, I will add Doug after a week from now (that is roughly when PTG is over). > > -yoctozepto > From thomas.king at gmail.com Thu May 28 21:08:23 2020 From: thomas.king at gmail.com (Thomas King) Date: Thu, 28 May 2020 15:08:23 -0600 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay In-Reply-To: References: Message-ID: Thanks, that'd be great. Tom King On Thu, May 28, 2020 at 3:04 PM Amy Marrich wrote: > Hey Tom, > > Forwarding to the OpenStack discuss list where you might get more > assistance.
> > Thanks, > > Amy (spotz) > > On Thu, May 28, 2020 at 3:32 PM Thomas King wrote: > >> Good day, >> >> We have Ironic running and connected via VLANs to nearby machines. We >> want to extend this to other parts of our product development lab without >> extending VLANs. >> >> Using DHCP relay, we would point to a single IP address to serve DHCP >> requests but I'm not entirely sure of the Neutron network/subnet >> configuration, nor which IP address should be used for the relay agent on >> the switch. >> >> Is DHCP relay supported by Neutron? >> >> My guess is to add a subnet in the provisioning network and point the >> relay agent to the linuxbridge interface's IP: >> 14: brq467f6775-be: mtu 1500 qdisc >> noqueue state UP group default qlen 1000 >> link/ether e2:e9:09:7f:89:0b brd ff:ff:ff:ff:ff:ff >> inet 10.10.0.1/16 scope global brq467f6775-be >> valid_lft forever preferred_lft forever >> inet6 fe80::5400:52ff:fe85:d33d/64 scope link >> valid_lft forever preferred_lft forever >> >> Thank you, >> Tom King >> _______________________________________________ >> openstack-mentoring mailing list >> openstack-mentoring at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-mentoring >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yatinkarel at gmail.com Fri May 29 11:55:40 2020 From: yatinkarel at gmail.com (YATIN KAREL) Date: Fri, 29 May 2020 17:25:40 +0530 Subject: [rdo-users] [TripleO][rdo][train][centos7] fails at ovn_south_db_server: Start containers for step 4 using paunch In-Reply-To: References: Message-ID: Hi Ruslanas, On Wed, May 27, 2020 at 5:08 PM Ruslanas Gžibovskis wrote: > latest execution (3rd openstack deploy command if to be a bit more > precise): > https://pastebin.com/nKbqZRX0 < this is paunch.log on controller > https://pastebin.com/NP8DNgQ7 < this is ansible output from deploy > command. 
> > Can u check if s > I have issues with OVN deployment, Is there an option to use OVS (as I > understand OVS component was replaced by OVN?). > > Yes default is OVN, and it should work fine in Train. For the Error you are facing it looks like ovn_south_db_server container is not running on controller node for some reason, can u check with: sudopodman ps -a if it's in Exited state, also try to see if there are some logs which describe why it failed with: sudo podman logs ovn_south_db_server. If you still want to try OVS instead, could be done, There exist environment file[1] which can be used to override default ovn to ovs. Just pass -e ${_THT}/environments/services/neutron-ovs.yaml in the end of overcloud deploy comand. [1] https://github.com/openstack/tripleo-heat-templates/blob/stable/train/environments/services/neutron-ovs.yaml > Deploy command: > > _THT="/usr/share/openstack-tripleo-heat-templates" > _LTHT="$(pwd)" > > time openstack --verbose overcloud deploy \ > --templates \ > --stack rem0te \ > -r ${_LTHT}/roles_data.yaml \ > -n ${_LTHT}/network_data.yaml \ > -e ${_LTHT}/containers-prepare-parameter.yaml \ > -e ${_LTHT}/overcloud_images.yaml \ > -e ${_THT}/environments/disable-telemetry.yaml \ > -e ${_THT}/environments/host-config-and-reboot.yaml \ > --ntp-server > > config files: > 1 - https://github.com/qw3r3wq/homelab/tree/master/overcloud > > > Thank you for reading up to here. ;) double thank you for reply > _______________________________________________ > users mailing list > users at lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/users > > To unsubscribe: users-unsubscribe at lists.rdoproject.org > Thanks and Regards Yatin Karel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thomas.king at gmail.com Fri May 29 16:47:11 2020 From: thomas.king at gmail.com (Thomas King) Date: Fri, 29 May 2020 10:47:11 -0600 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay In-Reply-To: References: Message-ID: In the Triple-O docs for unicast DHCP relay, it doesn't exactly say which IP address to target. Without deploying Triple-O, I'm not clear if the relay IP should be the bridge interface or the DHCP device. The first method makes sense because the gateway for that subnet wouldn't be connected to the Ironic controller by layer 2 (unless we used VXLAN over the physical network). As an experiment, I created a second subnet on my provisioning network. The original DHCP device port now has two IP addresses, one on each subnet. That makes the second method possible if I targeted its original IP address. Thanks for the help and please let me know which method is correct. Tom King On Fri, May 29, 2020 at 3:15 AM Dan Sneddon wrote: > You probably want to enable Neutron segments and use the Neutron routed > networks feature so you can use different subnets on different segments > (layer 2 domains AKA VLANs) of the same network. You specify different > values such as IP allocation pools and router address(es) for each subnet, > and Ironic and Neutron will do the right thing. You need to enable segments > in the Neutron configuration and restart the Neutron server. I don’t think > you will have to recreate the network. Behind the scenes, dnsmasq will be > configured with multiple subnets and address scopes within the Neutron DHCP > agent and the Ironic Inspector agent. > > Each segment/subnet will be given a different VLAN ID. As Dmitry > mentioned, TripleO uses that method for the provisioning network, so you > can use that as an example. The provisioning network in TripleO is the one > referred to as the “control plane” network. 
> > -Dan > > On Fri, May 29, 2020 at 12:51 AM Dmitry Tantsur > wrote: > >> Hi Tom, >> >> I know for sure that people are using DHCP relay with ironic, I think the >> TripleO documentation may give you some hints (adjusted to your presumably >> non-TripleO environment): >> http://tripleo.org/install/advanced_deployment/routed_spine_leaf_network.html#dhcp-relay-configuration >> >> Dmitry >> >> On Thu, May 28, 2020 at 11:06 PM Amy Marrich wrote: >> >>> Hey Tom, >>> >>> Forwarding to the OpenStack discuss list where you might get more >>> assistance. >>> >>> Thanks, >>> >>> Amy (spotz) >>> >>> On Thu, May 28, 2020 at 3:32 PM Thomas King >>> wrote: >>> >>>> Good day, >>>> >>>> We have Ironic running and connected via VLANs to nearby machines. We >>>> want to extend this to other parts of our product development lab without >>>> extending VLANs. >>>> >>>> Using DHCP relay, we would point to a single IP address to serve DHCP >>>> requests but I'm not entirely sure of the Neutron network/subnet >>>> configuration, nor which IP address should be used for the relay agent on >>>> the switch. >>>> >>>> Is DHCP relay supported by Neutron? 
>>>> >>>> My guess is to add a subnet in the provisioning network and point the >>>> relay agent to the linuxbridge interface's IP: >>>> 14: brq467f6775-be: mtu 1500 qdisc >>>> noqueue state UP group default qlen 1000 >>>> link/ether e2:e9:09:7f:89:0b brd ff:ff:ff:ff:ff:ff >>>> inet 10.10.0.1/16 scope global brq467f6775-be >>>> valid_lft forever preferred_lft forever >>>> inet6 fe80::5400:52ff:fe85:d33d/64 scope link >>>> valid_lft forever preferred_lft forever >>>> >>>> Thank you, >>>> Tom King >>>> _______________________________________________ >>>> openstack-mentoring mailing list >>>> openstack-mentoring at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-mentoring >>>> >>> -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvoelker at vmware.com Fri May 29 17:33:30 2020 From: mvoelker at vmware.com (Mark Voelker) Date: Fri, 29 May 2020 17:33:30 +0000 Subject: [heat] Deprecations in heat stack / apis In-Reply-To: References: <1669159331.1454022.1590718776872@mail.yahoo.com> Message-ID: <5DEFA738-EC64-488A-8A64-B59ADA76CE16@vmware.com> Prakash, Are you referring to this mailing list thread [1]? If so that’s more or less an implementation detail in how TripleO deploys a cloud, and has absolutely no bearing on the API’s offered by Heat itself. If you’re looking for information on API’s, resources, or other functionality being deprecated over time, I’d suggest having a look at the Heat release notes. [2] The Heat API has been rather stable for some time now, though there have been a few resources deprecated over the past several releases (often following the underlying project that the resource supports—Nova or Neutron for example—deprecating something). At Your Service, Mark T. 
Voelker [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015035.html [2] https://docs.openstack.org/releasenotes/heat/ On May 29, 2020, at 9:32 AM, Alex Schultz > wrote: On Thu, May 28, 2020 at 8:26 PM prakash RAMCHANDRAN > wrote: Hi all, Can the Heat team send us a representative for Heat interop testing? Based on the triplo deprecating tripleo-heat-template, it appears there are deprecations, and we have been requesting help to review the add-ons for interop What do you mean 'triplo deprecating tripleo-heat-template'? Can you point to where this is coming from? refer : https://github.com/openstack/interop/blob/master/next.json & https://github.com/openstack/interop/blob/master/2020.06.json & https://github.com/openstack/heat-tempest-plugin/blob/master/heat_tempest_plugin/tests/api/test_heat_api.py Add your topics to support Heat orchestration for interop.
https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 Thanks Prakash - Chain Interop WG -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Fri May 29 17:47:42 2020 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Fri, 29 May 2020 19:47:42 +0200 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core In-Reply-To: References: Message-ID: +1! On Fri, 29 May 2020 at 15:19, Radosław Piliszek wrote: > Hi Folks! > > This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, > CC'ed) as Kolla Ansible core. > > Doug coauthored the Nova cells support and helps greatly with monitoring > and logging facilities available in Kolla. > > Please give your feedback in this thread. > > If there are no objections, I will add Doug after a week from now (that > is roughly when PTG is over). > > -yoctozepto > > -- Michał Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri May 29 18:25:27 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 29 May 2020 14:25:27 -0400 Subject: [cinder] PTG schedule Message-ID: <66cb1dcf-d5d8-fba1-de32-231d5c119c45@gmail.com> The Cinder team will be meeting in the zoom Diablo room, Tuesday-Friday 1300-1600 UTC. Make sure you register for the PTG so you will receive the passwords to the video meeting: https://virtualptgjune2020.eventbrite.com/ The complete schedule for the PTG is here: https://www.openstack.org/ptg#tab_schedule Although Cinder won't be meeting on Monday, there are plenty of interesting sessions you can attend.
See this helpful email for general PTG info: http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015054.html The official Cinder etherpad for the PTG is: https://etherpad.opendev.org/p/victoria-ptg-cinder It contains the schedule. Please look it over so you can see when you'll be leading a discussion. Feel free to enhance any topic with additional information or questions. The schedule is organized like this: Tuesday - Current Cinder Issues Wednesday - Third Party Drivers Thursday - Cinder Improvements Friday - Cross-Project and Cinder Project Business see you next week! brian From pramchan at yahoo.com Fri May 29 19:32:07 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 29 May 2020 19:32:07 +0000 (UTC) Subject: [heat] Deprecations in heat stack / apis In-Reply-To: <5DEFA738-EC64-488A-8A64-B59ADA76CE16@vmware.com> References: <1669159331.1454022.1590718776872@mail.yahoo.com> <5DEFA738-EC64-488A-8A64-B59ADA76CE16@vmware.com> Message-ID: <1012748290.1812985.1590780727928@mail.yahoo.com> Mark, Thanks for the clarification; agreed, based on today's interop weekly review call, that the TripleO installer templates do not change Heat's fixed template-creation functionality. We can drop this and focus on participation from any and all interested in joining the PTG call on June 1st to clear the core + add-ons. Refer: https://etherpad.opendev.org/p/victoria-ptg-interop-wg Thanks Prakash On Friday, May 29, 2020, 10:33:35 AM PDT, Mark Voelker wrote: Prakash, Are you referring to this mailing list thread [1]? If so, that's more or less an implementation detail in how TripleO deploys a cloud, and has absolutely no bearing on the APIs offered by Heat itself. If you're looking for information on APIs, resources, or other functionality being deprecated over time, I'd suggest having a look at the Heat release notes.
[2]  The Heat API has been rather stable for some time now, though there have been a few resources deprecated over the past several releases (often following the underlying project that the resource supports—Nova or Neutron for example—deprecating something). At Your Service, Mark T. Voelker [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015035.html [2] https://docs.openstack.org/releasenotes/heat/ On May 29, 2020, at 9:32 AM, Alex Schultz wrote: On Thu, May 28, 2020 at 8:26 PM prakash RAMCHANDRAN wrote: Hi all, Can the heat team send us representation for Heat Interop testing? Based on the triplo deprecating tripleo-heat-template, it appears there are deprecations, and we have been requesting help to review the add-ons to interop What do you mean 'triplo deprecating tripleo-heat-template'? Can you point to where this is coming from? refer: https://github.com/openstack/interop/blob/master/next.json & https://github.com/openstack/interop/blob/master/2020.06.json & https://github.com/openstack/heat-tempest-plugin/blob/master/heat_tempest_plugin/tests/api/test_heat_api.py Add your topics to support Heat orchestration for interop.
https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 Thanks Prakash - Chain Interop WG -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri May 29 20:37:43 2020 From: amy at demarco.com (Amy Marrich) Date: Fri, 29 May 2020 15:37:43 -0500 Subject: [Diversity] [PTG] Diversity WG at PTG Message-ID: The Diversity and Inclusion WG will be having a session at 16:00 UTC on Monday at https://meetpad.opendev.org/PTGDiversityAndInclusion As a WG reporting to the OSF Board we would like to invite all OSF projects to come and learn what the WG can do for their project. The WG oversees the Diversity and Inclusion surveys as well as the OpenStack mentoring program. Hope to see new faces on Monday! Amy Marrich (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rfolco at redhat.com Fri May 29 20:39:15 2020 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 29 May 2020 17:39:15 -0300 Subject: [tripleo] TripleO CI Summary: Unified Sprint 27 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 27** (May 07 thru May 27). The following is a summary of completed work during this sprint cycle [1]: - Completed stable branching work (ussuri). - Internal pipeline: created integration and component jobs. - Continued to fix promoter tests on CentOS8 and built an internal promoter server. - Submitted change reviews for the TripleO IPA multinode job. - Started backporting to train - Built CentOS8 train full promotion pipeline and a job to upgrade from train to ussuri. - Landing these jobs in upstream soon - Ruck/Rover recorded notes [2].
The planned work for the next sprint [3] extends the work started in the previous sprint and focuses on creating tests for ci-config repository. The Ruck and Rover for this sprint are Rafael Folco (rfolco) and Poja Jadhav (pojadhav). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in etherpad [4]. Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-27 [2] https://hackmd.io/2MdkNAUuT7aBcM0Yck4xnw [3] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-28 [4] https://hackmd.io/YAqFJrKMThGghTW4P2tabA -- Rafael Folco Senior Software Engineer -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Fri May 29 21:41:59 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 29 May 2020 17:41:59 -0400 Subject: [tc] changing meeting time Message-ID: Hello everyone, The PTG is happening online next week, on the same day as our TC monthly meeting. Since the PTG is only every six months, and we will meet the month after, how does everyone feel about skipping this month's meeting and considering the PTG as our meeting instead? Your comments are appreciated, Thanks, -- Mohammed Naser VEXXHOST, Inc. From fungi at yuggoth.org Fri May 29 21:48:07 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 29 May 2020 21:48:07 +0000 Subject: [tc] changing meeting time In-Reply-To: References: Message-ID: <20200529214807.lt5h4iozx2yh5w6e@yuggoth.org> On 2020-05-29 17:41:59 -0400 (-0400), Mohammed Naser wrote: > The PTG is happening online next week, on the same day as our TC > monthly meeting. > > Since the PTG is only every six months, and we will meet the month > after, how does everyone feel about skipping this month's meeting and > considering the PTG as our meeting instead? 
In a bygone era, the gathering of TC members at the Forum and PTG were basically considered *the* TC Meeting. As long as you can get quorum, it makes sense. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From nate.johnston at redhat.com Fri May 29 22:09:38 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Fri, 29 May 2020 18:09:38 -0400 Subject: [tc] changing meeting time In-Reply-To: <20200529214807.lt5h4iozx2yh5w6e@yuggoth.org> References: <20200529214807.lt5h4iozx2yh5w6e@yuggoth.org> Message-ID: <20200529220938.g5q5ujgbonmf3jk3@firewall> On Fri, May 29, 2020 at 09:48:07PM +0000, Jeremy Stanley wrote: > On 2020-05-29 17:41:59 -0400 (-0400), Mohammed Naser wrote: > > The PTG is happening online next week, on the same day as our TC > > monthly meeting. > > > > Since the PTG is only every six months, and we will meet the month > > after, how does everyone feel about skipping this month's meeting and > > considering the PTG as our meeting instead? > > In a bygone era, the gathering of TC members at the Forum and PTG > were basically considered *the* TC Meeting. As long as you can get > quorum, it makes sense. > -- > Jeremy Stanley I think this is the commonsense approach. Let's just do that. 
Thanks, Nate From iwienand at redhat.com Sat May 30 00:30:21 2020 From: iwienand at redhat.com (Ian Wienand) Date: Sat, 30 May 2020 10:30:21 +1000 Subject: Dropping python2.7 from diskimage-builder In-Reply-To: <20200521042248.GA1361000@fedora19.localdomain> References: <2967622E-AC8E-4386-AA72-B5791A8A0A1E@inaugust.com> <20200520222242.GA1349440@fedora19.localdomain> <20200521042248.GA1361000@fedora19.localdomain> Message-ID: <20200530003021.GA1770156@fedora19.localdomain> On Thu, May 21, 2020 at 02:22:48PM +1000, Ian Wienand wrote: > Since [1] has a few dependencies anyway, I think it's probably best to > force-merge [2,3,4,5] around the gate failures (they'd had review from > clarkb and otherwise passing) which fixes some bits about installing > the focal kernel and will mean that the last 2.X does not have any > known issues building images up until the focal release. After a few back and forths, 2.38.0 is intended to be the last dib release and doesn't have any known issues building up to focal on x86-64 and arm64, the rpm distros and gentoo are also in good shape. I'll leave this a few days and then merge the python-3 only changes and tag as 3.0.0 Thanks, -i From iwienand at redhat.com Sat May 30 00:43:14 2020 From: iwienand at redhat.com (Ian Wienand) Date: Sat, 30 May 2020 10:43:14 +1000 Subject: [all][infra] Upcoming removal of preinstalled pip and virtualenv from base images Message-ID: <20200530004314.GA1770592@fedora19.localdomain> Hello, This is to notify the community of the planned upcoming removal of the "pip-and-virtualenv" element from our infra image builds. This is part of the "cleanup test node python" spec documented at [1]. tl;dr ----- - pip and virtualenv tools will no longer be pre-installed on the base images we use for testing, but will be installed by jobs themselves. - most current jobs will depend on common Zuul base jobs that have been updated to install these tools already *i.e. 
in most cases there should be nothing to do* - in general, jobs should ensure they depend on roles that install these tools if they require them - if you do find your job failing due to the lack of either of these tools, use the ensure-pip or ensure-virtualenv roles provided in zuul-jobs - we have "-plain" node types that implement this now. If you think there might be a problem, switch the job to run against one of these node types described below for testing. History ------- The "pip-and-virtualenv" element is part of diskimage-builder [2] and installs the latest version of pip and virtualenv, and related tools like setuptools, into the system environment for the daily builds. This has a long history of working around various issues, but has become increasingly problematic to maintain. One of the more noticeable effects is that to prevent the distribution packages overwriting the upstream installs, we put various pip, setuptools and virtualenv packages on "hold" in CI images. This prevents tests reinstalling packaged tools, which can create, in short, a big mess. Both destroying the extant environment and having the tools on hold have been problems for various projects at various times. Another problem is that what happens when you call "pip" or "virtualenv" has diverged. It used to be that "pip" would install the package under Python 2, while pip3 would install under Python 3. However, modern distributions now have "pip" installing under Python 3. To keep having "pip" install Python 2 packages on these platforms is not just wrong, but drags in Python 2 dependencies to our images that aren't required. The addition of Python 3's venv and the split with virtualenv makes things even more confusing again. Future ------ As famously said, "the only winning move is not to play". [3] By dropping this element, images will not have non-packaged pip/virtualenv/setuptools/etc. pre-installed. No packages will be on hold.
The "ensure-pip" role [4] will ensure that dependencies for "pip:" commands will work. Because of the way base jobs are set up, it is most likely that this role has already run. If you wish to use a virtual environment to install a tool, I would suggest using the "ensure_pip_virtualenv_cmd" this role exports. This will default to "python3 -m venv". An example is [5]. In a Python 3 world, you probably do not *need* virtualenv; "python3 -m venv" is available and works after the "ensure-pip" role is run. However, if you require some of the features that virtualenv provides that venv does not (explained at [6]) there is a role "ensure-virtualenv" [7]. For example, we do this on the devstack branches because it is common to use "virtualenv" there due to the long history [8]. If you need specific versions of pip or virtualenv, etc. beyond the system-packaged versions installed with the above, you should have your job configure these. There is absolutely no problem with jobs installing differing versions of pip/virtualenv/etc in any way they want -- we just don't want the base images to have any of that by default. Of course, you should consider if you're building/testing something that is actually useful outside the gate, but that is a global concern. Testing ------- We have built parallel nodes with the suffix "-plain" where you can test any jobs in the new environment. For example [9] speculatively tests devstack. The node types available are centos-7-plain centos-8-plain ubuntu-xenial-plain ubuntu-bionic-plain The newer focal images do not have pip pre-installed, neither do the faster moving Fedora images, any SUSE images, or any ARM64 images. Rollout ------- We would like to make the switch soon, to shake out any issues early in the cycle. This would mean on or about the 8th June.
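To illustrate the "python3 -m venv" default mentioned above, the same environment creation can be driven from the stdlib venv module. A minimal sketch (the with_pip=False choice is an assumption made here so the example also works where the ensurepip bootstrap is unavailable; real jobs would normally want pip in the environment):

```python
import tempfile
import venv
from pathlib import Path

# Create a bare virtual environment in a throwaway directory.
# with_pip=False skips the ensurepip bootstrap, so this works even on
# minimal images that do not package ensurepip.
target = Path(tempfile.mkdtemp()) / "testenv"
venv.create(target, with_pip=False)

# Every venv carries a pyvenv.cfg marker file at its root; tools use it
# to recognise a virtual environment.
print((target / "pyvenv.cfg").exists())
```

Jobs that need virtualenv-specific behaviour rather than this stdlib flow are exactly the ones that should pull in the ensure-virtualenv role.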
Thanks, -i [1] https://docs.opendev.org/opendev/infra-specs/latest/specs/cleanup-test-node-python.html [2] https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/pip-and-virtualenv [3] https://en.wikipedia.org/wiki/WarGames [4] https://zuul-ci.org/docs/zuul-jobs/python-roles.html#role-ensure-pip [5] https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/bindep/tasks/install.yaml#L9 [6] https://virtualenv.pypa.io/en/latest/ [7] https://zuul-ci.org/docs/zuul-jobs/python-roles.html#role-ensure-virtualenv [8] https://opendev.org/openstack/devstack/commit/23cfb9e6ebc63a4da4577c0ef9e3450b9c946fa7 [9] https://review.opendev.org/#/c/712211/11/.zuul.yaml From gmann at ghanshyammann.com Sat May 30 01:18:45 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 May 2020 20:18:45 -0500 Subject: [tc] changing meeting time In-Reply-To: References: Message-ID: <1726328ec46.c0eec8e498297.978849335141497009@ghanshyammann.com> ---- On Fri, 29 May 2020 16:41:59 -0500 Mohammed Naser wrote ---- > Hello everyone, > > The PTG is happening online next week, on the same day as our TC > monthly meeting. > > Since the PTG is only every six months, and we will meet the month > after, how does everyone feel about skipping this month's meeting and > considering the PTG as our meeting instead? > > Your comments are appreciated, +1. make sense and anyways we are going to cover most of the meeting agenda things in PTG. -gmann > > Thanks, > > -- > Mohammed Naser > VEXXHOST, Inc. 
> > From ssbarnea at redhat.com Sat May 30 12:00:04 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Sat, 30 May 2020 13:00:04 +0100 Subject: [all][infra] Upcoming removal of preinstalled pip and virtualenv from base images In-Reply-To: <20200530004314.GA1770592@fedora19.localdomain> References: <20200530004314.GA1770592@fedora19.localdomain> Message-ID: <372AFC5D-9060-456B-988A-99E8DF73D10C@redhat.com> That is indeed a really good move which can be summarized as moving towards unadulterated system images, or at least not modified in a way that would bring them to a state where they are "unsupported". Preinstalling already existing system packages or setting up mirrors is ok, but other than this probably not. virtualenv/pip/setuptools prooved to be a very good source of issues. Thanks Sorin > On 30 May 2020, at 01:43, Ian Wienand wrote: > > Hello, > > This is to notify the community of the planned upcoming removal of the > "pip-and-virtualenv" element from our infra image builds. > > This is part of the "cleanup test node python" spec documented at [1]. > > tl;dr > ----- > > - pip and virtualenv tools will no longer be pre-installed on the base > images we use for testing, but will be installed by jobs themselves. > - most current jobs will depend on common Zuul base jobs that have > been updated to install these tools already *i.e. in most cases > there should be nothing to do* > - in general, jobs should ensure they depend on roles that install > these tools if they require them > - if you do find your job failing due to the lack of either of these > tools, use the ensure-pip or ensure-virtualenv roles provided in > zuul-jobs > - we have "-plain" node types that implement this now. If you think > there might be a problem, switch the job to run run against one of > these node types described below for testing. 
> > History > ------- > > The "pip-and-virtualenv" element is part of diskimage-builder [2] and > installs the latest version of pip and virtualenv, and related tools > like setuptools, into the system environment for the daily builds. > This has a long history of working around various issues, but has > become increasingly problematic to maintain. > > One of the more noticable effects is that to prevent the distribution > packages overwriting the upstream installs, we put various pip, > setuptools and virtualenv packages on "hold" in CI images. This > prevents tests resinstalling packaged tools which can create, in > short, a big mess. Both destroying the extant environment and having > the tools on hold have been problems for various projects at various > times. > > Another other problem is what happens when you call "pip" or > "virtualenv" has diverged. It used to be that "pip" would install the > package under Python2, while pip3 would install under python3. > However, modern distributions now have "pip" installing under Python > 3. To keep having "pip" install Python 2 packages on these platforms > is not just wrong, but drags in Python 2 dependences to our images > that aren't required. > > The addition of Python 3's venv and the split with virtualenv makes > things even more confusing again. > > Future > ------ > > As famously said "the only winning move is not to play". [3] > > By dropping this element, images will not have non-packaged > pip/virtualenv/setuptools/etc. pre-installed. No packages will be on > hold. > > The "ensure-pip" role [4] will ensure that dependencies for "pip:" > commands will work. Because of the way base jobs are setup, it is > most likely that this role has already run. If you wish to use a > virtual environment to install a tool, I would suggest using the > "ensure_pip_virtualenv_cmd" this role exports. This will default to > "python3 -m venv". An example is [5]. 
> > In a Python 3 word, you probably do not *need* virtualenv; "python3 -m > venv" is available and works after the "ensure-pip" role is run. > However, if you require some of the features that virtualenv provides > that venv does not (explained at [6]) there is a role > "ensure-virtualenv" [7]. For example, we do this on the devstack > branches because it is common to use "virtualenv" there due to the > long history [8]. > > If you need specific versions of pip or virtualenv, etc. beyond the > system-packaged versions installed with the above, you should have > your job configure these. There is absolutely no problem with jobs > installing differing versions of pip/virtuatlenv/etc in any way they > want -- we just don't want the base images to have any of that by > default. Of course, you should consider if you're building/testing > something that is actually useful outside the gate, but that is a > global concern. > > Testing > ------- > > We have built parallel nodes with the suffix "-plain" where you can > test any jobs in the new environment. For example [9] speculatively > tests devstack. The node types available are > > centos-7-plain > centos-8-plain > ubuntu-xenial-plain > ubuntu-bionic-plain > > The newer focal images do not have pip pre-installed, neither do the > faster moving Fedora images, any SUSE images, or any ARM64 images. > > Rollout > ------- > > We would like to make the switch soon, to shake out any issues early > in the cycle. This would mean on or about the 8th June. 
> > Thanks, > > -i > > [1] https://docs.opendev.org/opendev/infra-specs/latest/specs/cleanup-test-node-python.html > [2] https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/pip-and-virtualenv > [3] https://en.wikipedia.org/wiki/WarGames > [4] https://zuul-ci.org/docs/zuul-jobs/python-roles.html#role-ensure-pip > [5] https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/bindep/tasks/install.yaml#L9 > [6] https://virtualenv.pypa.io/en/latest/ > [7] https://zuul-ci.org/docs/zuul-jobs/python-roles.html#role-ensure-virtualenv > [8] https://opendev.org/openstack/devstack/commit/23cfb9e6ebc63a4da4577c0ef9e3450b9c946fa7 > [9] https://review.opendev.org/#/c/712211/11/.zuul.yaml > > From berndbausch at gmail.com Sun May 31 07:33:36 2020 From: berndbausch at gmail.com (Bernd Bausch) Date: Sun, 31 May 2020 16:33:36 +0900 Subject: [swift] Object Versioning: How to retrieve old versions of deleted objects? Message-ID: According to documentation of Swift's new object versioning middleware [1], a deleted object won't be visible but previous version content is still recoverable. Documentation doesn't say how, though. After deleting /myobject/, I can see its versions: $ openstack object delete testcontainer myobject $ curl http://192.168.1.200:8080/v1/$ACCOUNT/testcontainer?versions -H "x-auth-token: $T" | python -m json.tool | grep -e version_id -e name         "name": "myobject",         "version_id": "1590896642.07497"         "name": "myobject",         "version_id": "1590892906.83929"         "name": "myobject",         "version_id": "1590892899.36921" But I can't GET old versions of /myobject/: $ curl -i http://192.168.1.200:8080/v1/$ACCOUNT/testcontainer/myobject?version-id="1590896642.07497" -H "x-auth-token: $T" HTTP/1.1 404 Not Found After creating another object with the same name /myobject/, old versions are accessible again. This is a solution, but dare I say not a very elegant one. 
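The two GETs above differ only in the version-id query parameter. A small helper makes the composition explicit (a sketch only; the endpoint, account, container and object names are the placeholders from this thread, and $ACCOUNT would normally be an AUTH_-prefixed project id):

```python
from urllib.parse import quote, urlencode

def swift_object_url(endpoint, account, container, obj, version_id=None):
    """Compose a Swift object URL, optionally pinned to one version
    via the versioning middleware's version-id query parameter."""
    url = "%s/v1/%s/%s/%s" % (endpoint, account, quote(container), quote(obj))
    if version_id is not None:
        url += "?" + urlencode({"version-id": version_id})
    return url

# Latest version (the request that 404s once the object is deleted):
print(swift_object_url("http://192.168.1.200:8080", "AUTH_test",
                       "testcontainer", "myobject"))
# A specific prior version taken from the ?versions listing:
print(swift_object_url("http://192.168.1.200:8080", "AUTH_test",
                       "testcontainer", "myobject",
                       version_id="1590896642.07497"))
```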
*Can **a deleted, versioned object be recovered without creating a dummy object with the same name?* [1] https://docs.openstack.org/swift/latest/middleware.html#object-crud-operations-to-a-versioned-container -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Sun May 31 07:46:06 2020 From: berndbausch at gmail.com (Bernd Bausch) Date: Sun, 31 May 2020 16:46:06 +0900 Subject: [swift] Object Versioning: How to retrieve old versions of deleted objects? In-Reply-To: References: Message-ID: I'd like to delete the attached email, but it's not possible. Sorry. I did not see that the object version that I tried to retrieve had a content type of "application/x-deleted;swift_versions_deleted=1". That explains the behaviour. Bernd. On 5/31/2020 4:33 PM, Bernd Bausch wrote: > > According to documentation of Swift's new object versioning middleware > [1], a deleted object won't be visible but previous version content is > still recoverable. > > Documentation doesn't say how, though. After deleting /myobject/, I > can see its versions: > > $ openstack object delete testcontainer myobject > $ curl http://192.168.1.200:8080/v1/$ACCOUNT/testcontainer?versions -H > "x-auth-token: $T" | python -m json.tool | grep -e version_id -e name >         "name": "myobject", >         "version_id": "1590896642.07497" >         "name": "myobject", >         "version_id": "1590892906.83929" >         "name": "myobject", >         "version_id": "1590892899.36921" > > But I can't GET old versions of /myobject/: > > $ curl -i > http://192.168.1.200:8080/v1/$ACCOUNT/testcontainer/myobject?version-id="1590896642.07497" > -H "x-auth-token: $T" > HTTP/1.1 404 Not Found > > After creating another object with the same name /myobject/, old > versions are accessible again. This is a solution, but dare I say not > a very elegant one. 
*Can **a deleted, versioned object be recovered > without creating a dummy object with the same name?* > [1] > https://docs.openstack.org/swift/latest/middleware.html#object-crud-operations-to-a-versioned-container > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Sun May 31 10:04:25 2020 From: stig.openstack at telfer.org (Stig Telfer) Date: Sun, 31 May 2020 11:04:25 +0100 Subject: [scientific-sig] Virtual PTG sessions on Monday Message-ID: <2FF0EAAD-BC3E-4926-BF58-29162557E99F@telfer.org> Hi All - We have the Scientific SIG sessions at the PTG on Monday at 15:00-17:00 UTC and 21:00-23:00 UTC. Everyone is welcome. We have been gathering details and planning content in the Etherpad provided at https://etherpad.opendev.org/p/victoria-ptg-scientific-sig The broad plan is to try to get a little more in-depth than our usual discussions on IRC and Slack, and for people to bring topics for discussion to share information with other science cloud operators in the SIG. Good topics would be pain points, problems, functional gaps, etc, for which others might have creative solutions. Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From yumeng_bao at yahoo.com Sun May 31 15:53:39 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Sun, 31 May 2020 15:53:39 +0000 (UTC) Subject: [cyborg][nova][neutron]Summaries of Smartnic support integration In-Reply-To: References: Message-ID: <1936985058.312813.1590940419571@mail.yahoo.com> Hi Sean and Lajos, Thank you so much for your quick response, good suggestions and feedback! @Sean Mooney > if we want to supprot cyborg/smartnic integration we should add a new > device-profile extention that intoduces the ablity > for a non admin user to specify a cyborg device profile name as a new > attibute on the port. +1, agreed. Cyborg likes this suggestion! This makes it clearer that the field is for device-profile usage.
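To make the shape of that suggestion concrete, here is a purely illustrative sketch. The attribute name "device_profile" and all values below are hypothetical (this is exactly the extension still being designed in this thread, not an existing Neutron API); the point is only the contrast with tunnelling the value through the admin-only binding:profile:

```python
import json

# Hypothetical port-create body with the device profile as a first-class,
# non-admin attribute (attribute name invented for illustration):
proposed = {
    "port": {
        "network_id": "11111111-2222-3333-4444-555555555555",
        "device_profile": "smartnic-dp",
    }
}

# The approach the thread steers away from: hiding the same value inside
# binding:profile, which is admin-only and meant for nova->neutron data:
discouraged = {
    "port": {
        "network_id": "11111111-2222-3333-4444-555555555555",
        "binding:profile": {"device_profile": "smartnic-dp"},
    }
}

print(json.dumps(proposed, indent=2, sort_keys=True))
```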
The reason why we were first thinking of using binding:profile is that it is the way with the smallest number of changes possible in both nova and neutron. But considering the non-admin issue, the violation of the one-way communication of binding:profile, and the possible security risk of breaking nova (which we surely don't want to do), we prefer giving up binding:profile and finding a better place to put the new device-profile extension. > the neutron server could then either retirve the request groups form > cyborg and pass them as part of the port resouce > request using the mechanium added for minium bandwidth or it can leave > that to nova to manage. > > i would kind of prefer neutron to do this but both could work. Yes, the neutron server can also do that, but given the fact that we already landed the code for retrieving request groups from cyborg in nova, can we reuse this process in nova and add a new step in create_resource_requests to create the accelerator resource request from port info?  I would very much appreciate it if this change could land in nova, as I see these advantages: 1) This keeps accelerator request group handling in one place, which makes the integration clear and simple: Nova controls the main management, neutron handles network backend integration, and cyborg handles accelerator management. 2) Another good thing: this dispels Lajos's concerns on port-resource-request! 3) As presented in the proposal (pages 3 and 5 of the slides) [0], please don't worry! This will be a tiny change in nova. Cyborg would be very grateful if this change could land in nova, for it saves much effort in the cyborg-neutron integration. @Lajos Katona: > Port-resource-request (see:https://docs.openstack.org/api-ref/network/v2/index.html#port-resource-request) > is a read-only (and admin-only) field of ports, which is filled > based on the agent heartbeats. So now there is no polling of agents or > similar.
Adding extra "overload" to this mechanism, like polling cyborg or > similar looks like something out of the original design for me, not to speak about the performance issues to add >  >    - API requests towards cyborg (or anything else) to every port GET >    operation >    - store cyborg related information in neutron db which was fetched from >    cyborg (periodically I suppose) to make neutron able to fill >    port-resource-request. As mentioned above, if the request group handling can land in nova, we don't need to worry about API requests towards cyborg or about merging cyborg info into port-resource-request. Another question, just out of curiosity. In my understanding (please correct me if I'm wrong), I feel that neutron doesn't need to poll cyborg periodically if neutron fills port-resource-request; it can just fetch once when the port request happens, because neutron can expect that the cyborg device_profile (which provides the resource request info for the nova scheduler) doesn't change very often; it is the flavor of the accelerator, and only admins can create/delete them. [0]pre-PTG slides update: https://docs.qq.com/slide/DVkxSUlRnVGxnUFR3 Regards, Yumeng On Friday, May 29, 2020, 3:21:08 PM GMT+8, Lajos Katona wrote: Hi, Port-resource-request (see: https://docs.openstack.org/api-ref/network/v2/index.html#port-resource-request ) is a read-only (and admin-only) field of ports, which is filled based on the agent heartbeats. So now there is no polling of agents or similar. Adding extra "overload" to this mechanism, like polling cyborg or similar looks like something out of the original design for me, not to speak about the performance issues to add      * API requests towards cyborg (or anything else) to every port GET operation     * store cyborg related information in neutron db which was fetched from cyborg (periodically I suppose) to make neutron able to fill port-resource-request. Regards Lajos Sean Mooney ezt írta (időpont: 2020. máj.
28., Cs, 16:13): > On Thu, 2020-05-28 at 20:50 +0800, yumeng bao wrote: >>  >> Hi all, >> >> >> In cyborg pre-PTG meeting conducted last week[0],shaohe from Intel introduced SmartNIC support integrations,and we've >> reached some initial agreements: >> >> The workflow for a user to create a server with network acceleartor(accelerator is managed by Cyborg) is: >> >>   1. create a port with accelerator request specified into binding_profile field >>  NOTE: Putting the accelerator request(device_profile) into binding_profile is one possible solution implemented in >> our POC. > the binding profile field is not really intended for this. > > https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/portbindings.py#L31-L34 > its intended to pass info from nova to neutron but not the other way around. > it was orgininally introduced so that nova could pass info to the neutron plug in > specificly the sriov pci address. it was not intended for two way comunicaiton to present infom form neutron > to nova. > > we kindo of broke that with the trusted vf feature but since that was intended to be admin only as its a security risk > in a mulit tenant cloud its a slightl different case. > i think we should avoid using the binding profile for passing info form neutron to nova and keep it for its orginal > use of passing info from the virt dirver to the network backend. > > >>        Another possible solution,adding a new attribute to port object for cyborg specific use instead of using >> binding_profile, is discussed in shanghai Summit[1]. >> This needs check with neutron team, which neutron team would suggest? > from a nova persepctive i would prefer if this was  a new extention. > the binding profile is admin only by default so its not realy a good way to request features be enabled. 
> you can use neutron rbac policies to alther that i belive but in genral i dont think we shoudl advocate for non admins > to be able to modify the binding profile as they can break nova. e.g. by modifying the pci addres. > if we want to supprot cyborg/smartnic integration we should add a new device-profile extention that intoduces the ablity > for a non admin user to specify a cyborg device profile name as a new attibute on the port. > > the neutron server could then either retirve the request groups form cyborg and pass them as part of the port resouce > request using the mechanium added for minium bandwidth or it can leave that to nova to manage. > > i would kind of prefer neutron to do this but both could work. >> >>   2.create a server with the port created >> >> Cyborg-nova-neutron integration workflow can be found on page 3 of the slide[2] presented in pre-PTG. >> >> And we also record the introduction! Please find the pre-PTG meeting vedio record in [3] and [4], they are the same, >> just for different region access. >> >> >> [0]http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014987.html >> [1]https://etherpad.opendev.org/p/Shanghai-Neutron-Cyborg-xproj >> [2]pre-PTG slides:https://docs.qq.com/slide/DVm5Jakx5ZlJXY3lw >> [3]pre-PTG vedio records in Youtube:https://www.youtube.com/watch?v=IN4haOK7sQg&feature=youtu.be >> [4]pre-PTG vedio records in Youku: >> http://v.youku.com/v_show/id_XNDY5MDA4NjM2NA==.html?x&sharefrom=iphone&sharekey=51459cbd599407990dd09940061b374d4 >> >> Regards, >> Yumeng >> > > > From skaplons at redhat.com Sun May 31 15:59:58 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sun, 31 May 2020 17:59:58 +0200 Subject: Openstack user surver - questions Message-ID: <20200531155958.y3t2en2jydrdx3gx@skaplons-mac> Hi, First of all sorry if I didn't look for it long enough but I have couple of questions about user surver and I couldn't find answer for them anywhere. 1. 
I was looking at [1] for some Neutron related data and I found only one question about used backends in "Deployment Decisions". The problem is that this graph is a bit unreadable for me due to the many items on the x-axis, which overlap each other. Is there any place where I can find some "raw data" to check? 2. Another question about the same chart: is there any way to maybe change the possible replies in the survey for next years? I'm asking because I have a feeling that e.g. the responses "Open vSwitch" and "ML2 - Open vSwitch" may be confusing for users. My understanding is that "Open vSwitch" simply means the old "Open vSwitch" core plugin rather than the ML2 plugin, but this old plugin was removed around the "Liberty" cycle, so I really don't think that 37% of users are still using it. 3. Is there any way to propose new, more detailed questions about e.g. Neutron? For example, what service plugins they are using. [1] https://www.openstack.org/analytics -- Slawek Kaplonski Senior software engineer Red Hat From knikolla at bu.edu Sun May 31 18:13:34 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Sun, 31 May 2020 18:13:34 +0000 Subject: [keystone] Keystone PTG Schedule Message-ID: <10FA4BC8-3139-425E-9F86-5F02F756DF61@bu.edu> Hi all, The Keystone team will be meeting Thursday and Friday 1300-1700 UTC. The schedule[0] can be found below and provides information on how to register and attend. In addition, I'm hoping to organize a team meetup Wednesday, Thursday or Friday at 1700 UTC. Please fill in the doodle at [1]. More general PTG information can be found at [2]. [0]. https://etherpad.opendev.org/p/victoria-ptg-keystone [1]. https://doodle.com/poll/z7diwe6qyq4mbghe [2].
http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015054.html From allison at openstack.org Sun May 31 18:40:16 2020 From: allison at openstack.org (Allison Price) Date: Sun, 31 May 2020 13:40:16 -0500 Subject: OpenStack user survey - questions In-Reply-To: <20200531155958.y3t2en2jydrdx3gx@skaplons-mac> References: <20200531155958.y3t2en2jydrdx3gx@skaplons-mac> Message-ID: <272339AC-F0D8-4FD7-8D1C-34F166449962@openstack.org> Hi Slawek, Thanks for reaching out with your questions about the user survey! > On May 31, 2020, at 10:59 AM, Slawek Kaplonski wrote: > > Hi, > > First of all, sorry if I didn't look for it long enough, but I have a couple of > questions about the user survey and I couldn't find answers for them anywhere. > > 1. I was looking at [1] for some Neutron related data and I found only one > question about used backends in "Deployment Decisions". The problem is that this > graph is a bit unreadable for me due to the many items on the x-axis, which overlap > each other. Is there any place where I can find some "raw data" to check? The Foundation can pull raw data for you and share it, as long as the information remains anonymized. Is the used backends question the only one you want data on? Or would you also like data on the percentage of users interested in, testing, and deploying Neutron? This data is also available in the analytics dashboard [1], but it can often be hard to read as well. > > 2. Another question about the same chart: is there any way to maybe change > the possible replies in the survey for next years? I'm asking because I > have a feeling that e.g. the responses "Open vSwitch" and "ML2 - Open vSwitch" may > be confusing for users. > My understanding is that "Open vSwitch" simply means the old "Open vSwitch" core > plugin rather than the ML2 plugin, but this old plugin was removed around the "Liberty" > cycle, so I really don't think that 37% of users are still using it. We try to keep most questions static from survey to survey for comparison reasons.
However, if you think that some responses are confusing and can propose alternative language, we can consider that and make those changes. > > 3. Is there any way to propose new, more detailed questions about e.g. Neutron? > For example, what service plugins they are using. We have let each PTL add 1-2 optional questions at the end of the survey for respondents who indicated they were working with a particular project. The current Neutron question is: Which of the following features in the Neutron project are you actively using, interested in using or looking forward to using in your OpenStack deployment? The current user survey cycle ends in late August. That is when we will circulate the anonymized results to this question with the openstack-discuss mailing list, along with other project-specific questions. At that time, PTLs can let us know if they would like to change their question. Let me know if you have any other questions - happy to help! I’ll also be around this week during the PTG if you would like me to jump in and clarify anything. Allison Price IRC: aprice > > [1] https://www.openstack.org/analytics > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Sun May 31 19:20:06 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Sun, 31 May 2020 14:20:06 -0500 Subject: [security] Security SIG PTG Schedule Message-ID: Hello, The Security SIG will be meeting for the PTG on Monday, June 1st from 1500-1700 UTC & 2100-2300 UTC. The current agenda etherpad can be found here[0]. Hope to see you there! [0] https://etherpad.opendev.org/p/security-sig-ptg-victoria -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gagehugo at gmail.com Sun May 31 19:22:19 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Sun, 31 May 2020 14:22:19 -0500 Subject: [openstack-helm] OpenStack-Helm PTG Schedule Message-ID: Hello, The OpenStack-Helm team will be meeting for the PTG on Tuesday, June 2nd and Wednesday, June 3rd from 1300-1700 UTC. The current agenda etherpad can be found here[0]. Hope to see you there! [0] https://etherpad.opendev.org/p/openstack-helm-ptg-victoria -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Sun May 31 19:23:50 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Sun, 31 May 2020 14:23:50 -0500 Subject: [security] Security SIG meeting June 04th Cancellation Message-ID: Hello everyone, The Security SIG meeting this week, June 04th, will be cancelled due to the PTG happening this week. We will continue regular meetings again next week. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Sun May 31 19:24:51 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Sun, 31 May 2020 14:24:51 -0500 Subject: [openstack-helm] IRC Meeting Cancelled - June 02nd Message-ID: Hello everyone, The openstack-helm meeting this week, June 02nd, will be cancelled due to the PTG happening this week. We will continue regular meetings again next week. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Sun May 31 17:20:45 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sun, 31 May 2020 19:20:45 +0200 Subject: [neutron] Team meeting on Tuesday 02.06.2020 cancelled Message-ID: <20200531172045.nt3v5zwolxdepmv2@skaplons-mac> Hi, We have PTG sessions at the same time, so let's cancel the IRC meeting this week.
See you all at the PTG sessions :) -- Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Sun May 31 17:21:39 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sun, 31 May 2020 19:21:39 +0200 Subject: [neutron] CI meeting on Wednesday 03.06.2020 cancelled Message-ID: <20200531172139.p22dqppdcngvmrs2@skaplons-mac> Hi, It's PTG time, so let's cancel this week's CI meeting. -- Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Sun May 31 17:23:11 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sun, 31 May 2020 19:23:11 +0200 Subject: [neutron] Drivers meeting on Friday 5.06.2020 cancelled Message-ID: <20200531172311.m2ypbwo7mavns566@skaplons-mac> Hi, It's PTG time, so let's cancel this week's meeting. -- Slawek Kaplonski Senior software engineer Red Hat
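[Editor's note] The Cyborg/Neutron thread earlier in this digest describes a two-step workflow: create a port that carries a Cyborg device profile, then boot a server with that port. As a rough sketch, the two REST request bodies could look like the following. Note that the `device_profile` port attribute is only the extension *proposed* in the thread, not an existing Neutron field (today only the admin-only `binding:profile` exists), and all names and IDs are placeholders.

```python
# Sketch of the two request bodies in the proposed Cyborg/Neutron workflow.
# "device_profile" on the port is the proposed new extension discussed in
# the thread, not an existing Neutron attribute. All IDs are placeholders.

def build_port_request(network_id, device_profile):
    """Body for Neutron POST /v2.0/ports carrying the Cyborg device profile."""
    return {
        "port": {
            "network_id": network_id,
            "device_profile": device_profile,  # hypothetical new attribute
        }
    }

def build_server_request(name, flavor_ref, image_ref, port_id):
    """Body for Nova POST /servers booting from the pre-created port."""
    return {
        "server": {
            "name": name,
            "flavorRef": flavor_ref,
            "imageRef": image_ref,
            "networks": [{"port": port_id}],  # step 2: reuse the port
        }
    }

port_body = build_port_request("net-1234", "smartnic-dp")
server_body = build_server_request("vm1", "m1.small", "img-5678", "port-9abc")
print(port_body["port"]["device_profile"])   # → smartnic-dp
print(server_body["server"]["networks"][0])  # → {'port': 'port-9abc'}
```

In the "neutron does it" variant sean describes, the neutron server would resolve the device profile against Cyborg and attach the resulting request groups to the port's resource request, the same mechanism used for minimum-bandwidth QoS; in the alternative, nova would do that resolution itself.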