From amy at demarco.com Mon Mar 1 00:08:09 2021
From: amy at demarco.com (Amy Marrich)
Date: Sun, 28 Feb 2021 18:08:09 -0600
Subject: [openstack-community] Ovn chassis error possibly networking config error.
In-Reply-To: <952196218.884082.1614451027288@mail.yahoo.com>
References: <952196218.884082.1614451027288.ref@mail.yahoo.com> <952196218.884082.1614451027288@mail.yahoo.com>
Message-ID:

Deepesh,

Forwarding to the OpenStack Discuss mailing list.

Thanks,

Amy (spotz)

On Sat, Feb 27, 2021 at 12:42 PM Dees wrote:
> Hi All,
>
> I deployed all OpenStack services yesterday and everything seemed to be
> running, until I power cycled the OpenStack hosts today.
>
> I then deployed and configured a VM instance with networking, but
> ovn-chassis is reporting a "config-changed" error. As I am new to
> OpenStack (having last used it a number of years back), would you mind
> guiding me on where I should be looking? Any help will be greatly
> appreciated.
>
> nova-compute/0*  active  idle  1  10.141.14.62          Unit is ready
> ntp/0*           active  idle     10.141.14.62  123/udp chrony: Ready
> ovn-chassis/0*   error   idle     10.141.14.62          hook failed: "config-changed"
>
> I deployed the VM instance using the following commands:
>
> curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img | \
>   openstack image create --public --container-format bare --disk-format qcow2 \
>   --property architecture=x86_64 --property hw_disk_bus=virtio \
>   --property hw_vif_model=virtio "focal_x86_64"
>
> openstack flavor create --ram 512 --disk 4 m1.micro
>
> openstack network create Pub_Net --external --share --default \
>   --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1
>
> openstack subnet create Pub_Subnet \
>   --allocation-pool start=10.141.40.40,end=10.141.40.62 \
>   --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \
>   --network Pub_Net
>
> openstack network create Network1 --internal
> openstack subnet create Subnet1 \
>   --allocation-pool start=192.168.0.10,end=192.168.0.199 \
>   --subnet-range 192.168.0.0/24 \
>   --gateway 192.168.0.1 --dns-nameserver 10.0.0.3 \
>   --network Network1
>
> Kind regards,
> Deepesh
> _______________________________________________
> Community mailing list
> Community at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/community
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hberaud at redhat.com Mon Mar 1 06:33:24 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Mon, 1 Mar 2021 07:33:24 +0100
Subject: [release] Release countdown for week R-6 Mar 01 - Mar 05
Message-ID:

Development Focus
-----------------

Work on libraries should be wrapping up, in preparation for the various library-related deadlines coming up. Now is a good time to make decisions on deferring feature work to the next development cycle in order to be able to focus on finishing already-started feature work.

General Information
-------------------

We are now getting close to the end of the cycle, and will be gradually freezing feature work on the various deliverables that make up the OpenStack release. This coming week is the deadline for general libraries (except client libraries): their last feature release needs to happen before "Non-client library freeze" on 4 March, 2021. Only bugfix releases will be allowed beyond this point.
When requesting those library releases, you can also include the stable/wallaby branching request with the review (as an example, see the "branches" section here: https://opendev.org/openstack/releases/src/branch/master/deliverables/pike/os-brick.yaml#n2 In the next weeks we will have deadlines for: * Client libraries (think python-*client libraries), which need to have their last feature release before "Client library freeze" (11 March, 2021) * Deliverables following a cycle-with-rc model (that would be most services), which observe a Feature freeze on that same date, 11 March, 2021. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. As we are getting to the point of creating stable/wallaby branches, this would be a good point for teams to review membership in their wallaby-stable-maint groups. Once the stable/wallaby branches are cut for a repo, the ability to approve any necessary backports into those branches for wallaby will be limited to the members of that stable team. If there are any questions about stable policy or stable team membership, please reach out in the #openstack-stable channel. Upcoming Deadlines & Dates -------------------------- Cross-project events: - Non-client library freeze: 4 March, 2021 (R-6 week) - Client library freeze: 11 March, 2021 (R-5 week) - Wallaby-3 milestone: 11 March, 2021 (R-5 week) - Cycle Highlights Due: 11 March 2021 (R-5 week) - Wallaby final release: 14 April, 2021 Project-specific events: - Cinder Driver Features Declaration: 12 March, 2021 (R-5 week) Thanks for your attention -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Mon Mar 1 08:03:12 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Mon, 1 Mar 2021 08:03:12 +0000 Subject: [puppet-openstack] stop using the ${pyvers} variable In-Reply-To: References: <77a37088-9f7e-92b6-e196-784ff3168a0e@debian.org> <21a825a7-7976-0735-0962-9b85a3005215@debian.org> , Message-ID: Hello, I don't really mind removing it if what you are saying about Python 4 is true, I don't know so I will take your word for it. I agree with Takashi, that we can remove it but should do it consistenly and then be done with it, just stopping to use it because it takes literally five seconds more to fix a patch seems like something one can live with until then. Remember that we've for years been working on consistency, cleaning up and removing duplication, I wouldn't want to start introducing then after that. 
So yes, let's remove it but require everything up until that point to be consistent, maybe a short timespan to squeeze all those changes into Wallaby though. Is this a change you want to drive Thomas? Best regards and have a good start of the week everybody. Tobias ________________________________ From: Takashi Kajinami Sent: Sunday, February 28, 2021 3:12:42 PM To: Thomas Goirand Cc: OpenStack Discuss Subject: Re: [puppet-openstack] stop using the ${pyvers} variable On Sun, Feb 28, 2021 at 8:22 PM Thomas Goirand > wrote: On 2/28/21 12:10 PM, Takashi Kajinami wrote: > > On Sun, Feb 28, 2021 at 3:32 AM Thomas Goirand > >> wrote: > > Hi, > > On 2/27/21 3:52 PM, Takashi Kajinami wrote: > > I have posted a comment on the said patch but I prefer using pyvers in > > that specific patch because; > > - The change seems to be a backport candidate and using pyvers > helps us > > backport the change > > to older branches like Train which still supports python 2 IIRC. > > Even Rocky is already using python3 in Debian/Ubuntu. The last distro > using the Python 2 version would be Stretch (which is long EOLed) and > Bionic (at this time, people should be moving to Focal, no ?, which IMO > are not targets for backports. > > Therefore, for this specific patch, even if you want to do a backport, > it doesn't make sense. > > Are you planning to do such a backport for the RPM world? > > We still have queens open for puppet-openstack modules. > IIRC rdo rocky is based on CentOS7 and Python2. > Also, I don't really like to see inconsistent implementation caused by > backport > knowing that we don't support python3 in these branches. > Anyway we can consider that when we actually backport the change. I'm a bit surprised that you still care about such an old release as Queens. Is this the release shipped with CentOS 7? RDO Queens anr RDO Rocky are based on CentOS 7. RDO Train supports both CentOS7 and CentOS8 IIRC. In any ways, thanks for letting me know, I have to admit I don't know much about the RPM side of things. In such case, I'm ok to keep the ${pyvers} variable for the CentOS case for a bit longer then, but can we agree when we stop using it? Also IMO, forcing it for Debian/Ubuntu doesn't make sense anymore. IMO we can think about backport separately(we can make any required changes in backport only) so we can get rid of pyvers in master for both CentOS and Ubuntu/Debian. However I prefer to deal with that removal separately and consistently, so that we won't create inconsistent implementation where some relies on pyvers and the others doesn't rely on pyvers. Thanks everyone for participating in this thread, Cheers, Thomas Goirand (zigo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Mar 1 08:53:02 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 1 Mar 2021 08:53:02 +0000 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: On Fri, 26 Feb 2021 at 17:08, Michal Arbet wrote: > > Hi, > > I found that keepalived VIP switch cause TCP connections that refuse to die on host where VIP was before it was switched. > > I filled a bug here -> https://bugs.launchpad.net/kolla-ansible/+bug/1917068 > Fixed here -> https://review.opendev.org/c/openstack/kolla-ansible/+/777772 > Video presentation of bug here -> https://download.kevko.ultimum.cloud/video_debug.mp4 > > I was just curious and wanted to ask here in openstack-discuss : > > If someone already seen this issue in past ? 
> If yes, do you tweak net.ipv4.tcp_retries2 kernel parameter ? > If no, how did you solve this ? Hi Michal, thanks for the investigation here. There is a nice tool, that I found far too late, that helps to help answer questions like this: https://codesearch.opendev.org/ > > Thank you, > Michal Arbet ( kevko ) > From thierry at openstack.org Mon Mar 1 09:11:09 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 1 Mar 2021 10:11:09 +0100 Subject: Moving ara code review to GitHub In-Reply-To: References: Message-ID: <8ff9bb87-45e6-4a16-1d51-02c161b9c21a@openstack.org> Laurent Dumont wrote: > [...] > I know that when I merged something for ARA, I ended up appreciating > Gerrit but there is a definite learning curve that I don't feel was > necessary to the process. That's a great point. I personally think Gerrit is more efficient, but it's definitely different from the PR-based approach which the largest public code forges use (GitHub, Gitlab). Learning Gerrit is totally worth it if you intend to contribute regularly. It's harder to justify for a drive-by contribution, so you might miss those because the drive-by contributor sometimes just won't go through the hassle. So for a project like ARA which is mostly feature-complete, is not super busy and would at this point most likely attract drive-by contributions fixing corner case bugs and adding corner case use cases, I certainly understand your choice. -- Thierry From marios at redhat.com Mon Mar 1 09:46:44 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 1 Mar 2021 11:46:44 +0200 Subject: [TripleO] Moving stable/rocky for *-tripleo-* repos to End of Life OK? In-Reply-To: References: Message-ID: On Fri, Feb 12, 2021 at 5:38 PM Marios Andreou wrote: > > > On Fri, Feb 5, 2021 at 4:40 PM Marios Andreou wrote: > >> Hello all, >> >> it's been ~ 2 months now since my initial mail about $subject [1] and >> just under a month since my last bump on the thread [2] and I haven't heard >> any objections so far. >> >> So I think it's now appropriate to move forward with [3] which tags the >> latest commits to the stable/rocky branch of all tripleo-repos [4] as >> 'rock-eol' (except archived things like instack/tripleo-ui). >> >> Once it merges we will no longer be able to land anything into >> stable/rocky for all tripleo repos and the stable/rocky branch will be >> deleted. >> >> So, last chance! If you object please go and -1 the patch at [3] and/or >> reply here >> >> (explicitly added some folks into cc for attention please) Thanks to Elod, I just updated/posted v2 of the proposal at https://review.opendev.org/c/openstack/releases/+/774244 Harald, Slawek, Tom, o/ please check the comments at https://review.opendev.org/c/openstack/releases/+/774244/1/deliverables/rocky/tripleo-heat-templates.yaml#62 . The question is - do you really need this on stable/rocky in and of itself, or, rather because that branch is still active, and we needed it on the previous, i.e. queens? Basically sanity check again so there is no confusion later ;) do you folks object to us declaring stable/rocky as EOL? thank you! regards, marios > > > bump - there has been some discussion on the proposal at > https://review.opendev.org/c/openstack/releases/+/774244 which is now > resolved. 
> > I just removed my blocking workflow -1 at releases/+/774244 so really > really last chance now ;) > > regards, marios > > > > >> thanks, marios >> >> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019338.html >> [2] >> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019860.html >> [3] https://review.opendev.org/c/openstack/releases/+/774244 >> [4] https://releases.openstack.org/teams/tripleo.html >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.arbet at ultimum.io Mon Mar 1 10:03:13 2021 From: michal.arbet at ultimum.io (Michal Arbet) Date: Mon, 1 Mar 2021 11:03:13 +0100 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: Hi, I really like the tool, thank you. also confirms my idea as this kernel option is also used by other projects where they explicitly mention case with keepalived/VIP switch. Now I definitively think this should be handled also in kolla-ansible. Thanks, Michal Arbet ( kevko ) Dne po 1. 3. 2021 9:53 uživatel Mark Goddard napsal: > On Fri, 26 Feb 2021 at 17:08, Michal Arbet > wrote: > > > > Hi, > > > > I found that keepalived VIP switch cause TCP connections that refuse > to die on host where VIP was before it was switched. > > > > I filled a bug here -> > https://bugs.launchpad.net/kolla-ansible/+bug/1917068 > > Fixed here -> > https://review.opendev.org/c/openstack/kolla-ansible/+/777772 > > Video presentation of bug here -> > https://download.kevko.ultimum.cloud/video_debug.mp4 > > > > I was just curious and wanted to ask here in openstack-discuss : > > > > If someone already seen this issue in past ? > > If yes, do you tweak net.ipv4.tcp_retries2 kernel parameter ? > > If no, how did you solve this ? > > Hi Michal, thanks for the investigation here. There is a nice tool, > that I found far too late, that helps to help answer questions like > this: https://codesearch.opendev.org/ > > > > > Thank you, > > Michal Arbet ( kevko ) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Mon Mar 1 10:39:15 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 1 Mar 2021 10:39:15 +0000 Subject: [openstack-community] Ovn chassis error possibly networking config error. In-Reply-To: References: <952196218.884082.1614451027288.ref@mail.yahoo.com> <952196218.884082.1614451027288@mail.yahoo.com> Message-ID: Hi, Few questions inline On Mon, Mar 1, 2021 at 12:09 AM Amy Marrich wrote: > > Deepesh, > > Forwarding to the OpenStack Discuss mailing list. > > Thanks, > > Amy (spotz) > > On Sat, Feb 27, 2021 at 12:42 PM Dees wrote: >> >> Hi All, >> >> Deployed openstack all service yesterday and seemed to be running since power cycling the openstack today. >> What did you use to deploy OpenStack ? Based on the command below I don't recognize that output. >> Then deployed and configured VM instance with networking but config onv-chassis is reporting error, as I am new to openstack (having used it number of years back) mind guiding where I should be looking any help will be greatly appreciated. >> >> >> >> nova-compute/0* active idle 1 10.141.14.62 Unit is ready >> ntp/0* active idle 10.141.14.62 123/udp chrony: Ready >> ovn-chassis/0* error idle 10.141.14.62 hook failed: "config-changed" >> I do not recognize this output or that error in particular, but I assume that ovn-chassis is the name of the process of the ovn-controller running on the node ? 
If so, the configuration of ovn-controller should be present in the local OVSDB instance, can you please paste the output of the following command: $ sudo ovs-vsctl list Open_VSwitch . Also, do you have any logs related to that process ? >> >> Deployed VM instance using the following command line. >> >> >> curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img | \ >> openstack image create --public --container-format bare --disk-format qcow2 \ >> --property architecture=x86_64 --property hw_disk_bus=virtio \ >> --property hw_vif_model=virtio "focal_x86_64" >> >> openstack flavor create --ram 512 --disk 4 m1.micro >> >> openstack network create Pub_Net --external --share --default \ >> --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1 >> >> openstack subnet create Pub_Subnet --allocation-pool start=10.141.40.40,end=10.141.40.62 \ >> --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \ >> --network Pub_Net >> >> openstack network create Network1 --internal >> openstack subnet create Subnet1 \ >> --allocation-pool start=192.168.0.10,end=192.168.0.199 \ >> --subnet-range 192.168.0.0/24 \ >> --gateway 192.168.0.1 --dns-nameserver 10.0.0.3 \ >> --network Network1 >> >> Kind regards, >> Deepesh >> _______________________________________________ >> Community mailing list >> Community at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/community From bcafarel at redhat.com Mon Mar 1 11:58:18 2021 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 1 Mar 2021 12:58:18 +0100 Subject: [neutron] Bug deputy report (week starting on 2021-02-22) Message-ID: Hello neutrinos, this is our first new bug deputy rotation for 2021, overall a quiet week with most bugs already fixed or good progress. 
DVR folks may be interested in the ongoing discussion on the single High bug; I also left one bug about the OVS agent main loop in the Opinion section.

Critical
* neutron.tests.unit.common.test_utils.TestThrottler test_throttler is failing - https://bugs.launchpad.net/neutron/+bug/1916572
  Patch was quickly merged: https://review.opendev.org/c/openstack/neutron/+/777072

High
* [dvr] bound port permanent arp entries never deleted - https://bugs.launchpad.net/neutron/+bug/1916761
  Introduced with https://review.opendev.org/q/I538aa6d68fbb5ff8431f82ba76601ee34c1bb181
  Lengthy discussion in the bug itself and suggested fix https://review.opendev.org/c/openstack/neutron/+/777616

Medium
* [OVN][QOS] OVN DB QoS rule is not removed when a FIP is dissasociated - https://bugs.launchpad.net/neutron/+bug/1916470
  Patch by ralonsoh merged: https://review.opendev.org/c/openstack/neutron/+/776916
* A privsep daemon spawned by neutron-openvswitch-agent hangs when debug logging is enabled - https://bugs.launchpad.net/neutron/+bug/1896734
  Actually reported earlier on charm and oslo, ralonsoh looking into it from the neutron side
* StaleDataError: DELETE statement on table 'standardattributes' expected to delete 2 row(s); 1 were matched - https://bugs.launchpad.net/neutron/+bug/1916889
  Spotted by Liu while looking at the dvr bug, patch sent to neutron-lib: https://review.opendev.org/c/openstack/neutron-lib/+/777581

Low
* ovn-octavia-provider can attempt to write protocol=None to OVSDB - https://bugs.launchpad.net/neutron/+bug/1916646
  This appears in some functional test results, otherwiseguy sent https://review.opendev.org/c/openstack/ovn-octavia-provider/+/777201

Opinion
* use deepcopy in function rpc_loop of ovs-agent - https://bugs.launchpad.net/neutron/+bug/1916761
  My hunch is that we are fine here, but please chime in with other opinions

Passing the baton to our PTL for next week!

--
Bernard Cafarelli
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ildiko.vancsa at gmail.com Mon Mar 1 12:16:14 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Mon, 1 Mar 2021 13:16:14 +0100
Subject: Input for ETSI Hardware Platform Capability Registry
Message-ID: <9A1BC726-FD91-4A37-B23E-8F146E8A7DE7@gmail.com>

Hi OpenStack Community,

I’m reaching out to you about a call for action that I received from the ETSI NFV group[1]. The group is currently working on a Hardware Platform Capability Registry[2], which is designed to contain information about hardware capabilities that a cloud platform offers to Virtual Network Functions (VNFs). They will use the entries of the registry as part of VNF Descriptors (VNFD).

They are looking for information about hardware capabilities exposed through OpenStack in the following categories: CPU, memory, storage, network and logical node. They are looking for basic functions as well as information about accelerators. You can find further information about the registry on their wiki page[2].

I’m looking for input to submit to the registry, or feedback on the approach, to help this work item at ETSI. Please reach out to me or reply to this thread if your project exposes any information about the underlying hardware which should be added to the above registry.

Please let me know if you have any questions.
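As one concrete example of the kind of capability information OpenStack already exposes (a sketch only; it assumes the osc-placement client plugin is installed and Placement API microversion 1.6 or later), the Placement service publishes standardized hardware traits coming from the os-traits library:

  $ openstack --os-placement-api-version 1.6 trait list

which returns standardized trait names such as HW_CPU_X86_AVX2, HW_CPU_X86_VMX, HW_NIC_SRIOV or HW_GPU_API_VULKAN (the examples here are illustrative, not an exhaustive list). Traits like these could be one starting point for mapping OpenStack capabilities onto the registry categories above.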
Thanks and Best Regards, Ildikó [1] https://www.etsi.org/technologies/nfv [2] https://nfvwiki.etsi.org/index.php?title=Hardware_Platform_Capability_Registry ——— Ildikó Váncsa Ecosystem Technical Lead Open Infrastructure Foundation From tpb at dyncloud.net Mon Mar 1 12:21:51 2021 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 1 Mar 2021 07:21:51 -0500 Subject: [TripleO] Moving stable/rocky for *-tripleo-* repos to End of Life OK? In-Reply-To: References: Message-ID: <20210301122151.3o5cu4nloststneb@barron.net> On 01/03/21 11:46 +0200, Marios Andreou wrote: >On Fri, Feb 12, 2021 at 5:38 PM Marios Andreou wrote: > >> >> >> On Fri, Feb 5, 2021 at 4:40 PM Marios Andreou wrote: >> >>> Hello all, >>> >>> it's been ~ 2 months now since my initial mail about $subject [1] and >>> just under a month since my last bump on the thread [2] and I haven't heard >>> any objections so far. >>> >>> So I think it's now appropriate to move forward with [3] which tags the >>> latest commits to the stable/rocky branch of all tripleo-repos [4] as >>> 'rock-eol' (except archived things like instack/tripleo-ui). >>> >>> Once it merges we will no longer be able to land anything into >>> stable/rocky for all tripleo repos and the stable/rocky branch will be >>> deleted. >>> >>> So, last chance! If you object please go and -1 the patch at [3] and/or >>> reply here >>> >>> > >(explicitly added some folks into cc for attention please) > >Thanks to Elod, I just updated/posted v2 of the proposal at >https://review.opendev.org/c/openstack/releases/+/774244 > >Harald, Slawek, Tom, o/ please check the comments at >https://review.opendev.org/c/openstack/releases/+/774244/1/deliverables/rocky/tripleo-heat-templates.yaml#62 >. > >The question is - do you really need this on stable/rocky in and of itself, >or, rather because that branch is still active, and we needed it on the >previous, i.e. queens? > >Basically sanity check again so there is no confusion later ;) do you folks >object to us declaring stable/rocky as EOL? I have no objection to declaring stable/rocky as EOL - the change I proposed there was part of a series of backports from master back to stable/queens. I posted it to stable/rocky along the way since that branch was not EOL and I didn't want to leave a gap in the series of backports. > >thank you! > >regards, marios > > > > >> >> >> bump - there has been some discussion on the proposal at >> https://review.opendev.org/c/openstack/releases/+/774244 which is now >> resolved. >> >> I just removed my blocking workflow -1 at releases/+/774244 so really >> really last chance now ;) >> >> regards, marios >> >> >> >> >>> thanks, marios >>> >>> >>> [1] >>> http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019338.html >>> [2] >>> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019860.html >>> [3] https://review.opendev.org/c/openstack/releases/+/774244 >>> [4] https://releases.openstack.org/teams/tripleo.html >>> >> From marios at redhat.com Mon Mar 1 12:35:32 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 1 Mar 2021 14:35:32 +0200 Subject: [TripleO] Moving stable/rocky for *-tripleo-* repos to End of Life OK? 
In-Reply-To: <20210301122151.3o5cu4nloststneb@barron.net> References: <20210301122151.3o5cu4nloststneb@barron.net> Message-ID: On Mon, Mar 1, 2021 at 2:22 PM Tom Barron wrote: > On 01/03/21 11:46 +0200, Marios Andreou wrote: > >On Fri, Feb 12, 2021 at 5:38 PM Marios Andreou wrote: > > > >> > >> > >> On Fri, Feb 5, 2021 at 4:40 PM Marios Andreou > wrote: > >> > >>> Hello all, > >>> > >>> it's been ~ 2 months now since my initial mail about $subject [1] and > >>> just under a month since my last bump on the thread [2] and I haven't > heard > >>> any objections so far. > >>> > >>> So I think it's now appropriate to move forward with [3] which tags the > >>> latest commits to the stable/rocky branch of all tripleo-repos [4] as > >>> 'rock-eol' (except archived things like instack/tripleo-ui). > >>> > >>> Once it merges we will no longer be able to land anything into > >>> stable/rocky for all tripleo repos and the stable/rocky branch will be > >>> deleted. > >>> > >>> So, last chance! If you object please go and -1 the patch at [3] and/or > >>> reply here > >>> > >>> > > > >(explicitly added some folks into cc for attention please) > > > >Thanks to Elod, I just updated/posted v2 of the proposal at > >https://review.opendev.org/c/openstack/releases/+/774244 > > > >Harald, Slawek, Tom, o/ please check the comments at > > > https://review.opendev.org/c/openstack/releases/+/774244/1/deliverables/rocky/tripleo-heat-templates.yaml#62 > >. > > > >The question is - do you really need this on stable/rocky in and of > itself, > >or, rather because that branch is still active, and we needed it on the > >previous, i.e. queens? > > > >Basically sanity check again so there is no confusion later ;) do you > folks > >object to us declaring stable/rocky as EOL? > > I have no objection to declaring stable/rocky as EOL - the change I > proposed there was part of a series of backports from master back to > stable/queens. I posted it to stable/rocky along the way since that > branch was not EOL and I didn't want to leave a gap in the series of > backports. > > ACK thanks for confirming Tom > > > >thank you! > > > >regards, marios > > > > > > > > > >> > >> > >> bump - there has been some discussion on the proposal at > >> https://review.opendev.org/c/openstack/releases/+/774244 which is now > >> resolved. > >> > >> I just removed my blocking workflow -1 at releases/+/774244 so really > >> really last chance now ;) > >> > >> regards, marios > >> > >> > >> > >> > >>> thanks, marios > >>> > >>> > >>> [1] > >>> > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019338.html > >>> [2] > >>> > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019860.html > >>> [3] https://review.opendev.org/c/openstack/releases/+/774244 > >>> [4] https://releases.openstack.org/teams/tripleo.html > >>> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Mar 1 13:08:51 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 1 Mar 2021 14:08:51 +0100 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: On Mon, Mar 1, 2021 at 11:06 AM Michal Arbet wrote: > I really like the tool, thank you. also confirms my idea as this kernel option is also used by other projects where they explicitly mention case with keepalived/VIP switch. FWIW, I can see only StarlingX and Airship, none of general-purpose tools seem to mention customising this variable. 
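For reference, the host-level tuning being discussed looks roughly like this (the values are illustrative only, not a recommendation):

  # check the current value; the kernel default of 15 keeps a dead
  # connection around for roughly 15 to 30 minutes
  sysctl net.ipv4.tcp_retries2
  # persist a lower value so connections still pointing at the old VIP holder die faster
  echo "net.ipv4.tcp_retries2 = 8" | sudo tee /etc/sysctl.d/99-tcp-retries2.conf
  sudo sysctl --system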
-yoctozepto From balazs.gibizer at est.tech Mon Mar 1 13:31:20 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Mon, 01 Mar 2021 14:31:20 +0100 Subject: [nova][placement] Wallaby release Message-ID: <8KLAPQ.YM5K5P44TRVX2@est.tech> Hi, We are getting close to the Wallaby release. So I create a tracking etherpad[1] with the schedule and TODOs. One thing that I want to highlight is that we will hit Feature Freeze on 11th of March. As the timeframe between FF and RC1 is short I'm not plannig with FFEs. Patches that are approved before 11 March EOB can be rechecked or rebased if needed and then re-approved. If you have a patch that is really close but not approved before the deadline, and you think there are two cores that willing to review it before RC1, then please send a mail to the ML with [nova][FFE] subject prefix not later than 16th of March EOB. Cheers, gibi [1] https://etherpad.opendev.org/p/nova-wallaby-rc-potential From arne.wiebalck at cern.ch Mon Mar 1 13:41:17 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 1 Mar 2021 14:41:17 +0100 Subject: [baremetal-sig][ironic] Tue Mar 9, 2021, 2pm UTC: PTG input and Ironic Prometheus Exporter Message-ID: <9784dcfd-5cf3-5d0c-757b-858ba0481382@cern.ch> Dear all, The Bare Metal SIG will meet next week Tue Mar 9, 2021, at 2pm UTC on zoom. There will be two main points on the agenda: - A "topic-of-the-day" presentation/demo by Iury Gregory (iurygregory) on 'The Ironic Prometheus Exporter' and - Gathering (operator) input for the Ironic upstream team for the upcoming PTG: what are the pain points, what features should be added? The second item is also why this mail comes a week earlier than usual. Think about it, bring your items and/or add them to https://etherpad.opendev.or g/p/bare-metal-sig Everyone is welcome! Cheers, Arne From iurygregory at gmail.com Mon Mar 1 13:43:23 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 1 Mar 2021 14:43:23 +0100 Subject: [ironic] replacing the review priority list on the etherpad with hashtag Message-ID: Hello Ironicers! If you were present in the last weekly meeting, you are probably aware that we are using hashtags to track the patches that are in the priority list. If you are part of the ironic core group you can edit hashtags in any patch, if you are not core you can only add hashtags to the patches you are the Owner. In the gerrit UI you should be able to see Hashtags fields and a button ADD HASHTAG, the hashtag we are using is *ironic-week-prio* , so, if you have a patch that is in a good shape and ready for review you can add to the hashtag (just click in the ADD HASHTAG button, write the hashtag, click in SAVE). I will push a patch adding this information to our docs, I'm willing to do a session (or maybe two since we have people in different timezones) to explain. I will create a doodle and add to this thread. Thank you! -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... 
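(A quick pointer for reviewers on the hashtag workflow described above: assuming the standard Gerrit search syntax available on review.opendev.org, all open patches carrying the tag can be listed with a query such as https://review.opendev.org/q/hashtag:ironic-week-prio+status:open, i.e. "hashtag:ironic-week-prio status:open" typed into the search box.)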
URL: From aschultz at redhat.com Mon Mar 1 15:49:53 2021 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 1 Mar 2021 08:49:53 -0700 Subject: [puppet-openstack] stop using the ${pyvers} variable In-Reply-To: <77a37088-9f7e-92b6-e196-784ff3168a0e@debian.org> References: <77a37088-9f7e-92b6-e196-784ff3168a0e@debian.org> Message-ID: On Sat, Feb 27, 2021 at 2:29 AM Thomas Goirand wrote: > Hi, > > Using the ${pyvers} so we can switch be between Python versions made > sense 1 or 2 years ago. However, I'm in the opinion that we should stop > using that, and switch to using python3 everywhere directly whenever > possible. Though Tobias seems to not agree (see [1]), so I'm raising the > topic in the list so we can discuss this together. > > It would only make sense to remove it going forward. I would not support dropping it for any stable branches. It's something we could work on in X unless someone wants to drive it for W. > Your thoughts? > Cheers, > > Thomas Goirand (zigo) > > [1] https://review.opendev.org/c/openstack/puppet-swift/+/777564 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.arbet at ultimum.io Mon Mar 1 16:02:19 2021 From: michal.arbet at ultimum.io (Michal Arbet) Date: Mon, 1 Mar 2021 17:02:19 +0100 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: Well, but that doesn't mean it's right that they don't have it configured. If you google "net.ipv4.tcp_retries2 keepalive" and will read results, you will see that this option is widely used I think we have to discuss option value (not fix itself)...to find some golden middle .. https://www.programmersought.com/article/724162740/ https://knowledge.broadcom.com/external/article/142410/tuning-tcp-keepalive-for-inprogress-task.html https://www.ibm.com/support/knowledgecenter/ko/SSEPGG_9.7.0/com.ibm.db2.luw.admin.ha.doc/doc/t0058764.html?view=embed https://www.suse.com/support/kb/doc/?id=000019293 https://programmer.group/kubeadm-build-highly-available-kubernetes-1.15.1.html po 1. 3. 2021 v 14:09 odesílatel Radosław Piliszek < radoslaw.piliszek at gmail.com> napsal: > On Mon, Mar 1, 2021 at 11:06 AM Michal Arbet > wrote: > > I really like the tool, thank you. also confirms my idea as this kernel > option is also used by other projects where they explicitly mention case > with keepalived/VIP switch. > > FWIW, I can see only StarlingX and Airship, none of general-purpose > tools seem to mention customising this variable. > > -yoctozepto > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Mon Mar 1 16:07:26 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 1 Mar 2021 16:07:26 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack Message-ID: Hi all, As part of the Victoria PTG [0] the Neutron community agreed upon switching the default backend in Devstack to OVN. A lot of work has been done since, from porting the OVN devstack module to the DevStack tree, refactoring the DevStack module to install OVN from distro packages, implementing features to close the parity gap with ML2/OVS, fixing issues with tests and distros, etc... We are now very close to being able to make the switch and we've thought about sending this email to the broader community to raise awareness about this change as well as bring more attention to the patches that are current on review. 
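For anyone who wants to try the ML2/OVN backend on their own DevStack before the default changes, a minimal local.conf sketch looks like the following (the variable and service names assume the in-tree devstack OVN module as documented in Neutron's OVN devstack guide, so please double-check them against your branch):

  [[local|localrc]]
  Q_AGENT=ovn
  Q_ML2_PLUGIN_MECHANISM_DRIVERS=ovn,logger
  Q_ML2_TENANT_NETWORK_TYPE=geneve
  enable_service ovn-northd ovn-controller q-ovn-metadata-agent
  disable_service q-agt q-l3 q-dhcp q-meta
  # optional: build OVN from source instead of using distro packages
  # OVN_BUILD_FROM_SOURCE=True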
Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is discontinued and/or no longer supported. The ML2/OVS driver is still going to be developed and maintained by the upstream Neutron community.

Below is a per-project explanation with relevant links and issues showing where we stand with this work right now:

* Keystone:

Everything should be good for Keystone, the gate is happy with the changes. Here is the test patch:
https://review.opendev.org/c/openstack/keystone/+/777963

* Glance:

Everything should be good for Glance, the gate is happy with the changes. Here is the test patch:
https://review.opendev.org/c/openstack/glance/+/748390

* Swift:

Everything should be good for Swift, the gate is happy with the changes. Here is the test patch:
https://review.opendev.org/c/openstack/swift/+/748403

* Ironic:

Since chainloading iPXE by the OVN built-in DHCP server is work in progress, we've changed most of the Ironic jobs to explicitly enable ML2/OVS and everything is merged, so we should be good for Ironic too. Here is the test patch:
https://review.opendev.org/c/openstack/ironic/+/748405

* Cinder:

Cinder is almost complete. There's one test failure in the "tempest-slow-py3" job, on the "test_port_security_macspoofing_port" test.

This failure is due to a bug in core OVN [1]. This bug has already been fixed upstream [2] and the fix has been backported down to the branch-20.03 [3] of the OVN project. However, since we install OVN from packages, we are currently waiting for this fix to be included in the packages for Ubuntu Focal (it's based on OVN 20.03). I already contacted the package maintainer, who has been very supportive of this work and will work on the package update, but he maintains a handful of backports in that package which are not yet included in OVN 20.03 upstream, and he's now working with the core OVN community [4] to include them in the branch first and then create a new package. Hopefully this will happen soon.

But for now we have a few options for moving on with this issue:

1- Wait for the new package version
2- Mark the test as unstable until we get the new package version
3- Compile OVN from source instead of installing it from packages (OVN_BUILD_FROM_SOURCE=True in local.conf)

What do you think about it?

Here is the test patch for Cinder:
https://review.opendev.org/c/openstack/cinder/+/748227

* Nova:

There are a few patches waiting for review for Nova, which are:

1- Adapting the live migration scripts to work with ML2/OVN: Basically the scripts were trying to stop the Neutron agent (q-agt) process, which is not part of an ML2/OVN deployment. The patch changes the code to check if that system unit exists before trying to stop it.

Patch: https://review.opendev.org/c/openstack/nova/+/776419

2- Explicitly set the grenade job to ML2/OVS: This is a temporary change which can be removed one release cycle after we switch DevStack to ML2/OVN. Grenade will test updating from the released version to the master branch but, since the default of the released version is not ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job is not supported.

Patch: https://review.opendev.org/c/openstack/nova/+/776934

3- Explicitly set the nova-next job to ML2/OVS: This job uses the QoS minimum bandwidth feature, which is not yet supported by ML2/OVN [5][6], therefore we are temporarily enabling ML2/OVS for this job until that feature lands in core OVN.
Patch: https://review.opendev.org/c/openstack/nova/+/776944 I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these changes and he suggested keeping all the Nova jobs on ML2/OVS for now because he feels like a change in the default network driver a few weeks prior to the upstream code freeze can be concerning. We do not know yet precisely when we are changing the default due to the current patches we need to get merged but, if this is a shared feeling among the Nova community I can work on enabling ML2/OVS on all jobs in Nova until we get a new release in OpenStack. Here's the test patch for Nova: https://review.opendev.org/c/openstack/nova/+/776945 * DevStack: And this is the final patch that will make this all happen: https://review.opendev.org/c/openstack/devstack/+/735097 It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been a long and bumpy road to get to this point and I would like to say thanks to everyone involved so far and everyone that read the whole email, please let me know your thoughts. [0] https://etherpad.opendev.org/p/neutron-victoria-ptg [1] https://bugs.launchpad.net/tempest/+bug/1728886 [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html Cheers, Lucas From kennelson11 at gmail.com Mon Mar 1 17:44:16 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 1 Mar 2021 09:44:16 -0800 Subject: [all] April 2021 PTG Dates & Registration Message-ID: Hello Everyone! I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. Can't wait to see you all there! -the Kendalls (diablo_rojo & wendallkaters) [1] https://www.openstack.org/ptg/ [2] https://april2021-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon Mar 1 18:07:35 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 1 Mar 2021 19:07:35 +0100 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: Hi Kendall & Kendall, While registering, I noticed the event date in Eventbrite is set in a way that the calendar invite is for a one-hour event on April 19. Could this please be corrected? Many thanks, Pierre Riteau (priteau) On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: > > Hello Everyone! > > I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! > > PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! 
> > Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. > > Can't wait to see you all there! > > -the Kendalls (diablo_rojo & wendallkaters) > > [1] https://www.openstack.org/ptg/ > [2] https://april2021-ptg.eventbrite.com From kendall at openstack.org Mon Mar 1 18:08:01 2021 From: kendall at openstack.org (Kendall Waters) Date: Mon, 1 Mar 2021 12:08:01 -0600 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: <592A8D48-531B-421F-9628-EA84B3581643@openstack.org> Oops! That could be my mistake. Thanks for flagging. I will take a look at it. Cheers, Kendall Kendall Waters Perez Marketing & Events Coordinator Open Infrastructure Foundation > On Mar 1, 2021, at 12:07 PM, Pierre Riteau wrote: > > Hi Kendall & Kendall, > > While registering, I noticed the event date in Eventbrite is set in a > way that the calendar invite is for a one-hour event on April 19. > Could this please be corrected? > > Many thanks, > Pierre Riteau (priteau) > > On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: >> >> Hello Everyone! >> >> I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! >> >> PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! >> >> Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. >> >> Can't wait to see you all there! >> >> -the Kendalls (diablo_rojo & wendallkaters) >> >> [1] https://www.openstack.org/ptg/ >> [2] https://april2021-ptg.eventbrite.com From smooney at redhat.com Mon Mar 1 18:10:54 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 01 Mar 2021 18:10:54 +0000 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: <458239bc35bbc1471a683a2d503beb744d8a1a97.camel@redhat.com> On Mon, 2021-03-01 at 17:02 +0100, Michal Arbet wrote: > Well, but that doesn't mean it's right that they don't have it configured. > If you google "net.ipv4.tcp_retries2 keepalive" and will read results, you > will see that this option is widely used this is something that operators can fix themselve externally however. im not against haveign kolla or other tools be able to configur it automticlly persay but its not kolla-ansibles job to configure every possible tuneing. this one might make sense to do by default or optionally but host config is largely out of scope of kolla ansible. in ooo where that is ment to handel all host config as well as openstack configuration or other tools where that is expliclty in socpe may also want to wtwee it but likely it shoudl be configurable. > > I think we have to discuss option value (not fix itself)...to find some > golden middle .. > > https://www.programmersought.com/article/724162740/ > https://knowledge.broadcom.com/external/article/142410/tuning-tcp-keepalive-for-inprogress-task.html > https://www.ibm.com/support/knowledgecenter/ko/SSEPGG_9.7.0/com.ibm.db2.luw.admin.ha.doc/doc/t0058764.html?view=embed > https://www.suse.com/support/kb/doc/?id=000019293 > https://programmer.group/kubeadm-build-highly-available-kubernetes-1.15.1.html > > po 1. 3. 
2021 v 14:09 odesílatel Radosław Piliszek < > radoslaw.piliszek at gmail.com> napsal: > > > On Mon, Mar 1, 2021 at 11:06 AM Michal Arbet > > wrote: > > > I really like the tool, thank you. also confirms my idea as this kernel > > option is also used by other projects where they explicitly mention case > > with keepalived/VIP switch. > > > > FWIW, I can see only StarlingX and Airship, none of general-purpose > > tools seem to mention customising this variable. > > > > -yoctozepto > > From kendall at openstack.org Mon Mar 1 18:10:27 2021 From: kendall at openstack.org (Kendall Waters) Date: Mon, 1 Mar 2021 12:10:27 -0600 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: Hi Pierre, Do you mind registering again and seeing if the problem is fixed? And if not, do you mind sending a screenshot so I can see what is going on? Thanks! Kendall Kendall Waters Perez Marketing & Events Coordinator Open Infrastructure Foundation > On Mar 1, 2021, at 12:07 PM, Pierre Riteau wrote: > > Hi Kendall & Kendall, > > While registering, I noticed the event date in Eventbrite is set in a > way that the calendar invite is for a one-hour event on April 19. > Could this please be corrected? > > Many thanks, > Pierre Riteau (priteau) > > On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: >> >> Hello Everyone! >> >> I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! >> >> PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! >> >> Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. >> >> Can't wait to see you all there! >> >> -the Kendalls (diablo_rojo & wendallkaters) >> >> [1] https://www.openstack.org/ptg/ >> [2] https://april2021-ptg.eventbrite.com From smooney at redhat.com Mon Mar 1 18:24:21 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 01 Mar 2021 18:24:21 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > Hi all, > > As part of the Victoria PTG [0] the Neutron community agreed upon > switching the default backend in Devstack to OVN. A lot of work has > been done since, from porting the OVN devstack module to the DevStack > tree, refactoring the DevStack module to install OVN from distro > packages, implementing features to close the parity gap with ML2/OVS, > fixing issues with tests and distros, etc... > > We are now very close to being able to make the switch and we've > thought about sending this email to the broader community to raise > awareness about this change as well as bring more attention to the > patches that are current on review. > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > discontinued and/or not supported anymore. The ML2/OVS driver is still > going to be developed and maintained by the upstream Neutron > community. can we ensure that this does not happen until the xena release. in generall i think its ok to change the default but not this late in the cycle. 
i would also like to ensure we keep at least one non ovn based multi node job in nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. right now the event/neutorn interaction is not the same during move operations. > > Below is a e per project explanation with relevant links and issues of > where we stand with this work right now: > > * Keystone: > > Everything should be good for Keystone, the gate is happy with the > changes. Here is the test patch: > https://review.opendev.org/c/openstack/keystone/+/777963 > > * Glance: > > Everything should be good for Glace, the gate is happy with the > changes. Here is the test patch: > https://review.opendev.org/c/openstack/glance/+/748390 > > * Swift: > > Everything should be good for Swift, the gate is happy with the > changes. Here is the test patch: > https://review.opendev.org/c/openstack/swift/+/748403 > > * Ironic: > > Since chainloading iPXE by the OVN built-in DHCP server is work in > progress, we've changed most of the Ironic jobs to explicitly enable > ML2/OVS and everything is merged, so we should be good for Ironic too. > Here is the test patch: > https://review.opendev.org/c/openstack/ironic/+/748405 > > * Cinder: > > Cinder is almost complete. There's one test failure in the > "tempest-slow-py3" job run on the > "test_port_security_macspoofing_port" test. > > This failure is due to a bug in core OVN [1]. This bug has already > been fixed upstream [2] and the fix has been backported down to the > branch-20.03 [3] of the OVN project. However, since we install OVN > from packages we are currently waiting for this fix to be included in > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > contacted the package maintainer which has been very supportive of > this work and will work on the package update, but he maintain a > handful of backports in that package which is not yet included in OVN > 20.03 upstream and he's now working with the core OVN community [4] to > include it first in the branch and then create a new package for it. > Hopefully this will happen soon. > > But for now we have a few options moving on with this issue: > > 1- Wait for the new package version > 2- Mark the test as unstable until we get the new package version > 3- Compile OVN from source instead of installing it from packages > (OVN_BUILD_FROM_SOURCE=True in local.conf) i dont think we should default to ovn untill a souce build is not required. compiling form souce while not supper expensice still adds time to the job execution and im not sure we should be paying that cost on every devstack job run. we could maybe compile it once and bake the package into the image or host it on a mirror but i think we should avoid this option if we have alternitives. > > What do you think about it ? > > Here is the test patch for Cinder: > https://review.opendev.org/c/openstack/cinder/+/748227 > > * Nova: > > There are a few patches waiting for review for Nova, which are: > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > the scripts were trying to stop the Neutron agent (q-agt) process > which is not part of an ML2/OVN deployment. The patch changes the code > to check if that system unit exists before trying to stop it. > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > which can be removed one release cycle after we switch DevStack to > ML2/OVN. 
Grenade will test updating from the release version to the > master branch but, since the default of the released version is not > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > is not supported. > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > therefore we are temporarily enabling ML2/OVS for this job until that > feature lands in core OVN. > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > because he feels like a change in the default network driver a few > weeks prior to the upstream code freeze can be concerning. We do not > know yet precisely when we are changing the default due to the current > patches we need to get merged but, if this is a shared feeling among > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > until we get a new release in OpenStack. yep this is still my view. i would suggest we do the work required in the repos but not merge it until the xena release is open. thats technically at RC1 so march 25th i think we can safely do the swich after that but i would not change the defualt in any project before then. > > Here's the test patch for Nova: > https://review.opendev.org/c/openstack/nova/+/776945 > > * DevStack: > > And this is the final patch that will make this all happen: > https://review.opendev.org/c/openstack/devstack/+/735097 > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > a long and bumpy road to get to this point and I would like to say > thanks to everyone involved so far and everyone that read the whole > email, please let me know your thoughts. thanks for working on this. > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > Cheers, > Lucas > From skaplons at redhat.com Mon Mar 1 19:25:42 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 1 Mar 2021 20:25:42 +0100 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: <20210301192542.emodvk55xnf4h447@p1.localdomain> Hi, On Mon, Mar 01, 2021 at 12:10:27PM -0600, Kendall Waters wrote: > Hi Pierre, > > Do you mind registering again and seeing if the problem is fixed? And if not, do you mind sending a screenshot so I can see what is going on? Yes, it's fine. I see Date as Mon, Apr 19, 2021 8:00 AM - Fri, Apr 23, 2021 4:00 PM CDT at the registration summary. > > Thanks! > Kendall > > Kendall Waters Perez > Marketing & Events Coordinator > Open Infrastructure Foundation > > > On Mar 1, 2021, at 12:07 PM, Pierre Riteau wrote: > > > > Hi Kendall & Kendall, > > > > While registering, I noticed the event date in Eventbrite is set in a > > way that the calendar invite is for a one-hour event on April 19. > > Could this please be corrected? 
> > > > Many thanks, > > Pierre Riteau (priteau) > > > > On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: > >> > >> Hello Everyone! > >> > >> I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! > >> > >> PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! > >> > >> Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. > >> > >> Can't wait to see you all there! > >> > >> -the Kendalls (diablo_rojo & wendallkaters) > >> > >> [1] https://www.openstack.org/ptg/ > >> [2] https://april2021-ptg.eventbrite.com > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From pierre at stackhpc.com Mon Mar 1 19:41:06 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 1 Mar 2021 20:41:06 +0100 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: No need to register again, I can see the updated dates online and it even works when I click on the calendar event creation link in my order confirmation. Thanks for fixing it so quickly! On Mon, 1 Mar 2021 at 19:11, Kendall Waters wrote: > > Hi Pierre, > > Do you mind registering again and seeing if the problem is fixed? And if not, do you mind sending a screenshot so I can see what is going on? > > Thanks! > Kendall > > Kendall Waters Perez > Marketing & Events Coordinator > Open Infrastructure Foundation > > > On Mar 1, 2021, at 12:07 PM, Pierre Riteau wrote: > > > > Hi Kendall & Kendall, > > > > While registering, I noticed the event date in Eventbrite is set in a > > way that the calendar invite is for a one-hour event on April 19. > > Could this please be corrected? > > > > Many thanks, > > Pierre Riteau (priteau) > > > > On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: > >> > >> Hello Everyone! > >> > >> I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! > >> > >> PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! > >> > >> Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. > >> > >> Can't wait to see you all there! > >> > >> -the Kendalls (diablo_rojo & wendallkaters) > >> > >> [1] https://www.openstack.org/ptg/ > >> [2] https://april2021-ptg.eventbrite.com > From eacosta at uesc.br Mon Mar 1 19:46:45 2021 From: eacosta at uesc.br (Eduardo Almeida Costa) Date: Mon, 1 Mar 2021 16:46:45 -0300 Subject: Question about Ubuntu Server Message-ID: Hello everybody. At my workplace, we will migrate from CentOS to Ubuntu Server. However, reading about it on forums and official documentation, it was not clear to me which version of Ubuntu Server is best for the production medium, 18.04 or 20.04. Can someone tell me what better version I can implement for production servers? 
My best regards. Eduardo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon Mar 1 19:52:29 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 1 Mar 2021 20:52:29 +0100 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: It will depend on the OpenStack version you will run in production. Train - https://governance.openstack.org/tc/reference/runtimes/train.html Ussuri - https://governance.openstack.org/tc/reference/runtimes/ussuri.html Victoria - https://governance.openstack.org/tc/reference/runtimes/victoria.html Em seg., 1 de mar. de 2021 às 20:48, Eduardo Almeida Costa escreveu: > Hello everybody. > > At my workplace, we will migrate from CentOS to Ubuntu Server. > > However, reading about it on forums and official documentation, it was not > clear to me which version of Ubuntu Server is best for the production > medium, 18.04 or 20.04. > > Can someone tell me what better version I can implement for production > servers? > > My best regards. > > Eduardo. > > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From raubvogel at gmail.com Mon Mar 1 19:54:41 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Mon, 1 Mar 2021 14:54:41 -0500 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: So openstack does run in debin/ubuntu? On Mon, Mar 1, 2021 at 2:54 PM Iury Gregory wrote: > It will depend on the OpenStack version you will run in production. > > Train - https://governance.openstack.org/tc/reference/runtimes/train.html > Ussuri - > https://governance.openstack.org/tc/reference/runtimes/ussuri.html > Victoria - > https://governance.openstack.org/tc/reference/runtimes/victoria.html > > Em seg., 1 de mar. de 2021 às 20:48, Eduardo Almeida Costa < > eacosta at uesc.br> escreveu: > >> Hello everybody. >> >> At my workplace, we will migrate from CentOS to Ubuntu Server. >> >> However, reading about it on forums and official documentation, it was >> not clear to me which version of Ubuntu Server is best for the production >> medium, 18.04 or 20.04. >> >> Can someone tell me what better version I can implement for production >> servers? >> >> My best regards. >> >> Eduardo. >> >> > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Part of the puppet-manager-core team in OpenStack* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon Mar 1 19:57:07 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 1 Mar 2021 20:57:07 +0100 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: Yes, it does. Em seg., 1 de mar. de 2021 às 20:54, Mauricio Tavares escreveu: > So openstack does run in debin/ubuntu? > > On Mon, Mar 1, 2021 at 2:54 PM Iury Gregory wrote: > >> It will depend on the OpenStack version you will run in production. 
>> >> Train - https://governance.openstack.org/tc/reference/runtimes/train.html >> >> Ussuri - >> https://governance.openstack.org/tc/reference/runtimes/ussuri.html >> Victoria - >> https://governance.openstack.org/tc/reference/runtimes/victoria.html >> >> Em seg., 1 de mar. de 2021 às 20:48, Eduardo Almeida Costa < >> eacosta at uesc.br> escreveu: >> >>> Hello everybody. >>> >>> At my workplace, we will migrate from CentOS to Ubuntu Server. >>> >>> However, reading about it on forums and official documentation, it was >>> not clear to me which version of Ubuntu Server is best for the production >>> medium, 18.04 or 20.04. >>> >>> Can someone tell me what better version I can implement for production >>> servers? >>> >>> My best regards. >>> >>> Eduardo. >>> >>> >> >> -- >> >> >> *Att[]'sIury Gregory Melo Ferreira * >> *MSc in Computer Science at UFCG* >> *Part of the puppet-manager-core team in OpenStack* >> *Software Engineer at Red Hat Czech* >> *Social*: https://www.linkedin.com/in/iurygregory >> *E-mail: iurygregory at gmail.com * >> > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Mar 1 20:00:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Mar 2021 20:00:35 +0000 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: <20210301200035.u5pwq5akocoto24w@yuggoth.org> On 2021-03-01 16:46:45 -0300 (-0300), Eduardo Almeida Costa wrote: > At my workplace, we will migrate from CentOS to Ubuntu Server. > > However, reading about it on forums and official documentation, it > was not clear to me which version of Ubuntu Server is best for the > production medium, 18.04 or 20.04. > > Can someone tell me what better version I can implement for > production servers? Both 18.04 and 20.04 are "long-term support" releases, meaning Canonical will continue to provide security updates and other bug fixes in them for longer than the intermediate versions. 20.04 is, as its number would seem to indicate, newer than 18.04 (by roughly two years), so I wouldn't choose the older version unless you specifically need to run older software on it which won't work on the newer one for some reason. Since you posted this to the OpenStack discussion mailing list, I assume you're planning to install some version of OpenStack on Ubuntu. If so, you should look at our tested runtimes to see which platform was used to test the version of OpenStack you want to run: https://governance.openstack.org/tc/reference/project-testing-interface.html#tested-runtimes -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Mon Mar 1 20:04:20 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Mar 2021 20:04:20 +0000 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: <20210301200420.2ukdtyedumn4gkpa@yuggoth.org> On 2021-03-01 14:54:41 -0500 (-0500), Mauricio Tavares wrote: > So openstack does run in debin/ubuntu? [...] 
Canonical (the company which produces Ubuntu) even has a commercially supported product built around running OpenStack on Ubuntu: https://ubuntu.com/openstack The Debian community maintains a comprehensive distribution of OpenStack as well: https://wiki.debian.org/OpenStack Hope that helps. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From motosingh at yahoo.co.uk Mon Mar 1 20:48:29 2021 From: motosingh at yahoo.co.uk (Dees) Date: Mon, 1 Mar 2021 20:48:29 +0000 (UTC) Subject: [openstack-community] Ovn chassis error possibly networking config error. In-Reply-To: References: <952196218.884082.1614451027288.ref@mail.yahoo.com> <952196218.884082.1614451027288@mail.yahoo.com> Message-ID: <1580253120.2031001.1614631709883@mail.yahoo.com> Thanks for your reply much appreciated, deployed OpenStack through JUJU. I managed to get around that issue by restarting services but I have fallen into another issue now, looks like I need to define network time somewhere on the config I am right? When I create a network trying to get out a second interface I am getting errors. Any suggestions, please? labuser at maas:~/openstack-base$ openstack network create Pub_Net1 --external --share --default --provider-network-type vlan --provider-segment 201 --provider-physical-network externalError while executing the command: BadRequestException: 400, Invalid input for operation: physical_network 'external' unknown for VLAN provider network. The first network segment  is ok labuser at maas:~/openstack-base$ openstack network create Pub_Net --external --share --default \   --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1 Config on juju   ovn-chassis:    annotations:      gui-x: '120'      gui-y: '1030'    charm: cs:ovn-chassis-10    # *** Please update the `bridge-interface-mappings` to values suitable ***    # *** for thehardware used in your deployment.  See the referenced     ***    # *** documentation at the top of this file.                           ***    options:      ovn-bridge-mappings: physnet1:br-data external:br-ex      bridge-interface-mappings: br-data:eno2 br-ex:eno3 Status ubuntu at 1:~$ sudo ovs-vsctl showe3b50030-b436-4d26-95af-4abeea1097d5    Manager "ptcp:6640:127.0.0.1"        is_connected: true    Bridge br-data        fail_mode: standalone        datapath_type: system        Port eno2            Interface eno2                type: system        Port br-data            Interface br-data                type: internal    Bridge br-int        fail_mode: secure        datapath_type: system        Port br-int            Interface br-int                type: internal        Port ovn-comput-0            Interface ovn-comput-0                type: geneve                options: {csum="true", key=flow, remote_ip="10.141.14.36"}    Bridge br-ex        fail_mode: standalone        datapath_type: system        Port eno3            Interface eno3                type: system        Port br-ex            Interface br-ex                type: internal    ovs_version: "2.13.1" Kind regards,Deepesh On Monday, 1 March 2021, 10:39:51 GMT, Lucas Alvares Gomes wrote: Hi, Few questions inline On Mon, Mar 1, 2021 at 12:09 AM Amy Marrich wrote: > > Deepesh, > > Forwarding to the OpenStack Discuss mailing list. 
> > Thanks, > > Amy (spotz) > > On Sat, Feb 27, 2021 at 12:42 PM Dees wrote: >> >> Hi All, >> >> Deployed openstack all service yesterday and seemed to be running since power cycling the openstack today. >> What did you use to deploy OpenStack ? Based on the command below I don't recognize that output. >> Then deployed and configured VM instance with networking but config onv-chassis is reporting error, as I am new to openstack (having used it number of years back) mind guiding where I should be looking any help will be greatly appreciated. >> >> >> >> nova-compute/0*              active    idle      1        10.141.14.62                      Unit is ready >>  ntp/0*                    active    idle                10.141.14.62    123/udp            chrony: Ready >>  ovn-chassis/0*            error    idle                10.141.14.62                      hook failed: "config-changed" >> I do not recognize this output or that error in particular, but I assume that ovn-chassis is the name of the process of the ovn-controller running on the node ? If so, the configuration of ovn-controller should be present in the local OVSDB instance, can you please paste the output of the following command: $ sudo ovs-vsctl list Open_VSwitch . Also, do you have any logs related to that process ? >> >> Deployed VM instance using the following command line. >> >> >> curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img | \ >>    openstack image create --public --container-format bare --disk-format qcow2 \ >>    --property architecture=x86_64 --property hw_disk_bus=virtio \ >>    --property hw_vif_model=virtio "focal_x86_64" >> >> openstack flavor create --ram 512 --disk 4 m1.micro >> >> openstack network create Pub_Net --external --share --default \ >>    --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1 >> >> openstack subnet create Pub_Subnet --allocation-pool start=10.141.40.40,end=10.141.40.62 \ >>    --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \ >>    --network Pub_Net >> >> openstack network create Network1 --internal >> openstack subnet create Subnet1 \ >>    --allocation-pool start=192.168.0.10,end=192.168.0.199 \ >>    --subnet-range 192.168.0.0/24 \ >>    --gateway 192.168.0.1 --dns-nameserver 10.0.0.3 \ >>    --network Network1 >> >> Kind regards, >> Deepesh >> _______________________________________________ >> Community mailing list >> Community at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/community -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Mar 1 20:54:00 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Mar 2021 20:54:00 +0000 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: <20210301205400.74opdnu26wdqfher@yuggoth.org> On 2021-03-01 14:54:41 -0500 (-0500), Mauricio Tavares wrote: > So openstack does run in debin/ubuntu? [...] This is also a useful related resource: https://www.openstack.org/marketplace/distros/ -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Mon Mar 1 20:59:18 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 1 Mar 2021 12:59:18 -0800 Subject: [SIG][Containers][k8s] Merge? Retire? In-Reply-To: References: Message-ID: Excellent! 
I will put up a patch to update the chair to being you then? I will make sure to add you as a reviewer when I post it. I think since there is no current interest in the containers SIG, perhaps we just remove it and have only the k8s SIG? -Kendall On Tue, Feb 23, 2021 at 4:34 PM feilong wrote: > Hi Kendall, > > Thanks for driving this firstly. I was the Magnum PTL for several cycles > and I can see there are still very strong interest from the community > members about running k8s on OpenStack and/or the reverse. I'm happy to > contribute my time for the SIG to bridge the two communities if it's > needed. Cheers. > > > On 24/02/21 1:02 pm, Kendall Nelson wrote: > > Hello! > > As you might have noticed, we've been working on getting the sig > governance site updated with current chairs, sig statuses. etc. Two SIGs > that are in need of updates still. > > The k8s SIG's listed chairs have all moved on. The container SIG is still > listed as forming. While not the exact same goal/topic, perhaps these can > be merged? And if so, do we have any volunteers for chairs? > > The other option is to simply remove the Container SIG as its not > completely formed at this point and retire the k8s SIG. > > Thoughts? Volunteers? > > -Kendall (diablo_rojo) > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Mon Mar 1 23:17:11 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 1 Mar 2021 17:17:11 -0600 Subject: [openstack-helm] IRC Meeting Cancelled March 2nd Message-ID: Hello, Since there are no agenda items [0] for the IRC meeting tomorrow, March 2nd, we will cancel it. Our next IRC meeting will be March 9th. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Tue Mar 2 01:02:57 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Mon, 1 Mar 2021 22:02:57 -0300 Subject: [cinder] bug deputy report for week of 2021-02-22 Message-ID: This is a bug report from 2021-02-22 to 2021-02-26. Most of these bugs were discussed at the Cinder meeting last Wednesday 2021-02-24. Critical: - High: - https://bugs.launchpad.net/os-brick/+bug/1915678: "iSCSI+Multipath: Volume attachment hangs if session scanning fails''. Assigned to Takashi Kajinami (kajinamit). Medium: - https://bugs.launchpad.net/os-brick/+bug/1916264: "NVMeOFConnector can't connect volume ". Assigned to Zohar Mamedov (zoharm). - https://bugs.launchpad.net/python-cinderclient/+bug/1915996: "Fetching server version fails to support passing client certificates". Assigned to Sri Harsha mekala (harshayahoo). - https://bugs.launchpad.net/cinder/+bug/1915800: " XtremIO does not support ports filtering". Assigned to Vladislav Belogrudov (vlad-belogrudov). Low: - https://bugs.launchpad.net/cinder/+bug/1916258: "[docs] Install and configure a storage node in cinder ". Assigned to Sofia Enriquez (enriquetaso). Incomplete:- Undecided/Unconfirmed: - https://bugs.launchpad.net/cinder/+bug/1916980: "cinder sends old db object when delete an attachment". Assigned to wu.chunyang (wuchunyang). 
- https://bugs.launchpad.net/cinder/+bug/1916843: "Backup create failed: RBD volume flatten too long causing mq to timed out". Unassigned. Not a bug:- Feel free to reply/reach me if I missed something. Regards Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Mar 2 04:59:35 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 1 Mar 2021 20:59:35 -0800 Subject: [all][elections][ptl][tc] Combined PTL/TC March 2021 Election Season Message-ID: Election details: https://governance.openstack.org/election/ The nomination period officially begins Mar 02, 2021 23:45 UTC. Please read the stipulations and timelines for candidates and electorate contained in this governance documentation. Due to circumstances of timing, PTL and TC elections for the coming cycle will run concurrently; deadlines for their nomination and voting activities are synchronized but will still use separate ballots. Please note, if only one candidate is nominated as PTL for a project team during the PTL nomination period, that candidate will win by acclaim, and there will be no poll. There will only be a poll if there is more than one candidate stepping forward for a project team's PTL position. There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only. If you have any questions which you feel affect others please reply to this email thread. If you have any questions that you which to discuss in private please email any of the election officials[1] so that we may address your concerns. Thank you, -Kendall Nelson (diablo_rojo) [1] https://governance.openstack.org/election/#election-officials -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Mar 2 07:41:54 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 02 Mar 2021 08:41:54 +0100 Subject: [neutron] Bug deputy report (week starting on 2021-02-22) In-Reply-To: References: Message-ID: <20622341.81JxD3pCW9@p1> Hi, Dnia poniedziałek, 1 marca 2021 12:58:18 CET Bernard Cafarelli pisze: > Hello neutrinos, > > this is our first new bug deputy rotation for 2021, overall a quiet week > with most bugs already fixed or good progress. 
> > DVR folks may be interested in the discussion ongoing on the single High > bug, also I left one bug in Opinion on OVS agent main loop > > Critical > * neutron.tests.unit.common.test_utils.TestThrottler test_throttler is > failing - https://bugs.launchpad.net/neutron/+bug/1916572 > Patch was quickly merged: > https://review.opendev.org/c/openstack/neutron/+/777072 > > High > * [dvr] bound port permanent arp entries never deleted - > https://bugs.launchpad.net/neutron/+bug/1916761 > Introduced with > https://review.opendev.org/q/I538aa6d68fbb5ff8431f82ba76601ee34c1bb181 > Lengthy discussion in the bug itself and suggested fix > https://review.opendev.org/c/openstack/neutron/+/777616 > > Medium > * [OVN][QOS] OVN DB QoS rule is not removed when a FIP is dissasociated - > https://bugs.launchpad.net/neutron/+bug/1916470 > Patch by ralonsoh merged > https://review.opendev.org/c/openstack/neutron/+/776916 > * A privsep daemon spawned by neutron-openvswitch-agent hangs when debug > logging is enabled - https://bugs.launchpad.net/neutron/+bug/1896734 > Actually reported earlier on charm and oslo, ralonsoh looking into it > from neutron side > * StaleDataError: DELETE statement on table 'standardattributes' expected > to delete 2 row(s); 1 were matched - > https://bugs.launchpad.net/neutron/+bug/1916889 > Spotted by Liu while looking at dvr bug, patch sent to neutron-lib > https://review.opendev.org/c/openstack/neutron-lib/+/777581 > > Low > * ovn-octavia-provider can attempt to write protocol=None to OVSDB - > https://bugs.launchpad.net/neutron/+bug/1916646 > This appears in some functional test results, > otherwiseguy sent > https://review.opendev.org/c/openstack/ovn-octavia-provider/+/777201 > > Opinion > * use deepcopy in function rpc_loop of ovs-agent - > https://bugs.launchpad.net/neutron/+bug/1916761 I think You pasted wrong link here. Probably it should be https:// bugs.launchpad.net/neutron/+bug/1916918, right? > My hunch is that we are fine here, but please chime in other opinions > > Passing the baton to our PTL for next week! > -- > Bernard Cafarelli -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From bdobreli at redhat.com Tue Mar 2 09:05:32 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 2 Mar 2021 10:05:32 +0100 Subject: Puppet openstack modules In-Reply-To: References: Message-ID: On 2/27/21 4:06 PM, Takashi Kajinami wrote: > Ruby codes in puppet-openstack repos are used for the following three > purposes. >  1. unit tests and acceptance tests using serverspec framework (files > placed under spec) >  2. implementation of custom type, provider, and function And some modules do really heavy use of those customizations written in ruby, like puppet-pacemaker [0], which is 57% in Ruby and only 41% in Puppet. [0] https://github.com/openstack/puppet-pacemaker >  3. template files (We use ERB instead of pure Ruby about this, though) > > 1 is supposed to be used only for testing during deployment but 2 and 3 > can be used > in any production use case in combination with puppet manifest files to > manage > OpenStack deployments. > > > On Sat, Feb 27, 2021 at 5:01 AM Bessghaier, Narjes > > wrote: > > > Dear OpenStack team, > > My name is Narjes and I'm a PhD student at the University of > Montréal, Canada. 
> >  My current work consists of analyzing code reviews on the puppet > modules. I would like to precisely know what the ruby files are used > for in the puppet modules. As mentioned in the official website, > most of unit test are written in ruby. Are ruby files destined to > carry out units tests or destined for production code. > > I appreciate your help, > Thank you > -- Best regards, Bogdan Dobrelya, Irc #bogdando From admin at gsic.uva.es Tue Mar 2 09:13:49 2021 From: admin at gsic.uva.es (Cristina Mayo) Date: Tue, 2 Mar 2021 10:13:49 +0100 Subject: [swift] Unable to get or create objects Message-ID: Hello, I don't have a lot of knowledge about Openstack. I have installed Swift in my Openstack Ussuri cloud with one storage node but I am not be able to list or create objects. In the controller node I saw these errors: proxy-server: ERROR with Account server 10.20.20.1:6202/sdb re: Trying to HEAD /v1/AUTH_c2b7d6242f3140f09d283e8fbb88732a: Connection refused (txn: tx370fb5929a6c4cf397384-00603dfe35) proxy-server: Account HEAD returning 503 for [] (txn:tx370fb5929a6c4cf397384-00603dfe35) Any idea? Thanks in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From alistairncoles at gmail.com Tue Mar 2 10:15:53 2021 From: alistairncoles at gmail.com (Alistair Coles) Date: Tue, 2 Mar 2021 10:15:53 +0000 Subject: [swift] Unable to get or create objects In-Reply-To: References: Message-ID: Cristina You might like to join the #openstack-swift irc channel to get help ( https://wiki.openstack.org/wiki/IRC ) It sounds like you don't have an account server running, or it is running but listening on a different port than configured in your account ring. You can verify which swift services are running on your storage node using: swift-init status main and/or check which ports they are listening on using: lsof -i tcp +c 15 (you're looking for lines similar to 'swift-account-s 4309 vagrant 17u IPv4 3466818 0t0 TCP localhost:6012 (LISTEN)') The host:port should match what is in the account ring, on your controller node, which is shown using: swift-ring-builder /etc/swift/account.builder You can also try to make an http request from your controller node to the account server using something like: curl -I http://localhost:6012/recon/version HTTP/1.1 200 OK Content-Length: 28 Content-Type: application/json Date: Tue, 02 Mar 2021 10:03:47 GMT ( in your case curl -I http://10.20.20.1:6202/recon/version ) If necessary start services using: swift-init restart account A similar diagnosis can be used if you have problems with connections to container or object services. Alistair On Tue, Mar 2, 2021 at 9:18 AM Cristina Mayo wrote: > Hello, > > I don't have a lot of knowledge about Openstack. I have installed Swift > in my Openstack Ussuri cloud with one storage node but I am not be able to > list or create objects. In the controller node I saw these errors: > > proxy-server: ERROR with Account server 10.20.20.1:6202/sdb re: Trying to > HEAD /v1/AUTH_c2b7d6242f3140f09d283e8fbb88732a: Connection refused (txn: > tx370fb5929a6c4cf397384-00603dfe35) > proxy-server: Account HEAD returning 503 for [] > (txn:tx370fb5929a6c4cf397384-00603dfe35) > > Any idea? > > Thanks in advance! > -------------- next part -------------- An HTML attachment was scrubbed... 
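If nothing is listening on the expected port at all, also check the bind settings in the account server config on the storage node. As an illustration only (these values are placeholders and need to line up with your ring and your node's management IP):

  # /etc/swift/account-server.conf
  [DEFAULT]
  bind_ip = 10.20.20.1
  bind_port = 6202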
URL: From zigo at debian.org Tue Mar 2 10:25:34 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 2 Mar 2021 11:25:34 +0100 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: <121f78af-dcca-8139-dc78-cf8d7909335b@debian.org> On 3/1/21 8:54 PM, Mauricio Tavares wrote: > So openstack does run in debin/ubuntu? Yes, and if you're an IRC person, I enjoy helping people. :) Join us on #debian-openstack on the OFTC network. If you didn't know about it, have a look here: https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer This is fully in Debian Bullseye, which at this point, I would be advising to run OpenStack (even if it's not released yet, the fact that it is frozen is good enough for production, IMO). With this solution, you do not need *anything* outside of the Debian repositories (even the puppet modules are packaged). I like to say also that it's very flexible, and will install all the OpenStack components it can depending on what node type you install. I hope this helps, Cheers, Thomas Goirand (zigo) From bcafarel at redhat.com Tue Mar 2 14:10:39 2021 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Tue, 2 Mar 2021 15:10:39 +0100 Subject: [neutron] Bug deputy report (week starting on 2021-02-22) In-Reply-To: <20622341.81JxD3pCW9@p1> References: <20622341.81JxD3pCW9@p1> Message-ID: On Tue, 2 Mar 2021 at 08:42, Slawek Kaplonski wrote: > Hi, > > Dnia poniedziałek, 1 marca 2021 12:58:18 CET Bernard Cafarelli pisze: > > Hello neutrinos, > > > > this is our first new bug deputy rotation for 2021, overall a quiet week > > with most bugs already fixed or good progress. > > > > DVR folks may be interested in the discussion ongoing on the single High > > bug, also I left one bug in Opinion on OVS agent main loop > > > > Critical > > * neutron.tests.unit.common.test_utils.TestThrottler test_throttler is > > failing - https://bugs.launchpad.net/neutron/+bug/1916572 > > Patch was quickly merged: > > https://review.opendev.org/c/openstack/neutron/+/777072 > > > > High > > * [dvr] bound port permanent arp entries never deleted - > > https://bugs.launchpad.net/neutron/+bug/1916761 > > Introduced with > > https://review.opendev.org/q/I538aa6d68fbb5ff8431f82ba76601ee34c1bb181 > > Lengthy discussion in the bug itself and suggested fix > > https://review.opendev.org/c/openstack/neutron/+/777616 > > > > Medium > > * [OVN][QOS] OVN DB QoS rule is not removed when a FIP is dissasociated - > > https://bugs.launchpad.net/neutron/+bug/1916470 > > Patch by ralonsoh merged > > https://review.opendev.org/c/openstack/neutron/+/776916 > > * A privsep daemon spawned by neutron-openvswitch-agent hangs when debug > > logging is enabled - https://bugs.launchpad.net/neutron/+bug/1896734 > > Actually reported earlier on charm and oslo, ralonsoh looking into it > > from neutron side > > * StaleDataError: DELETE statement on table 'standardattributes' expected > > to delete 2 row(s); 1 were matched - > > https://bugs.launchpad.net/neutron/+bug/1916889 > > Spotted by Liu while looking at dvr bug, patch sent to neutron-lib > > https://review.opendev.org/c/openstack/neutron-lib/+/777581 > > > > Low > > * ovn-octavia-provider can attempt to write protocol=None to OVSDB - > > https://bugs.launchpad.net/neutron/+bug/1916646 > > This appears in some functional test results, > > otherwiseguy sent > > https://review.opendev.org/c/openstack/ovn-octavia-provider/+/777201 > > > > Opinion > > * use deepcopy in function rpc_loop of ovs-agent - > > 
https://bugs.launchpad.net/neutron/+bug/1916761 > > I think You pasted wrong link here. Probably it should be https:// > bugs.launchpad.net/neutron/+bug/1916918, right? > Exactly, I just realized it when checking the links for the IRC meeting! 1916918 is correct one here > > > My hunch is that we are fine here, but please chime in other opinions > > > > Passing the baton to our PTL for next week! > > -- > > Bernard Cafarelli > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Tue Mar 2 14:28:19 2021 From: ykarel at redhat.com (Yatin Karel) Date: Tue, 2 Mar 2021 19:58:19 +0530 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> Message-ID: Hi, On Thu, Sep 12, 2019 at 12:30 AM Jeremy Stanley wrote: > > On 2019-09-09 12:53:26 +0530 (+0530), Yatin Karel wrote: > [...] > > Can someone from Release or Infra Team can do the needful of > > removing stable/ocata and stable/pike branch for TripleO projects > > being EOLed for pike/ocata in > > https://review.opendev.org/#/c/677478/ and > > https://review.opendev.org/#/c/678154/. > > I've attempted to extract the lists of projects from the changes you > linked. I believe you're asking to have the stable/ocata branch > deleted from these projects: > > openstack/instack-undercloud > openstack/instack > openstack/os-apply-config > openstack/os-cloud-config > openstack/os-collect-config > openstack/os-net-config > openstack/os-refresh-config > openstack/puppet-tripleo > openstack/python-tripleoclient > openstack/tripleo-common > openstack/tripleo-heat-templates > openstack/tripleo-image-elements > openstack/tripleo-puppet-elements > openstack/tripleo-ui > openstack/tripleo-validations > > And the stable/pike branch deleted from these projects: > > openstack/instack-undercloud > openstack/instack > openstack/os-apply-config > openstack/os-collect-config > openstack/os-net-config > openstack/os-refresh-config > openstack/paunch > openstack/puppet-tripleo > openstack/python-tripleoclient > openstack/tripleo-common > openstack/tripleo-heat-templates > openstack/tripleo-image-elements > openstack/tripleo-puppet-elements > openstack/tripleo-ui > openstack/tripleo-validations > > Can you confirm? Also, have you checked for and abandoned all open > changes on the affected branches? I totally missed this mail, in today's Tripleo meeting it was raised so get back to this again. @Jeremy yes the list is correct. These branches were EOLed long ago is it still necessary to abandon all open reviews? Anyway, I will get those cleaned. > -- > Jeremy Stanley Thanks and Regards Yatin Karel From fungi at yuggoth.org Tue Mar 2 15:04:51 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 2 Mar 2021 15:04:51 +0000 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> Message-ID: <20210302150451.ksnug53wzup747t4@yuggoth.org> On 2021-03-02 19:58:19 +0530 (+0530), Yatin Karel wrote: [...] 
> is it still necessary to abandon all open reviews? [...] Gerrit will not allow deletion of a branch if there are any changes still open for it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Tue Mar 2 15:53:26 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 2 Mar 2021 17:53:26 +0200 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20210302150451.ksnug53wzup747t4@yuggoth.org> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> <20210302150451.ksnug53wzup747t4@yuggoth.org> Message-ID: On Tue, Mar 2, 2021 at 5:06 PM Jeremy Stanley wrote: > On 2021-03-02 19:58:19 +0530 (+0530), Yatin Karel wrote: > [...] > > is it still necessary to abandon all open reviews? > [...] > > thanks ykarel for bringing this up in today's tripleo irc meeting - I completely missed this thread > Gerrit will not allow deletion of a branch if there are any changes > still open for it. > thank you and ack Jeremy - I just abandoned a couple of changes I had there (eg https://review.opendev.org/c/openstack/instack-undercloud/+/777368) to remove deprecated things from zuul layouts, but instead removing the branch is a better solution no more zuul layout to worry about ;) - one of those is currently waiting in the gate ( https://review.opendev.org/c/openstack/os-apply-config/+/777533) so i didn't hit abandon ... if it fails for whatever reason then I can abandon that one too. marios > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken at jots.org Tue Mar 2 20:31:07 2021 From: ken at jots.org (Ken D'Ambrosio) Date: Tue, 02 Mar 2021 15:31:07 -0500 Subject: anti-affinity: what're the mechanics? Message-ID: <371d077a5d2e99be4df344eceb25c43f@jots.org> Hey, all. Turns out we really need anti-affinity running on our (very freaking old -- Juno) clouds. I'm trying to find docs that describe its functionality, and am failing. If I enable it, and (say) have 10 hypervisors, and 12 VMs to fire off, what happens when VM #11 goes to fire? Does it fail, or does the scheduler just continue to at least *try* to maintain as few possible on each hypervisor? Thanks! -Ken From motosingh at yahoo.co.uk Tue Mar 2 22:00:17 2021 From: motosingh at yahoo.co.uk (Dees) Date: Tue, 2 Mar 2021 22:00:17 +0000 (UTC) Subject: [openstack-community] Ovn chassis error possibly networking config error. In-Reply-To: <1580253120.2031001.1614631709883@mail.yahoo.com> References: <952196218.884082.1614451027288.ref@mail.yahoo.com> <952196218.884082.1614451027288@mail.yahoo.com> <1580253120.2031001.1614631709883@mail.yahoo.com> Message-ID: <1178477318.2869706.1614722417156@mail.yahoo.com> Hi All, I got around that problem too. I have a new problem now as I need to assign two external networks to VM with provider segment as vlan. Any guidance please? So I tried this it doesn't work, not sure why. 
openstack network create Pub_Net --external --share --default \   --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1 openstack subnet create Pub_Subnet --allocation-pool start=10.141.40.10,end=10.141.40.20 \   --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \   --network Pub_Net NET_ID1=$(openstack network list | grep -w Pub_Net| awk '{ print $2 }') openstack server create --image 'focal_x86_64' --flavor m1.micro \   --key-name User1-key --security-group Allow_SSHPing --nic net-id=$NET_ID1 \   focal-1 But which router and floating IP it works but as I need two public networks with only one floating IP allowed is an issue.----------------------------------------------------------------------------------------------------------------------------------------------- openstack network create Pub_Net --external --share --default \   --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1 openstack subnet create Pub_Subnet --allocation-pool start=10.141.40.10,end=10.141.40.20 \   --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \   --network Pub_Net openstack network create Network1 --internalopenstack subnet create Subnet1 \   --allocation-pool start=192.168.0.10,end=192.168.0.199 \   --subnet-range 192.168.0.0/24 \   --gateway 192.168.0.1 --dns-nameserver 10.0.0.3 \   --network Network1 openstack router create Router1openstack router add subnet Router1 Subnet1openstack router set Router1 --external-gateway Pub_Net NET_ID1=$(openstack network list | grep Network1 | awk '{ print $2 }') openstack server create --image 'focal_x86_64' --flavor m1.micro \   --key-name User1-key --security-group Allow_SSHPing --nic net-id=$NET_ID1 \   focal-1 FLOATING_IP=$(openstack floating ip create -f value -c floating_ip_address Pub_Net) openstack server add floating ip focal-1 $FLOATING_IP Kind regards,Deepesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From ankelezhang at gmail.com Tue Mar 2 07:33:04 2021 From: ankelezhang at gmail.com (Ankele zhang) Date: Tue, 2 Mar 2021 15:33:04 +0800 Subject: Some questions about Ironic bare metal Message-ID: Hi, I have included Ironic service into my rocky OpenStack platform. Using the IPMI driver. The cleaning network and the provisioning network are my provider network. I have some questions about Ironic deleting and inspecting. 1、every time I delete my baremetal nodes, I need to delete the associated servers first. the servers delete successfully, but the associated servers filed in nodes are still exist. So I need to set nodes to maintenance mode before I can delete bare metal nodes. what's more, the port list in 'openstack port list' which belong to the nodes can not be deleted automatically. How can I delete nodes correctly? 2、If the new created baremetal nodes' system disks have exist OS data.I cannot inspect them, I need to clean them first, but the cleaning step need MAC address of the nodes and the MAC addresses are obtained by inspecting. So what should I do? I don't want to fill in the MAC addresses manually. I have got the PXE boot but was immediately plugged into the existing system as: [image: image.png] 0.1s later: [image: image.png] Looking forward to your help. Ankele. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 190407 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 106333 bytes Desc: not available URL: From fungi at yuggoth.org Tue Mar 2 23:12:33 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 2 Mar 2021 23:12:33 +0000 Subject: [all][elections][ptl][tc] IMPORTANT Note About March 2021 Elections Message-ID: <20210302231232.akpm65nswdpmygug@yuggoth.org> Please note that to be eligible to vote in upcoming Technical Committee and Project Team Lead elections, you must make sure that your Preferred Email Address in Gerrit appears somewhere in your Open Infrastructure Foundation Individual Member profile. To check this, visit https://review.opendev.org/settings/#EmailAddresses while authenticated to Gerrit and make a note of which address has the Preferred button selected. Next visit https://openstackid.org/accounts/user/profile and (after authenticating) make sure that address appears in at least one of the Email, Second Email, or Third Email fields. If not, add it to one of the available Email fields there and click the Save button. This requirement has changed slightly from before (when any address known to Gerrit was sufficient), because of slightly stricter API handling in the newer Gerrit release we're now running. If you have any questions, please feel free to reply or reach out to the technical election officials in the #openstack-election IRC channel on Freenode. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Tue Mar 2 23:46:18 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 2 Mar 2021 15:46:18 -0800 Subject: [all][elections][ptl][tc] Combined PTL/TC April 2021 Nominations Kickoff Message-ID: Nominations for OpenStack PTLs (Project Team Leads) and TC (Technical Committee) positions (5 positions) are now open and will remain open until Mar 09, 2021 23:45 UTC. All nominations must be submitted as a text file to the openstack/election repository as explained at https://governance.openstack.org/election/#how-to-submit-a-candidacy Please make sure to follow the candidacy file naming convention: candidates/xena// (for example, "candidates/xena/TC/stacker at example.org"). The name of the file should match an email address for your current OpenStack Foundation Individual Membership. Take this opportunity to ensure that your OSF member profile contains current information: https://www.openstack.org/profile/ Any OpenStack Foundation Individual Member can propose their candidacy for an available, directly-elected seat on the Technical Committee. In order to be an eligible candidate for PTL you must be an OpenStack Foundation Individual Member. PTL candidates must also have contributed to the corresponding team during the Victoria to Wallaby timeframe, Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC. Your Gerrit account must also have a verified email address matching the one used in your candidacy filename. Both PTL and TC elections will be held from Mar 12, 2021 23:45 UTC through to Mar 19, 2021 23:45 UTC. 
The electorate for the TC election are the OpenStack Foundation Individual Members who have a code contribution to one of the official teams over the Victoria to Wallaby timeframe, Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC, as well as any Extra ATCs who are acknowledged by the TC. The electorate for a PTL election are the OpenStack Foundation Individual Members who have a code contribution over the Victoria to Wallaby timeframe, Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC, in a deliverable repository maintained by the team which the PTL would lead, as well as the Extra ATCs who are acknowledged by the TC for that specific team. The list of project teams can be found at https://governance.openstack.org/tc/reference/projects/ and their individual team pages include lists of corresponding Extra ATCs. Please find below the timeline: nomination starts @ Mar 02, 2021 23:45 UTC nomination ends @ Mar 09, 2021 23:45 UTC campaigning starts @ Mar 09, 2021 23:45 UTC campaigning ends @ Mar 11, 2021 23:45 UTC elections start @ Mar 12, 2021 23:45 UTC elections end @ Mar 19, 2021 23:45 UTC Shortly after election officials approve candidates, they will be listed on the https://governance.openstack.org/election/ page. The electorate is requested to confirm their email addresses in Gerrit prior to 2021-03-08 00:00:00+00:00, so that the emailed ballots are sent to the correct email address. This email address should match one which was provided in your foundation member profile as well. Gerrit account information and OSF member profiles can be updated at https://review.openstack.org/#/settings/contact and https://www.openstack.org/profile/ accordingly. If you have any questions please be sure to either ask them on the mailing list or to the elections officials: https://governance.openstack.org/election/#election-officials -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Mar 3 00:00:10 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 03 Mar 2021 00:00:10 +0000 Subject: anti-affinity: what're the mechanics? In-Reply-To: <371d077a5d2e99be4df344eceb25c43f@jots.org> References: <371d077a5d2e99be4df344eceb25c43f@jots.org> Message-ID: On Tue, 2021-03-02 at 15:31 -0500, Ken D'Ambrosio wrote: > Hey, all. Turns out we really need anti-affinity running on our (very > freaking old -- Juno) clouds. I'm trying to find docs that describe its > functionality, and am failing. If I enable it, and (say) have 10 > hypervisors, and 12 VMs to fire off, what happens when VM #11 goes to > fire? Does it fail, or does the scheduler just continue to at least > *try* to maintain as few possible on each hypervisor? in juno i belive we only have hard anti affinity via the filter. i belive it predates the soft affinit/anti-affinity filter so it will error out. the behavior i belive will depend on if you ddi a multi create or booted the vms serally if you do it serially then you should be able to boot 10 vms. if you do a multi create then it depends on if you set the min value or not. if you dont set the min value i think only 10 will boot and the last two will error. if you set --min 12 --max 12 i think they all will go to error or be deleted. i have not checked that but i belive we are ment to try and role back in that case. the soft affinity weigher was added by https://github.com/openstack/nova/commit/72ba18468e62370522e07df796f5ff74ae13e8c9 in mitaka. if you want to be able to boot all 12 then you need a weigher like that. 
for the most part you can proably backport that directly from master and use it in juno as i dont think we have matirally altered the way the filters work that much bu the weigher like the filters are also plugabl so you can backport it externally and load it if you wanted too. that is proably your best bet. > > Thanks! > > -Ken > From juliaashleykreger at gmail.com Wed Mar 3 00:04:47 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 2 Mar 2021 16:04:47 -0800 Subject: Some questions about Ironic bare metal In-Reply-To: References: Message-ID: Greetintgs, replies inline! On Tue, Mar 2, 2021 at 2:48 PM Ankele zhang wrote: > Hi, > I have included Ironic service into my rocky OpenStack platform. Using the > IPMI driver. The cleaning network and the provisioning network are my > provider network. > I have some questions about Ironic deleting and inspecting. > > 1、every time I delete my baremetal nodes, I need to delete the associated > servers first. the servers delete successfully, but the associated servers > filed in nodes are still exist. So I need to set nodes to maintenance mode > before I can delete bare metal nodes. what's more, the port list in > 'openstack port list' which belong to the nodes can not be deleted > automatically. How can I delete nodes correctly? > > Okay, are you trying `openstack baremetal node delete` before unprovisioning the instances? Basically, if you're integrated with nova, the instance has to be unprovisioned. Even with ironic on it's own, `openstack baremetal node unprovision` is what you're looking for most likely. Since we're managing physical machines, we never really want people to delete baremetal nodes from ironic except as some sort of permanent last resort or removal from ironic. Any need to do so we consider to be a bug, if that makes sense. The nodes in ironic will change states upon unprovision to "available" instead of "active" which indicates that it is deployed. A little different, but it all comes down to management and tracking of distinct long living physical machines. So Ironic does *not* delete the port if it is pre-created, because the port can realistically be moved elsewhere and the MAC address can be reset. In some cases nova doesn't delete a port upon un-provision, so it kind of depends how you reached that point if the services involved will remove the port. If you're doing it manually to provision a server, it will still need to be removed. > 2、If the new created baremetal nodes' system disks have exist OS data.I > cannot inspect them, I need to clean them first, but the cleaning step need > MAC address of the nodes and the MAC addresses are obtained by inspecting. > So what should I do? I don't want to fill in the MAC addresses manually. > I have got the PXE boot but was immediately plugged into the existing > system as: > Inspection is optional, but a MAC address is functionally required to identify the machine since BMC identification by address is not reliable on all hardware vendors. This is even more so with the case that you have an existing operating system on the machine. Granted, you may want to check your inspection PXE configuration and PXE templates, since I guess your default falls back to the disk where instead you can have the configuration fall to inspection. Generally, most people tend to use iPXE because it is a bit more powerful for things such as this. > [image: image.png] > 0.1s later: > [image: image.png] > > Looking forward to your help. > > Ankele. 
> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 190407 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 106333 bytes Desc: not available URL: From gouthampravi at gmail.com Wed Mar 3 00:56:28 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 2 Mar 2021 16:56:28 -0800 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > Hi all, > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > switching the default backend in Devstack to OVN. A lot of work has > > been done since, from porting the OVN devstack module to the DevStack > > tree, refactoring the DevStack module to install OVN from distro > > packages, implementing features to close the parity gap with ML2/OVS, > > fixing issues with tests and distros, etc... > > > > We are now very close to being able to make the switch and we've > > thought about sending this email to the broader community to raise > > awareness about this change as well as bring more attention to the > > patches that are current on review. > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > going to be developed and maintained by the upstream Neutron > > community. > can we ensure that this does not happen until the xena release. > in generall i think its ok to change the default but not this late in the cycle. > i would also like to ensure we keep at least one non ovn based multi node job in > nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. > right now the event/neutorn interaction is not the same during move operations. > > > > Below is a e per project explanation with relevant links and issues of > > where we stand with this work right now: > > > > * Keystone: > > > > Everything should be good for Keystone, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > * Glance: > > > > Everything should be good for Glace, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > * Swift: > > > > Everything should be good for Swift, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > * Ironic: > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > progress, we've changed most of the Ironic jobs to explicitly enable > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > Here is the test patch: > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > * Cinder: > > > > Cinder is almost complete. There's one test failure in the > > "tempest-slow-py3" job run on the > > "test_port_security_macspoofing_port" test. > > > > This failure is due to a bug in core OVN [1]. This bug has already > > been fixed upstream [2] and the fix has been backported down to the > > branch-20.03 [3] of the OVN project. 
However, since we install OVN > > from packages we are currently waiting for this fix to be included in > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > contacted the package maintainer which has been very supportive of > > this work and will work on the package update, but he maintain a > > handful of backports in that package which is not yet included in OVN > > 20.03 upstream and he's now working with the core OVN community [4] to > > include it first in the branch and then create a new package for it. > > Hopefully this will happen soon. > > > > But for now we have a few options moving on with this issue: > > > > 1- Wait for the new package version > > 2- Mark the test as unstable until we get the new package version > > 3- Compile OVN from source instead of installing it from packages > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > i dont think we should default to ovn untill a souce build is not required. > compiling form souce while not supper expensice still adds time to the job > execution and im not sure we should be paying that cost on every devstack job run. > > we could maybe compile it once and bake the package into the image or host it on a mirror > but i think we should avoid this option if we have alternitives. > > > > What do you think about it ? > > > > Here is the test patch for Cinder: > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > * Nova: > > > > There are a few patches waiting for review for Nova, which are: > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > the scripts were trying to stop the Neutron agent (q-agt) process > > which is not part of an ML2/OVN deployment. The patch changes the code > > to check if that system unit exists before trying to stop it. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > which can be removed one release cycle after we switch DevStack to > > ML2/OVN. Grenade will test updating from the release version to the > > master branch but, since the default of the released version is not > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > is not supported. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > therefore we are temporarily enabling ML2/OVS for this job until that > > feature lands in core OVN. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > because he feels like a change in the default network driver a few > > weeks prior to the upstream code freeze can be concerning. We do not > > know yet precisely when we are changing the default due to the current > > patches we need to get merged but, if this is a shared feeling among > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > until we get a new release in OpenStack. > yep this is still my view. > i would suggest we do the work required in the repos but not merge it until the xena release > is open. thats technically at RC1 so march 25th > i think we can safely do the swich after that but i would not change the defualt in any project > before then. 
> > > > Here's the test patch for Nova: > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > * DevStack: > > > > And this is the final patch that will make this all happen: > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > a long and bumpy road to get to this point and I would like to say > > thanks to everyone involved so far and everyone that read the whole > > email, please let me know your thoughts. > thanks for working on this. > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > Cheers, > > Lucas ++ Thank you indeed for working diligently on this important change. Please do note that devstack, and the base job that you're modifying is used by many other projects besides the ones that you have enumerated in the subject line. I suggest using [all] as a better subject line indicator to get the attention of folks like me who have filters based on the subject line. Also, the network substrate is important for the project I help maintain: Manila, which provides shared file systems over a network - so I followed your lead and submitted a dependent patch. I hope to reach out to you in case we see some breakages: https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 > > > > > From anlin.kong at gmail.com Wed Mar 3 02:59:32 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 3 Mar 2021 15:59:32 +1300 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: Hi, Thanks for all your hard work on this. I'm wondering is there any doc proposed for devstack to tell people who are not interested in OVN to keep the current devstack behaviour? I have a feeling that using OVN as default Neutron driver would break the CI jobs for some projects like Octavia, Trove, etc. which rely on ovs port for the set up. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi wrote: > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > Hi all, > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > switching the default backend in Devstack to OVN. A lot of work has > > > been done since, from porting the OVN devstack module to the DevStack > > > tree, refactoring the DevStack module to install OVN from distro > > > packages, implementing features to close the parity gap with ML2/OVS, > > > fixing issues with tests and distros, etc... > > > > > > We are now very close to being able to make the switch and we've > > > thought about sending this email to the broader community to raise > > > awareness about this change as well as bring more attention to the > > > patches that are current on review. 
> > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > > going to be developed and maintained by the upstream Neutron > > > community. > > can we ensure that this does not happen until the xena release. > > in generall i think its ok to change the default but not this late in > the cycle. > > i would also like to ensure we keep at least one non ovn based multi > node job in > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is > merged and possible after. > > right now the event/neutorn interaction is not the same during move > operations. > > > > > > Below is a e per project explanation with relevant links and issues of > > > where we stand with this work right now: > > > > > > * Keystone: > > > > > > Everything should be good for Keystone, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > * Glance: > > > > > > Everything should be good for Glace, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > * Swift: > > > > > > Everything should be good for Swift, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > * Ironic: > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > > Here is the test patch: > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > * Cinder: > > > > > > Cinder is almost complete. There's one test failure in the > > > "tempest-slow-py3" job run on the > > > "test_port_security_macspoofing_port" test. > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > been fixed upstream [2] and the fix has been backported down to the > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > from packages we are currently waiting for this fix to be included in > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > contacted the package maintainer which has been very supportive of > > > this work and will work on the package update, but he maintain a > > > handful of backports in that package which is not yet included in OVN > > > 20.03 upstream and he's now working with the core OVN community [4] to > > > include it first in the branch and then create a new package for it. > > > Hopefully this will happen soon. > > > > > > But for now we have a few options moving on with this issue: > > > > > > 1- Wait for the new package version > > > 2- Mark the test as unstable until we get the new package version > > > 3- Compile OVN from source instead of installing it from packages > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > i dont think we should default to ovn untill a souce build is not > required. > > compiling form souce while not supper expensice still adds time to the > job > > execution and im not sure we should be paying that cost on every > devstack job run. > > > > we could maybe compile it once and bake the package into the image or > host it on a mirror > > but i think we should avoid this option if we have alternitives. > > > > > > What do you think about it ? 
> > > > > > Here is the test patch for Cinder: > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > * Nova: > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > which is not part of an ML2/OVN deployment. The patch changes the code > > > to check if that system unit exists before trying to stop it. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > which can be removed one release cycle after we switch DevStack to > > > ML2/OVN. Grenade will test updating from the release version to the > > > master branch but, since the default of the released version is not > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > > is not supported. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > > therefore we are temporarily enabling ML2/OVS for this job until that > > > feature lands in core OVN. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > > because he feels like a change in the default network driver a few > > > weeks prior to the upstream code freeze can be concerning. We do not > > > know yet precisely when we are changing the default due to the current > > > patches we need to get merged but, if this is a shared feeling among > > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > > until we get a new release in OpenStack. > > yep this is still my view. > > i would suggest we do the work required in the repos but not merge it > until the xena release > > is open. thats technically at RC1 so march 25th > > i think we can safely do the swich after that but i would not change the > defualt in any project > > before then. > > > > > > Here's the test patch for Nova: > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > * DevStack: > > > > > > And this is the final patch that will make this all happen: > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > > a long and bumpy road to get to this point and I would like to say > > > thanks to everyone involved so far and everyone that read the whole > > > email, please let me know your thoughts. > > thanks for working on this. > > > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > [2] > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > > > [3] > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > > > [4] > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > > > [5] > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > Cheers, > > > Lucas > > ++ Thank you indeed for working diligently on this important change. 
> > Please do note that devstack, and the base job that you're modifying > is used by many other projects besides the ones that you have > enumerated in the subject line. > I suggest using [all] as a better subject line indicator to get the > attention of folks like me who have filters based on the subject line. > Also, the network substrate is important for the project I help > maintain: Manila, which provides shared file systems over a network - > so I followed your lead and submitted a dependent patch. I hope to > reach out to you in case we see some breakages: > https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Mar 3 04:01:24 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 03 Mar 2021 04:01:24 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > Hi, > > Thanks for all your hard work on this. > > I'm wondering is there any doc proposed for devstack to tell people who are > not interested in OVN to keep the current devstack behaviour? I have a > feeling that using OVN as default Neutron driver would break the CI jobs > for some projects like Octavia, Trove, etc. which rely on ovs port for the > set up. Well, OVN is just an alternative controller for OVS: it replaces the Neutron L2 agent, it does not replace OVS itself. Projects like Octavia or Trove that deploy load balancers or DBs in VMs should not be able to observe a difference. They may still want to deploy ML2/OVS, but unless they are doing something directly on the host, like adding ports directly to OVS because they are not using VMs, they should not be aware of this change. My reticence to change at this point in the cycle for Nova is motivated mainly by gate stability. The active contributors to Nova don't really have experience with OVN and how to debug it in the gate. We are also getting close to FF, when we tend to get a lot of patches and gate stability becomes even more important, so adding a new variable to that mix by swapping out the network backend between now and the Wallaby release seems problematic. In any case, swapping back should ideally be as simple as setting Q_AGENT=openvswitch. I have not looked at the patches, but to swap between OVS and Linux bridge you just define Q_AGENT=linuxbridge for the most part, so I'm expecting that we would just enable Q_AGENT=ovn or something similar for OVN. I know OVN used to have its own devstack plugin, but if we are making it the default that means it needs to be supported natively in devstack, not as a plugin, so using Q_AGENT=ovn to enable it and making that the new default would seem to be the simplest way to manage it. But yes, documenting how to enable the old behavior is still important. The example Nova patch shows how to hardcode the old behavior: https://review.opendev.org/c/openstack/nova/+/776944/5/.zuul.yaml#234 It seems to be doing this a little more explicitly than I would like, but it's not that hard. I would suggest adding a second sample local.conf in devstack for standard OVS deployments.
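Roughly, something like the following in local.conf should keep today's ML2/OVS behaviour. This is an untested sketch: the variable and service names assume devstack's existing ML2/OVS support, so double-check them against the devstack tree before relying on them.

[[local|localrc]]
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_TENANT_NETWORK_TYPE=vxlan
# re-enable the classic neutron agents that an OVN default would not run
enable_service q-agt q-dhcp q-l3 q-meta
# and make sure the OVN services stay off
disable_service ovn-controller ovn-northd q-ovn-metadata-agent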
> > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi > wrote: > > > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > > Hi all, > > > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > > switching the default backend in Devstack to OVN. A lot of work has > > > > been done since, from porting the OVN devstack module to the DevStack > > > > tree, refactoring the DevStack module to install OVN from distro > > > > packages, implementing features to close the parity gap with ML2/OVS, > > > > fixing issues with tests and distros, etc... > > > > > > > > We are now very close to being able to make the switch and we've > > > > thought about sending this email to the broader community to raise > > > > awareness about this change as well as bring more attention to the > > > > patches that are current on review. > > > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > > > going to be developed and maintained by the upstream Neutron > > > > community. > > > can we ensure that this does not happen until the xena release. > > > in generall i think its ok to change the default but not this late in > > the cycle. > > > i would also like to ensure we keep at least one non ovn based multi > > node job in > > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is > > merged and possible after. > > > right now the event/neutorn interaction is not the same during move > > operations. > > > > > > > > Below is a e per project explanation with relevant links and issues of > > > > where we stand with this work right now: > > > > > > > > * Keystone: > > > > > > > > Everything should be good for Keystone, the gate is happy with the > > > > changes. Here is the test patch: > > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > > > * Glance: > > > > > > > > Everything should be good for Glace, the gate is happy with the > > > > changes. Here is the test patch: > > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > > > * Swift: > > > > > > > > Everything should be good for Swift, the gate is happy with the > > > > changes. Here is the test patch: > > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > > > * Ironic: > > > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > > > Here is the test patch: > > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > > > * Cinder: > > > > > > > > Cinder is almost complete. There's one test failure in the > > > > "tempest-slow-py3" job run on the > > > > "test_port_security_macspoofing_port" test. > > > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > > been fixed upstream [2] and the fix has been backported down to the > > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > > from packages we are currently waiting for this fix to be included in > > > > the packages for Ubuntu Focal (it's based on OVN 20.03). 
I already > > > > contacted the package maintainer which has been very supportive of > > > > this work and will work on the package update, but he maintain a > > > > handful of backports in that package which is not yet included in OVN > > > > 20.03 upstream and he's now working with the core OVN community [4] to > > > > include it first in the branch and then create a new package for it. > > > > Hopefully this will happen soon. > > > > > > > > But for now we have a few options moving on with this issue: > > > > > > > > 1- Wait for the new package version > > > > 2- Mark the test as unstable until we get the new package version > > > > 3- Compile OVN from source instead of installing it from packages > > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > > i dont think we should default to ovn untill a souce build is not > > required. > > > compiling form souce while not supper expensice still adds time to the > > job > > > execution and im not sure we should be paying that cost on every > > devstack job run. > > > > > > we could maybe compile it once and bake the package into the image or > > host it on a mirror > > > but i think we should avoid this option if we have alternitives. > > > > > > > > What do you think about it ? > > > > > > > > Here is the test patch for Cinder: > > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > > > * Nova: > > > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > > which is not part of an ML2/OVN deployment. The patch changes the code > > > > to check if that system unit exists before trying to stop it. > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > > which can be removed one release cycle after we switch DevStack to > > > > ML2/OVN. Grenade will test updating from the release version to the > > > > master branch but, since the default of the released version is not > > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > > > is not supported. > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > > > therefore we are temporarily enabling ML2/OVS for this job until that > > > > feature lands in core OVN. > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > > > because he feels like a change in the default network driver a few > > > > weeks prior to the upstream code freeze can be concerning. We do not > > > > know yet precisely when we are changing the default due to the current > > > > patches we need to get merged but, if this is a shared feeling among > > > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > > > until we get a new release in OpenStack. > > > yep this is still my view. > > > i would suggest we do the work required in the repos but not merge it > > until the xena release > > > is open. 
thats technically at RC1 so march 25th > > > i think we can safely do the swich after that but i would not change the > > defualt in any project > > > before then. > > > > > > > > Here's the test patch for Nova: > > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > > > * DevStack: > > > > > > > > And this is the final patch that will make this all happen: > > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > > > a long and bumpy road to get to this point and I would like to say > > > > thanks to everyone involved so far and everyone that read the whole > > > > email, please let me know your thoughts. > > > thanks for working on this. > > > > > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > > [2] > > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > > > > [3] > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > > > > [4] > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > > > > [5] > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > > > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > > > Cheers, > > > > Lucas > > > > ++ Thank you indeed for working diligently on this important change. > > > > Please do note that devstack, and the base job that you're modifying > > is used by many other projects besides the ones that you have > > enumerated in the subject line. > > I suggest using [all] as a better subject line indicator to get the > > attention of folks like me who have filters based on the subject line. > > Also, the network substrate is important for the project I help > > maintain: Manila, which provides shared file systems over a network - > > so I followed your lead and submitted a dependent patch. I hope to > > reach out to you in case we see some breakages: > > https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 > > > > > > > > > > > > > > > > > > > From skaplons at redhat.com Wed Mar 3 07:32:31 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 03 Mar 2021 08:32:31 +0100 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <3222930.vunY0J1ypg@p1> Hi, Dnia środa, 3 marca 2021 05:01:24 CET Sean Mooney pisze: > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > Hi, > > > > Thanks for all your hard work on this. > > > > I'm wondering is there any doc proposed for devstack to tell people who > > are > > not interested in OVN to keep the current devstack behaviour? I have a > > feeling that using OVN as default Neutron driver would break the CI jobs > > for some projects like Octavia, Trove, etc. which rely on ovs port for the > > set up. You can look on how we set some of our jobs to use ML2/OVS: https:// review.opendev.org/c/openstack/neutron-tempest-plugin/+/749503/26/zuul.d/ master_jobs.yaml#121 > > well ovn is just an alternivie contoler for ovs. > so ovn replace the neutron l2 agent it does not replace ovs. > project like octavia or trove that deploy loadblances or dbs in vms should > not be able to observe a difference. 
they may still want to deploy ml2/ovs > but unless they are doing something directly on the host like adding port > directly to ovs because they are not using vms they should not be aware of > this change. > > my reticense to cahnge at this point in the cyle for nova is motiveated > maily by gate stablity. the active contibutes to nova dont really have > experince with ovn and how to debug it in the gate. we also are getting > close to FF when we tend to get a lot of patches and the gate stablity > become even more imporant so adding a new vaiabrly to that mix but swaping > out the networkbacked between now and the wallaby release seams > problematic. > > in any cases swapping back should ideallly be as simple as setting > Q_AGENT=openvswitch. > > i have not looked at the patches but to swap betwween ovs and linuxbidge you > just define Q_AGENT=linuxbridge for the most part so im expecting that we > would just enable Q_AGENT=ovn or somthing simialr for ovn. > > i know ovn used to have its own devstack plugin but if we are makeing it the > default that means it need to be support nativly in devstack not as a > plugin so useing Q_AGENT=ovn to enable it and make that the new default > would seam to be the simplest way to manage that. > > but yes documenting how to enabel the old behavior is still important > the example nova patch shows how to hardcode the old behavior > https://review.opendev.org/c/openstack/nova/+/776944/5/.zuul.yaml#234 > it seams to be doing this a little more explictly then i would like but its > not that hard. i would suggest adding a second sample loca.conf in devstack > for standard ovs deployments. > > --- > > Lingxian Kong > > Senior Cloud Engineer (Catalyst Cloud) > > Trove PTL (OpenStack) > > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi > > > > wrote: > > > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > > > Hi all, > > > > > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > > > switching the default backend in Devstack to OVN. A lot of work has > > > > > been done since, from porting the OVN devstack module to the > > > > > DevStack > > > > > tree, refactoring the DevStack module to install OVN from distro > > > > > packages, implementing features to close the parity gap with > > > > > ML2/OVS, > > > > > fixing issues with tests and distros, etc... > > > > > > > > > > We are now very close to being able to make the switch and we've > > > > > thought about sending this email to the broader community to raise > > > > > awareness about this change as well as bring more attention to the > > > > > patches that are current on review. > > > > > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > > > discontinued and/or not supported anymore. The ML2/OVS driver is > > > > > still > > > > > going to be developed and maintained by the upstream Neutron > > > > > community. > > > > > > > > can we ensure that this does not happen until the xena release. > > > > in generall i think its ok to change the default but not this late in > > > > > > the cycle. > > > > > > > i would also like to ensure we keep at least one non ovn based multi > > > > > > node job in > > > > > > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is > > > > > > merged and possible after. 
> > > > > > > right now the event/neutorn interaction is not the same during move > > > > > > operations. > > > > > > > > Below is a e per project explanation with relevant links and issues > > > > > of > > > > > where we stand with this work right now: > > > > > > > > > > * Keystone: > > > > > > > > > > Everything should be good for Keystone, the gate is happy with the > > > > > changes. Here is the test patch: > > > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > > > > > * Glance: > > > > > > > > > > Everything should be good for Glace, the gate is happy with the > > > > > changes. Here is the test patch: > > > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > > > > > * Swift: > > > > > > > > > > Everything should be good for Swift, the gate is happy with the > > > > > changes. Here is the test patch: > > > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > > > > > * Ironic: > > > > > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > > > ML2/OVS and everything is merged, so we should be good for Ironic > > > > > too. > > > > > Here is the test patch: > > > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > > > > > * Cinder: > > > > > > > > > > Cinder is almost complete. There's one test failure in the > > > > > "tempest-slow-py3" job run on the > > > > > "test_port_security_macspoofing_port" test. > > > > > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > > > been fixed upstream [2] and the fix has been backported down to the > > > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > > > from packages we are currently waiting for this fix to be included > > > > > in > > > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > > > contacted the package maintainer which has been very supportive of > > > > > this work and will work on the package update, but he maintain a > > > > > handful of backports in that package which is not yet included in > > > > > OVN > > > > > 20.03 upstream and he's now working with the core OVN community [4] > > > > > to > > > > > include it first in the branch and then create a new package for it. > > > > > Hopefully this will happen soon. > > > > > > > > > > But for now we have a few options moving on with this issue: > > > > > > > > > > 1- Wait for the new package version > > > > > 2- Mark the test as unstable until we get the new package version > > > > > 3- Compile OVN from source instead of installing it from packages > > > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > > > > > > > i dont think we should default to ovn untill a souce build is not > > > > > > required. > > > > > > > compiling form souce while not supper expensice still adds time to the > > > > > > job > > > > > > > execution and im not sure we should be paying that cost on every > > > > > > devstack job run. > > > > > > > we could maybe compile it once and bake the package into the image or > > > > > > host it on a mirror > > > > > > > but i think we should avoid this option if we have alternitives. > > > > > > > > > What do you think about it ? 
> > > > > > > > > > Here is the test patch for Cinder: > > > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > > > > > * Nova: > > > > > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: > > > > > Basically > > > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > > > which is not part of an ML2/OVN deployment. The patch changes the > > > > > code > > > > > to check if that system unit exists before trying to stop it. > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > > > which can be removed one release cycle after we switch DevStack to > > > > > ML2/OVN. Grenade will test updating from the release version to the > > > > > master branch but, since the default of the released version is not > > > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade > > > > > job > > > > > is not supported. > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > > > minimum bandwidth feature which is not yet supported by ML2/OVN > > > > > [5][6] > > > > > therefore we are temporarily enabling ML2/OVS for this job until > > > > > that > > > > > feature lands in core OVN. > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about > > > > > these > > > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for > > > > > now > > > > > because he feels like a change in the default network driver a few > > > > > weeks prior to the upstream code freeze can be concerning. We do not > > > > > know yet precisely when we are changing the default due to the > > > > > current > > > > > patches we need to get merged but, if this is a shared feeling among > > > > > the Nova community I can work on enabling ML2/OVS on all jobs in > > > > > Nova > > > > > until we get a new release in OpenStack. > > > > > > > > yep this is still my view. > > > > i would suggest we do the work required in the repos but not merge it > > > > > > until the xena release > > > > > > > is open. thats technically at RC1 so march 25th > > > > i think we can safely do the swich after that but i would not change > > > > the > > > > > > defualt in any project > > > > > > > before then. > > > > > > > > > Here's the test patch for Nova: > > > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > > > > > * DevStack: > > > > > > > > > > And this is the final patch that will make this all happen: > > > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's > > > > > been > > > > > a long and bumpy road to get to this point and I would like to say > > > > > thanks to everyone involved so far and everyone that read the whole > > > > > email, please let me know your thoughts. > > > > > > > > thanks for working on this. 
> > > > > > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > > > [2] > > > > > > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.47 > > > 3776-1-numans at ovn.org/> > > > > > > [3] > > > > > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab3 > > > 9380cb> > > > > > > [4] > > > > > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961. > > > html> > > > > > > [5] > > > > > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe > > > 76a9ee1/gate/post_test_hook.sh#L129-L143> > > > > > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > > > > > Cheers, > > > > > Lucas > > > > > > ++ Thank you indeed for working diligently on this important change. > > > > > > Please do note that devstack, and the base job that you're modifying > > > is used by many other projects besides the ones that you have > > > enumerated in the subject line. > > > I suggest using [all] as a better subject line indicator to get the > > > attention of folks like me who have filters based on the subject line. > > > Also, the network substrate is important for the project I help > > > maintain: Manila, which provides shared file systems over a network - > > > so I followed your lead and submitted a dependent patch. I hope to > > > reach out to you in case we see some breakages: > > > https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From syedammad83 at gmail.com Wed Mar 3 07:31:07 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Wed, 3 Mar 2021 12:31:07 +0500 Subject: [victoria][nova] Instance Live Migration Message-ID: Hi, I am trying to configure live migration of instance backed by lvm iscsi storage. I have done below mentioned config in nova.conf. [libvirt] virt_type = kvm volume_use_multipath = True live_migration_uri = qemu+ssh://nova@%s/system The live migration is being failed with below logs. 2021-03-03 07:15:51.917 706290 INFO nova.compute.manager [-] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Took 6.02 seconds for pre_live_migration on destination host kvm12-a1-khi01. 
2021-03-03 07:15:52.029 706290 ERROR nova.virt.libvirt.driver [-] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+ssh://nova at kvm12-a1-khi01/system: Cannot recv data: Host key verification failed.: Connection reset by peer: libvirt.libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+ssh://nova at kvm12-a1-khi01/system: Cannot recv data: Host key verification failed.: Connection reset by peer 2021-03-03 07:15:52.444 706290 ERROR nova.virt.libvirt.driver [-] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Migration operation has aborted 2021-03-03 07:15:52.489 706290 INFO nova.compute.manager [-] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Swapping old allocation on dict_keys(['3e5679c8-165e-4c35-9de5-ef0d3af0f9d8']) held by migration f519f768-5a62-4f17-9ac0-1cc2c814ec3a for instance 2021-03-03 07:15:55.927 706290 WARNING nova.compute.manager [req-116cf2ef-4720-421e-9aad-75451e3deafe b95c0ad0fb9848598988b5bff99ca55b abf41db7e6dc429086713f4b46cd3655 - default default] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Received unexpected event network-vif-unplugged-df7666d6-add3-4638-98e1-14db30122c3f for instance with vm_state active and task_state None. I am able to log in from host1 to host2 via ssh without a password. I have also disabled host key checking in the ssh config of the nova user (~/.ssh/config): Host * StrictHostKeyChecking no UserKnownHostsFile=/dev/null I am able to connect to host2 from host1 and vice versa via the virsh URI: nova at kvm10-a1-khi01:~$ virsh -c qemu+ssh://nova at kvm12-a1-khi01/system Welcome to virsh, the virtualization interactive terminal. Type: 'help' for help with commands 'quit' to quit virsh # exit Not sure where I am missing something. - Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Wed Mar 3 08:44:50 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 3 Mar 2021 09:44:50 +0100 Subject: anti-affinity: what're the mechanics? In-Reply-To: <371d077a5d2e99be4df344eceb25c43f@jots.org> References: <371d077a5d2e99be4df344eceb25c43f@jots.org> Message-ID: On Tue, Mar 2, 2021 at 9:44 PM Ken D'Ambrosio wrote: > Hey, all. Turns out we really need anti-affinity running on our (very > freaking old -- Juno) clouds. I'm trying to find docs that describe its > functionality, and am failing. If I enable it, and (say) have 10 > hypervisors, and 12 VMs to fire off, what happens when VM #11 goes to > fire? Does it fail, or does the scheduler just continue to at least > *try* to maintain as few possible on each hypervisor? > > No, it's a hard stop. To be clear, that means that no two instances *from the same instance group* (with an anti-affinity policy) will be placed on the same compute node. With your example, this means that if you have 10 compute nodes, an anti-affinity instance group can support up to 10 instances, because an eleventh instance created in the same group would get a NoValidHost. This being said, you can of course create more than 10 instances, provided they don't share the same group. As for what Sean said, soft-anti-affinity was a new feature that arrived in the Liberty timeframe. The difference with hard anti-affinity is that it's no longer a hard stop: you can have more than 10 instances within your group, it's just that the scheduler will try to spread them correctly (using weighers) between all your computes.
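If it helps, the workflow is roughly the following. The group name here is made up, and on a Juno-era cloud you would use the legacy nova client syntax, which from memory looks like this, so double-check it against your client version:

# nova.conf on the scheduler must include the filter, e.g.
# scheduler_default_filters = ...,ServerGroupAntiAffinityFilter

nova server-group-create my-anti-group anti-affinity
nova boot --image <image> --flavor <flavor> --hint group=<group-uuid> vm-1
nova boot --image <image> --flavor <flavor> --hint group=<group-uuid> vm-2

With 10 hypervisors, the 11th boot against that same group is the one that fails with NoValidHost.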
FWIW, I wouldn't address backporting the feature as it's not only providing soft-affiinity and soft-anti-affinity weighers but it also adds those policies into the os-servergroup API. You should rather think about upgrading to Liberty if you do really care about soft-affinity or not use the filter. There are other possibilities for spreading instances between computes (for example using aggregates) that don't give you NoValidHosts exceptions on boots. -Sylvain Thanks! > > -Ken > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Wed Mar 3 10:32:11 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 3 Mar 2021 10:32:11 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Mon, Mar 1, 2021 at 6:25 PM Sean Mooney wrote: > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > Hi all, > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > switching the default backend in Devstack to OVN. A lot of work has > > been done since, from porting the OVN devstack module to the DevStack > > tree, refactoring the DevStack module to install OVN from distro > > packages, implementing features to close the parity gap with ML2/OVS, > > fixing issues with tests and distros, etc... > > > > We are now very close to being able to make the switch and we've > > thought about sending this email to the broader community to raise > > awareness about this change as well as bring more attention to the > > patches that are current on review. > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > going to be developed and maintained by the upstream Neutron > > community. > can we ensure that this does not happen until the xena release. > in generall i think its ok to change the default but not this late in the cycle. > i would also like to ensure we keep at least one non ovn based multi node job in > nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. > right now the event/neutorn interaction is not the same during move operations. I think it's fair to wait for the new release cycle to start since we are just a few weeks away and then we can flip the default in DevStack. I will state this in the last DevStack patch and set the workflow -1 until then. That said, I also think that other patches could be merged before that, those are just adapting a few scripts to work with ML2/OVN and enabling ML2/OVS explicitly where it makes sense. That way, when time comes, we will just need to merge the DevStack patch. > > > > Below is a e per project explanation with relevant links and issues of > > where we stand with this work right now: > > > > * Keystone: > > > > Everything should be good for Keystone, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > * Glance: > > > > Everything should be good for Glace, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > * Swift: > > > > Everything should be good for Swift, the gate is happy with the > > changes. 
Here is the test patch: > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > * Ironic: > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > progress, we've changed most of the Ironic jobs to explicitly enable > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > Here is the test patch: > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > * Cinder: > > > > Cinder is almost complete. There's one test failure in the > > "tempest-slow-py3" job run on the > > "test_port_security_macspoofing_port" test. > > > > This failure is due to a bug in core OVN [1]. This bug has already > > been fixed upstream [2] and the fix has been backported down to the > > branch-20.03 [3] of the OVN project. However, since we install OVN > > from packages we are currently waiting for this fix to be included in > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > contacted the package maintainer which has been very supportive of > > this work and will work on the package update, but he maintain a > > handful of backports in that package which is not yet included in OVN > > 20.03 upstream and he's now working with the core OVN community [4] to > > include it first in the branch and then create a new package for it. > > Hopefully this will happen soon. > > > > But for now we have a few options moving on with this issue: > > > > 1- Wait for the new package version > > 2- Mark the test as unstable until we get the new package version > > 3- Compile OVN from source instead of installing it from packages > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > i dont think we should default to ovn untill a souce build is not required. > compiling form souce while not supper expensice still adds time to the job > execution and im not sure we should be paying that cost on every devstack job run. > > we could maybe compile it once and bake the package into the image or host it on a mirror > but i think we should avoid this option if we have alternitives. Since this patch https://review.opendev.org/c/openstack/devstack/+/763402 we no longer default to compiling OVN from source anymore, it's installed using the distro packages now. Yeah the alternatives are not straight forward, I was talking to some core OVN folks yesterday regarding the backports proposed by Canonical to the 20.03 branch and they seem to be fine with it, it needs more reviews since there are around ~20 patches being backported there. But I hope they are going to be looking into it and we should get a new OVN package for Ubuntu Focal soon. > > > > What do you think about it ? > > > > Here is the test patch for Cinder: > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > * Nova: > > > > There are a few patches waiting for review for Nova, which are: > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > the scripts were trying to stop the Neutron agent (q-agt) process > > which is not part of an ML2/OVN deployment. The patch changes the code > > to check if that system unit exists before trying to stop it. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > which can be removed one release cycle after we switch DevStack to > > ML2/OVN. 
Grenade will test updating from the release version to the > > master branch but, since the default of the released version is not > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > is not supported. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > therefore we are temporarily enabling ML2/OVS for this job until that > > feature lands in core OVN. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > because he feels like a change in the default network driver a few > > weeks prior to the upstream code freeze can be concerning. We do not > > know yet precisely when we are changing the default due to the current > > patches we need to get merged but, if this is a shared feeling among > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > until we get a new release in OpenStack. > yep this is still my view. > i would suggest we do the work required in the repos but not merge it until the xena release > is open. thats technically at RC1 so march 25th > i think we can safely do the swich after that but i would not change the defualt in any project > before then. Yeah, no problem waiting from my side, but hoping we can keep reviewing the rest of the patches until then. > > > > Here's the test patch for Nova: > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > * DevStack: > > > > And this is the final patch that will make this all happen: > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > a long and bumpy road to get to this point and I would like to say > > thanks to everyone involved so far and everyone that read the whole > > email, please let me know your thoughts. > thanks for working on this. Thanks for the inputs > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > Cheers, > > Lucas > > > > > From lucasagomes at gmail.com Wed Mar 3 10:35:09 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 3 Mar 2021 10:35:09 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Wed, Mar 3, 2021 at 12:57 AM Goutham Pacha Ravi wrote: > > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > Hi all, > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > switching the default backend in Devstack to OVN. 
A lot of work has > > > been done since, from porting the OVN devstack module to the DevStack > > > tree, refactoring the DevStack module to install OVN from distro > > > packages, implementing features to close the parity gap with ML2/OVS, > > > fixing issues with tests and distros, etc... > > > > > > We are now very close to being able to make the switch and we've > > > thought about sending this email to the broader community to raise > > > awareness about this change as well as bring more attention to the > > > patches that are current on review. > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > > going to be developed and maintained by the upstream Neutron > > > community. > > can we ensure that this does not happen until the xena release. > > in generall i think its ok to change the default but not this late in the cycle. > > i would also like to ensure we keep at least one non ovn based multi node job in > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. > > right now the event/neutorn interaction is not the same during move operations. > > > > > > Below is a e per project explanation with relevant links and issues of > > > where we stand with this work right now: > > > > > > * Keystone: > > > > > > Everything should be good for Keystone, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > * Glance: > > > > > > Everything should be good for Glace, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > * Swift: > > > > > > Everything should be good for Swift, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > * Ironic: > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > > Here is the test patch: > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > * Cinder: > > > > > > Cinder is almost complete. There's one test failure in the > > > "tempest-slow-py3" job run on the > > > "test_port_security_macspoofing_port" test. > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > been fixed upstream [2] and the fix has been backported down to the > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > from packages we are currently waiting for this fix to be included in > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > contacted the package maintainer which has been very supportive of > > > this work and will work on the package update, but he maintain a > > > handful of backports in that package which is not yet included in OVN > > > 20.03 upstream and he's now working with the core OVN community [4] to > > > include it first in the branch and then create a new package for it. > > > Hopefully this will happen soon. 
> > > > > > But for now we have a few options moving on with this issue: > > > > > > 1- Wait for the new package version > > > 2- Mark the test as unstable until we get the new package version > > > 3- Compile OVN from source instead of installing it from packages > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > i dont think we should default to ovn untill a souce build is not required. > > compiling form souce while not supper expensice still adds time to the job > > execution and im not sure we should be paying that cost on every devstack job run. > > > > we could maybe compile it once and bake the package into the image or host it on a mirror > > but i think we should avoid this option if we have alternitives. > > > > > > What do you think about it ? > > > > > > Here is the test patch for Cinder: > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > * Nova: > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > which is not part of an ML2/OVN deployment. The patch changes the code > > > to check if that system unit exists before trying to stop it. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > which can be removed one release cycle after we switch DevStack to > > > ML2/OVN. Grenade will test updating from the release version to the > > > master branch but, since the default of the released version is not > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > > is not supported. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > > therefore we are temporarily enabling ML2/OVS for this job until that > > > feature lands in core OVN. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > > because he feels like a change in the default network driver a few > > > weeks prior to the upstream code freeze can be concerning. We do not > > > know yet precisely when we are changing the default due to the current > > > patches we need to get merged but, if this is a shared feeling among > > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > > until we get a new release in OpenStack. > > yep this is still my view. > > i would suggest we do the work required in the repos but not merge it until the xena release > > is open. thats technically at RC1 so march 25th > > i think we can safely do the swich after that but i would not change the defualt in any project > > before then. > > > > > > Here's the test patch for Nova: > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > * DevStack: > > > > > > And this is the final patch that will make this all happen: > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. 
It's been > > > a long and bumpy road to get to this point and I would like to say > > > thanks to everyone involved so far and everyone that read the whole > > > email, please let me know your thoughts. > > thanks for working on this. > > > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > > > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > > > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > > > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > Cheers, > > > Lucas > > ++ Thank you indeed for working diligently on this important change. > > Please do note that devstack, and the base job that you're modifying > is used by many other projects besides the ones that you have > enumerated in the subject line. > I suggest using [all] as a better subject line indicator to get the > attention of folks like me who have filters based on the subject line. > Also, the network substrate is important for the project I help > maintain: Manila, which provides shared file systems over a network - > so I followed your lead and submitted a dependent patch. I hope to > reach out to you in case we see some breakages: > https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 > Thanks for the suggestion, you are right the [ALL] would make sense. I will try to raise more awareness as possible on this change in the default. Also thanks for proposing the test patch for Manilla, I see the gate is happy there! But yeah, feel free to reach out to me if anything breaks and I will glad looking into it. > > > > > > > > > > From lucasagomes at gmail.com Wed Mar 3 10:42:52 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 3 Mar 2021 10:42:52 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: <3222930.vunY0J1ypg@p1> References: <3222930.vunY0J1ypg@p1> Message-ID: On Wed, Mar 3, 2021 at 7:35 AM Slawek Kaplonski wrote: > > Hi, > > Dnia środa, 3 marca 2021 05:01:24 CET Sean Mooney pisze: > > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > > Hi, > > > > > > Thanks for all your hard work on this. > > > > > > I'm wondering is there any doc proposed for devstack to tell people who > > > are > > > not interested in OVN to keep the current devstack behaviour? I have a > > > feeling that using OVN as default Neutron driver would break the CI jobs > > > for some projects like Octavia, Trove, etc. which rely on ovs port for the > > > set up. > > You can look on how we set some of our jobs to use ML2/OVS: https:// > review.opendev.org/c/openstack/neutron-tempest-plugin/+/749503/26/zuul.d/ > master_jobs.yaml#121 Thanks Lingxian and Slaweq for the suggestion/replies. Apart from what Slaweq pointed out, the DevStack documentation have a sample config file (https://docs.openstack.org/devstack/latest/#create-a-local-conf) as part of the steps to deploy it, I was thinking I could propose another sample file that would enable ML2/OVS for the deployment and add a note in that doc; if you think it's worth it. 
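For CI jobs the equivalent knob is to override the same settings in the job definition, which is roughly what the Nova and neutron-tempest-plugin patches mentioned above do. A sketch only (the job name is made up, and the exact service list to toggle depends on the job):

- job:
    name: my-tempest-job-ovs
    parent: devstack-tempest
    vars:
      devstack_localrc:
        Q_AGENT: openvswitch
        Q_ML2_PLUGIN_MECHANISM_DRIVERS: openvswitch
      devstack_services:
        # turn the OVN services off and the classic agents back on
        ovn-controller: false
        ovn-northd: false
        q-ovn-metadata-agent: false
        q-agt: true
        q-dhcp: true
        q-l3: true
        q-meta: true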
Also what gives us more confidence regarding projects like Octavia is that TripleO already uses ML2/OVN as the default driver for their overcloud, so many of these projects are already being tested/used with OVN. > > > > > well ovn is just an alternivie contoler for ovs. > > so ovn replace the neutron l2 agent it does not replace ovs. > > project like octavia or trove that deploy loadblances or dbs in vms should > > not be able to observe a difference. they may still want to deploy ml2/ovs > > but unless they are doing something directly on the host like adding port > > directly to ovs because they are not using vms they should not be aware of > > this change. > > > > my reticense to cahnge at this point in the cyle for nova is motiveated > > maily by gate stablity. the active contibutes to nova dont really have > > experince with ovn and how to debug it in the gate. we also are getting > > close to FF when we tend to get a lot of patches and the gate stablity > > become even more imporant so adding a new vaiabrly to that mix but swaping > > out the networkbacked between now and the wallaby release seams > > problematic. > > > > in any cases swapping back should ideallly be as simple as setting > > Q_AGENT=openvswitch. > > > > i have not looked at the patches but to swap betwween ovs and linuxbidge you > > just define Q_AGENT=linuxbridge for the most part so im expecting that we > > would just enable Q_AGENT=ovn or somthing simialr for ovn. > > > > i know ovn used to have its own devstack plugin but if we are makeing it the > > default that means it need to be support nativly in devstack not as a > > plugin so useing Q_AGENT=ovn to enable it and make that the new default > > would seam to be the simplest way to manage that. > > > > but yes documenting how to enabel the old behavior is still important > > the example nova patch shows how to hardcode the old behavior > > https://review.opendev.org/c/openstack/nova/+/776944/5/.zuul.yaml#234 > > it seams to be doing this a little more explictly then i would like but its > > not that hard. i would suggest adding a second sample loca.conf in devstack > > for standard ovs deployments. > > > --- > > > Lingxian Kong > > > Senior Cloud Engineer (Catalyst Cloud) > > > Trove PTL (OpenStack) > > > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > > > > > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi > > > > > > wrote: > > > > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > > > > Hi all, > > > > > > > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > > > > switching the default backend in Devstack to OVN. A lot of work has > > > > > > been done since, from porting the OVN devstack module to the > > > > > > DevStack > > > > > > tree, refactoring the DevStack module to install OVN from distro > > > > > > packages, implementing features to close the parity gap with > > > > > > ML2/OVS, > > > > > > fixing issues with tests and distros, etc... > > > > > > > > > > > > We are now very close to being able to make the switch and we've > > > > > > thought about sending this email to the broader community to raise > > > > > > awareness about this change as well as bring more attention to the > > > > > > patches that are current on review. > > > > > > > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > > > > discontinued and/or not supported anymore. 
The ML2/OVS driver is > > > > > > still > > > > > > going to be developed and maintained by the upstream Neutron > > > > > > community. > > > > > > > > > > can we ensure that this does not happen until the xena release. > > > > > in generall i think its ok to change the default but not this late in > > > > > > > > the cycle. > > > > > > > > > i would also like to ensure we keep at least one non ovn based multi > > > > > > > > node job in > > > > > > > > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is > > > > > > > > merged and possible after. > > > > > > > > > right now the event/neutorn interaction is not the same during move > > > > > > > > operations. > > > > > > > > > > Below is a e per project explanation with relevant links and issues > > > > > > of > > > > > > where we stand with this work right now: > > > > > > > > > > > > * Keystone: > > > > > > > > > > > > Everything should be good for Keystone, the gate is happy with the > > > > > > changes. Here is the test patch: > > > > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > > > > > > > * Glance: > > > > > > > > > > > > Everything should be good for Glace, the gate is happy with the > > > > > > changes. Here is the test patch: > > > > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > > > > > > > * Swift: > > > > > > > > > > > > Everything should be good for Swift, the gate is happy with the > > > > > > changes. Here is the test patch: > > > > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > > > > > > > * Ironic: > > > > > > > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > > > > ML2/OVS and everything is merged, so we should be good for Ironic > > > > > > too. > > > > > > Here is the test patch: > > > > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > > > > > > > * Cinder: > > > > > > > > > > > > Cinder is almost complete. There's one test failure in the > > > > > > "tempest-slow-py3" job run on the > > > > > > "test_port_security_macspoofing_port" test. > > > > > > > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > > > > been fixed upstream [2] and the fix has been backported down to the > > > > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > > > > from packages we are currently waiting for this fix to be included > > > > > > in > > > > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > > > > contacted the package maintainer which has been very supportive of > > > > > > this work and will work on the package update, but he maintain a > > > > > > handful of backports in that package which is not yet included in > > > > > > OVN > > > > > > 20.03 upstream and he's now working with the core OVN community [4] > > > > > > to > > > > > > include it first in the branch and then create a new package for it. > > > > > > Hopefully this will happen soon. > > > > > > > > > > > > But for now we have a few options moving on with this issue: > > > > > > > > > > > > 1- Wait for the new package version > > > > > > 2- Mark the test as unstable until we get the new package version > > > > > > 3- Compile OVN from source instead of installing it from packages > > > > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > > > > > > > > > i dont think we should default to ovn untill a souce build is not > > > > > > > > required. 
> > > > > > > > > compiling form souce while not supper expensice still adds time to the > > > > > > > > job > > > > > > > > > execution and im not sure we should be paying that cost on every > > > > > > > > devstack job run. > > > > > > > > > we could maybe compile it once and bake the package into the image or > > > > > > > > host it on a mirror > > > > > > > > > but i think we should avoid this option if we have alternitives. > > > > > > > > > > > What do you think about it ? > > > > > > > > > > > > Here is the test patch for Cinder: > > > > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > > > > > > > * Nova: > > > > > > > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: > > > > > > Basically > > > > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > > > > which is not part of an ML2/OVN deployment. The patch changes the > > > > > > code > > > > > > to check if that system unit exists before trying to stop it. > > > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > > > > which can be removed one release cycle after we switch DevStack to > > > > > > ML2/OVN. Grenade will test updating from the release version to the > > > > > > master branch but, since the default of the released version is not > > > > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade > > > > > > job > > > > > > is not supported. > > > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > > > > minimum bandwidth feature which is not yet supported by ML2/OVN > > > > > > [5][6] > > > > > > therefore we are temporarily enabling ML2/OVS for this job until > > > > > > that > > > > > > feature lands in core OVN. > > > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about > > > > > > these > > > > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for > > > > > > now > > > > > > because he feels like a change in the default network driver a few > > > > > > weeks prior to the upstream code freeze can be concerning. We do not > > > > > > know yet precisely when we are changing the default due to the > > > > > > current > > > > > > patches we need to get merged but, if this is a shared feeling among > > > > > > the Nova community I can work on enabling ML2/OVS on all jobs in > > > > > > Nova > > > > > > until we get a new release in OpenStack. > > > > > > > > > > yep this is still my view. > > > > > i would suggest we do the work required in the repos but not merge it > > > > > > > > until the xena release > > > > > > > > > is open. thats technically at RC1 so march 25th > > > > > i think we can safely do the swich after that but i would not change > > > > > the > > > > > > > > defualt in any project > > > > > > > > > before then. 
> > > > > > > > > > > Here's the test patch for Nova: > > > > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > > > > > > > * DevStack: > > > > > > > > > > > > And this is the final patch that will make this all happen: > > > > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's > > > > > > been > > > > > > a long and bumpy road to get to this point and I would like to say > > > > > > thanks to everyone involved so far and everyone that read the whole > > > > > > email, please let me know your thoughts. > > > > > > > > > > thanks for working on this. > > > > > > > > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > > > > [2] > > > > > > > > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.47 > > > > 3776-1-numans at ovn.org/> > > > > > > > [3] > > > > > > > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab3 > > > > 9380cb> > > > > > > > [4] > > > > > > > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961. > > > > html> > > > > > > > [5] > > > > > > > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe > > > > 76a9ee1/gate/post_test_hook.sh#L129-L143> > > > > > > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > > > > > > > Cheers, > > > > > > Lucas > > > > > > > > ++ Thank you indeed for working diligently on this important change. > > > > > > > > Please do note that devstack, and the base job that you're modifying > > > > is used by many other projects besides the ones that you have > > > > enumerated in the subject line. > > > > I suggest using [all] as a better subject line indicator to get the > > > > attention of folks like me who have filters based on the subject line. > > > > Also, the network substrate is important for the project I help > > > > maintain: Manila, which provides shared file systems over a network - > > > > so I followed your lead and submitted a dependent patch. I hope to > > > > reach out to you in case we see some breakages: > > > > https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From lyarwood at redhat.com Wed Mar 3 10:46:52 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 3 Mar 2021 10:46:52 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? Message-ID: Hello all, I recently landed a fix in Cirros [1] to allow it to be used by the nova-next job when it is configured to use the q35 machine type [2], something that we would like to make the default sometime in the future. With no release of Cirros on the horizon I was wondering if I could instead host a build of the image somewhere for use solely by the nova-next job? If this isn't possible I'll go back to the Cirros folks upstream and ask for a release but given my change is the only one to land there in almost a year I wanted to ask about this approach first. 
Many thanks in advance, Lee [1] https://github.com/cirros-dev/cirros/pull/65 [2] https://review.opendev.org/c/openstack/nova/+/708701 From mbultel at redhat.com Wed Mar 3 10:49:50 2021 From: mbultel at redhat.com (Mathieu Bultel) Date: Wed, 3 Mar 2021 11:49:50 +0100 Subject: [TripleO][Validation] Validation CLI simplification Message-ID: Hi TripleO Folks, I'm raising this topic to the ML because it appears we have some divergence regarding some design around the way the Validations should be used with and without TripleO and I wanted to have a larger audience, in particular PTL and core thoughts around this topic. The current situation is: We have an openstack tripleo validator set of sub commands to handle Validation (run, list ...). The CLI validation is taking several parameters as an entry point and in particular the stack/plan, Openstack authentication and static inventory file. By asking the stack/plan name, the CLI is trying to verify and understand if the plan or the stack is valid, if the Overcloud exists somewhere in the cloud before passing that to the tripleo-ansible-inventory script and trying to generate a static inventory file in regard to what --plan or stack has been passed. The code is mainly here: [1]. This behavior implies several constraints: * Validation CLI needs Openstack authentication in order to do those checks * It introduces some complexity in the Validation code part: querying Heat to get the plan name to be sure the name provided is correct, get the status of the stack... In case of Standalone deployment, it adds more complexity then. * This code is only valid for "standard" deployments and usage meaning it doesn't work for Standalone, for some Upgrade and FFU stages and needs to be bypassed for pre-undercloud deployment. * We hit several blockers around this part of code. My proposal is the following: Since we are thinking of the future of Validation and we want something more robust, stronger, simpler, usable and efficient, I propose to get rid of the plan/stack and authentication functionalities in the Validation code, and only ask for a valid inventory provided by the user. I propose as well to create a new entry point in the TripleO CLI to generate a static inventory such as: openstack tripleo inventory generate --output-file my-inv.yaml and then: openstack tripleo validator run --validation my-validation --inventory my-inv.yaml By doing that, I think we gain a lot in simplification, it's more robust, and Validation will only do what it aims for: wrapp Ansible execution to provide better logging information and history. The main concerns about this approach is that the user will have to provide a valid inventory to the Validation CLI. I understand the point of view of getting something fully autonomous, and the way of just kicking *one* command and the Validation can be *magically* executed against your cloud, but I think the less complex the Validation code is, the more robust, stable and usable it will be. Deferring a specific entry point for the inventory, which is a key part of post deployment action, seems something more clear and robust as well. This part of code could be shared and used for any other usages instead of calling the inventory script stored into tripleo-validations. It could then use the tripleo-common inventory library directly with tripleoclient, instead of calling from client -> tripleo-validations/scripts -> query tripleo-common inventory library. 
I know it changes a little bit the usage (adding one command line in the execution process for getting a valid inventory) but it's going in a less buggy and racy direction. And the inventory should be generated only once, or at least at any big major cloud change. So, I'm glad to get your thoughts on that topic and your overall views around this topic. Thanks, Mathieu [1] https://github.com/openstack/tripleo-validations/blob/master/tripleo_validations/tripleo_validator.py#L338-L382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Wed Mar 3 10:50:59 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 3 Mar 2021 23:50:59 +1300 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney wrote: > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > Hi, > > > > Thanks for all your hard work on this. > > > > I'm wondering is there any doc proposed for devstack to tell people who > are > > not interested in OVN to keep the current devstack behaviour? I have a > > feeling that using OVN as default Neutron driver would break the CI jobs > > for some projects like Octavia, Trove, etc. which rely on ovs port for > the > > set up. > > well ovn is just an alternivie contoler for ovs. > so ovn replace the neutron l2 agent it does not replace ovs. > project like octavia or trove that deploy loadblances or dbs in vms should > not be able to observe a difference. > they may still want to deploy ml2/ovs but unless they are doing something > directly on the host like adding port > directly to ovs because they are not using vms they should not be aware of > this change. > Yes, they are. Please see https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 as an example for Octavia. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Wed Mar 3 11:21:03 2021 From: ykarel at redhat.com (Yatin Karel) Date: Wed, 3 Mar 2021 16:51:03 +0530 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20210302150451.ksnug53wzup747t4@yuggoth.org> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> <20210302150451.ksnug53wzup747t4@yuggoth.org> Message-ID: Hi, On Tue, Mar 2, 2021 at 8:40 PM Jeremy Stanley wrote: > > On 2021-03-02 19:58:19 +0530 (+0530), Yatin Karel wrote: > [...] > > is it still necessary to abandon all open reviews? > [...] > > Gerrit will not allow deletion of a branch if there are any changes > still open for it. > -- Ok wasn't aware of it, Thanks for sharing. We have cleaned up all open reviews, please proceed with branch deletion. > Jeremy Stanley Thanks and Regards Yatin Karel From goody698 at gmail.com Wed Mar 3 12:03:44 2021 From: goody698 at gmail.com (Heib, Mohammad (Nokia - IL/Kfar Sava)) Date: Wed, 3 Mar 2021 14:03:44 +0200 Subject: Train/SR-IOV direct port mode live-migration - question Message-ID: <78875ebc-6556-c63b-66a8-68850b748785@gmail.com> Hi All, we recently moved to openstack Train and according to the version release notes the SR-IOV direct port live-migration support was added to this version. 
so we tried to do some live-migration to vm with two SR-IOV direct ports attached to bond interface on the vm (no vSwitch interface or a indirect only direct ports) but seems that we have an issue with maintaining the network connectivity during and after the live-migration, according to [1] i see that in order to maintain network connectivity during the migration we have to add one indirect or vswitch port to the bond and this interface will carry the traffic during the migration and once the migration completed the direct port will back to be the primary slave on the target VM bond. my question is if we don't add the indirect port we will lose the connectivity during the migration which makes sense to me, but why the connectivity does not back after the migration completed successfully, and why the bond master boot up with no slaves on the target VM? according to some documents and blueprints that I found in google (including [2],[3]) the guest os will receive a virtual hotplug add even and the bond master will enslave those devices and connectivity will back which is not the case here. so I wondering maybe I need to add some scripts to handle those events (if so where to add them) or some network flags to the ifcfg-* files or i need to use a specific guest os? [1] : https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/libvirt-neutron-sriov-livemigration.html [2] :https://www.researchgate.net/publication/228722278_Live_migration_with_pass-through_device_for_Linux_VM [3] :https://openstack.nimeyo.com/72653/openstack-nova-neutron-live-migration-with-direct-passthru *bond master ifcfg file:* DEVICE=bond1 BONDING_OPTS=mode=active-backup HOTPLUG=yes TYPE=Bond BONDING_MASTER=yes BOOTPROTO=none NAME=bond1 ONBOOT=yes *slaves ifcfg files:* TYPE=Ethernet DEVICE=eth0 HOTPLUG=no ONBOOT=no MASTER=bond1 SLAVE=yes BOOTPROTO=none NM_CONTROLLED=no *guest OS:* CentOS Linux release 7.7.1908 (Core) * * */Thanks in advance for any help :)/,* -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Wed Mar 3 12:45:32 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 3 Mar 2021 14:45:32 +0200 Subject: [TripleO][Validation] Validation CLI simplification In-Reply-To: References: Message-ID: On Wed, Mar 3, 2021 at 12:51 PM Mathieu Bultel wrote: > Hi TripleO Folks, > > I'm raising this topic to the ML because it appears we have some > divergence regarding some design around the way the Validations should be > used with and without TripleO and I wanted to have a larger audience, in > particular PTL and core thoughts around this topic. > > The current situation is: > We have an openstack tripleo validator set of sub commands to handle > Validation (run, list ...). > The CLI validation is taking several parameters as an entry point and in > particular the stack/plan, Openstack authentication and static inventory > file. > > By asking the stack/plan name, the CLI is trying to verify and understand > if the plan or the stack is valid, if the Overcloud exists somewhere in the > cloud before passing that to the tripleo-ansible-inventory script and > trying to generate a static inventory file in regard to what --plan or > stack has been passed. > Sorry if silly question but, can't we just make 'validate the stack status' as one of the validations? In fact you already have something like that there https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/roles/stack_health/tasks/main.yml#L2 . 
Then only this validation will require the stack name passed in instead of on every validation run. BTW as an aside we should probably remove 'plan' from that code altogether given the recent 'remove swift and overcloud plan' work from ramishra/cloudnull and co @ https://review.opendev.org/q/topic:%22env_merging%22+(status:open%20OR%20status:merged) > The code is mainly here: [1]. > > This behavior implies several constraints: > * Validation CLI needs Openstack authentication in order to do those checks > * It introduces some complexity in the Validation code part: querying Heat > to get the plan name to be sure the name provided is correct, get the > status of the stack... In case of Standalone deployment, it adds more > complexity then. > * This code is only valid for "standard" deployments and usage meaning it > doesn't work for Standalone, for some Upgrade and FFU stages and needs to > be bypassed for pre-undercloud deployment. > * We hit several blockers around this part of code. > > My proposal is the following: > > Since we are thinking of the future of Validation and we want something > more robust, stronger, simpler, usable and efficient, I propose to get rid > of the plan/stack and authentication functionalities in the Validation > code, and only ask for a valid inventory provided by the user. > I propose as well to create a new entry point in the TripleO CLI to > generate a static inventory such as: > openstack tripleo inventory generate --output-file my-inv.yaml > and then: > openstack tripleo validator run --validation my-validation --inventory > my-inv.yaml > > By doing that, I think we gain a lot in simplification, it's more robust, > and Validation will only do what it aims for: wrapp Ansible execution to > provide better logging information and history. > > The main concerns about this approach is that the user will have to > provide a valid inventory to the Validation CLI. > I understand the point of view of getting something fully autonomous, and > the way of just kicking *one* command and the Validation can be *magically* > executed against your cloud, but I think the less complex the Validation > code is, the more robust, stable and usable it will be. > > Deferring a specific entry point for the inventory, which is a key part of > post deployment action, seems something more clear and robust as well. > This part of code could be shared and used for any other usages instead of > calling the inventory script stored into tripleo-validations. It could then > use the tripleo-common inventory library directly with tripleoclient, > instead of calling from client -> tripleo-validations/scripts -> query > tripleo-common inventory library. > > I know it changes a little bit the usage (adding one command line in the > execution process for getting a valid inventory) but it's going in a less > buggy and racy direction. > And the inventory should be generated only once, or at least at any big > major cloud change. > > So, I'm glad to get your thoughts on that topic and your overall views > around this topic. > The proposal sounds sane to me, but just to be clear by "authentication functionalities" are you referring specifically to the '--ssh-user' argument ( https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/tripleo_validations/tripleo_validator.py#L243)? i.e. we will already have that in the generated static inventory so no need to have in on the CLI? 
If the only cost is that we have to have an extra step for generating the inventory then IMO it is worth doing. I would however be interested to hear from those that are objecting to the proposal about why it is a bad idea ;) since you said there has been a divergence in opinions over the design. regards, marios > > >> >> Thanks, >> Mathieu >> >> [1] >> https://github.com/openstack/tripleo-validations/blob/master/tripleo_validations/tripleo_validator.py#L338-L382 >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Mar 3 12:52:40 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 03 Mar 2021 12:52:40 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: References: Message-ID: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> On Wed, 2021-03-03 at 10:46 +0000, Lee Yarwood wrote: > Hello all, > > I recently landed a fix in Cirros [1] to allow it to be used by the > nova-next job when it is configured to use the q35 machine type [2], > something that we would like to make the default sometime in the > future. > > With no release of Cirros on the horizon I was wondering if I could > instead host a build of the image somewhere for use solely by the > nova-next job? > > If this isn't possible I'll go back to the Cirros folks upstream and > ask for a release but given my change is the only one to land there in > almost a year I wanted to ask about this approach first. im not sure where would be the best location, but when i have thought about it in the past i came up with a few options that may or may not work. i know that the nodepool images are hosted publicly but i can never remember where. if we wrapped the cirros build in a dib element that might be an approach we could take: build it with the nodepool build then mirror that to the cloud and use it. the other idea i had for this in the past was to put the image on tarballs.openstack.org; we have the ironic ipa image there already, so if we did a one-off build we could maybe see if infra were ok with hosting it there and then mirroring that to our cloud providers to avoid the need to pull it. the other option might be to embed it in the nodepool images. while not all jobs will need it most will, so if we had a static copy of it we could inject it into the image in the devstack data dir or another well-known cache dir in the image and just have devstack use that instead of downloading it. > > Many thanks in advance, > > Lee > > [1] https://github.com/cirros-dev/cirros/pull/65 > [2] https://review.opendev.org/c/openstack/nova/+/708701 > > From smooney at redhat.com Wed Mar 3 13:05:25 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 03 Mar 2021 13:05:25 +0000 Subject: Train/SR-IOV direct port mode live-migration - question In-Reply-To: <78875ebc-6556-c63b-66a8-68850b748785@gmail.com> References: <78875ebc-6556-c63b-66a8-68850b748785@gmail.com> Message-ID: On Wed, 2021-03-03 at 14:03 +0200, Heib, Mohammad (Nokia - IL/Kfar Sava) wrote: > Hi All, > > we recently moved to openstack Train and according to the version > release notes the SR-IOV direct port live-migration support was added to > this version. > > so we tried to do some live-migration to vm with two SR-IOV direct ports > attached to bond interface on the vm (no vSwitch interface or a indirect > only direct ports) but seems that we have an issue with maintaining the > network
live migration with direct mode sriov involvs hot unpluggin the interface before the migration and hot plugging it after. nic vendors do not actully support sriov migration so we developed a workaround for that hardware limiation by using pcie hotplug. > > connectivity during and after the live-migration, according to [1] i see > that in order to maintain network connectivity during the migration we > have to add one indirect or vswitch port to the bond and this interface > will carry the > traffic during the migration and once the migration completed the direct > port will back to be the primary slave on the target VM bond. yes this is correct > > > my question is if we don't add the indirect port we will lose the > connectivity during the migration which makes sense to me, > > but why the connectivity does not back after the migration completed > successfully, and why the bond master boot up with no slaves on the > target VM? that sound like your networking setup in the guest is not correctly detecting the change. e.g. you are missing a udev rule or your network manager or systemd-netowrkd configuriton is only running on first boot but not when an interface is added/removed > > according to some documents and blueprints that I found in google > (including [2],[3]) the guest os will receive a virtual hotplug add even > and the > > bond master will enslave those devices and connectivity will back which > is not the case here. > so I wondering maybe I need to add some scripts to handle those events > (if so where to add them) or some network flags to the ifcfg-* files or > i need to use a specific guest os? you would need to but i do not have files for this unfortunetly. > > > [1] : > https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/libvirt-neutron-sriov-livemigration.html > > > [2] > :https://www.researchgate.net/publication/228722278_Live_migration_with_pass-through_device_for_Linux_VM > > > [3] > :https://openstack.nimeyo.com/72653/openstack-nova-neutron-live-migration-with-direct-passthru > > > > *bond master ifcfg file:* > > DEVICE=bond1 > BONDING_OPTS=mode=active-backup > HOTPLUG=yes > TYPE=Bond > BONDING_MASTER=yes > BOOTPROTO=none > NAME=bond1 > ONBOOT=yes > > *slaves ifcfg files:* > > TYPE=Ethernet > DEVICE=eth0 > HOTPLUG=no try setting this to yes i suspect you need to mark all the slave interfaces as hotplugable since they are what will be hotplugged. you might also want to set onboot but hotplug seams more relevent. > ONBOOT=no > MASTER=bond1 > SLAVE=yes > BOOTPROTO=none > NM_CONTROLLED=no > > *guest OS:* > > CentOS Linux release 7.7.1908 (Core) > > * > * > > */Thanks in advance for any help :)/,* > > > From mbultel at redhat.com Wed Mar 3 13:13:38 2021 From: mbultel at redhat.com (Mathieu Bultel) Date: Wed, 3 Mar 2021 14:13:38 +0100 Subject: [TripleO][Validation] Validation CLI simplification In-Reply-To: References: Message-ID: Thank you Marios for the response. On Wed, Mar 3, 2021 at 1:45 PM Marios Andreou wrote: > > > On Wed, Mar 3, 2021 at 12:51 PM Mathieu Bultel wrote: > >> Hi TripleO Folks, >> >> I'm raising this topic to the ML because it appears we have some >> divergence regarding some design around the way the Validations should be >> used with and without TripleO and I wanted to have a larger audience, in >> particular PTL and core thoughts around this topic. >> >> The current situation is: >> We have an openstack tripleo validator set of sub commands to handle >> Validation (run, list ...). 
>> The CLI validation is taking several parameters as an entry point and in >> particular the stack/plan, Openstack authentication and static inventory >> file. >> >> By asking the stack/plan name, the CLI is trying to verify and understand >> if the plan or the stack is valid, if the Overcloud exists somewhere in the >> cloud before passing that to the tripleo-ansible-inventory script and >> trying to generate a static inventory file in regard to what --plan or >> stack has been passed. >> > > > Sorry if silly question but, can't we just make 'validate the stack > status' as one of the validations? In fact you already have something like > that there > https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/roles/stack_health/tasks/main.yml#L2 > . Then only this validation will require the stack name passed in instead > of on every validation run. > > BTW as an aside we should probably remove 'plan' from that code altogether > given the recent 'remove swift and overcloud plan' work from > ramishra/cloudnull and co @ > https://review.opendev.org/q/topic:%22env_merging%22+(status:open%20OR%20status:merged) > > Not exactly, the main goal of checking if the --stack/plan value is correct and if the stack provided exists and is right is for getting a valid Ansible inventory to execute the Validations. Meaning that all the extra checks in the code in [1] is made for generating the inventory, which imho should not belong to the Validation CLI but to something else, and Validation should consider that the --inventory file is correct (because of the reasons mentioned earlier). > > >> The code is mainly here: [1]. >> >> This behavior implies several constraints: >> * Validation CLI needs Openstack authentication in order to do those >> checks >> * It introduces some complexity in the Validation code part: querying >> Heat to get the plan name to be sure the name provided is correct, get the >> status of the stack... In case of Standalone deployment, it adds more >> complexity then. >> * This code is only valid for "standard" deployments and usage meaning it >> doesn't work for Standalone, for some Upgrade and FFU stages and needs to >> be bypassed for pre-undercloud deployment. >> * We hit several blockers around this part of code. >> >> My proposal is the following: >> >> Since we are thinking of the future of Validation and we want something >> more robust, stronger, simpler, usable and efficient, I propose to get rid >> of the plan/stack and authentication functionalities in the Validation >> code, and only ask for a valid inventory provided by the user. >> I propose as well to create a new entry point in the TripleO CLI to >> generate a static inventory such as: >> openstack tripleo inventory generate --output-file my-inv.yaml >> and then: >> openstack tripleo validator run --validation my-validation --inventory >> my-inv.yaml >> >> By doing that, I think we gain a lot in simplification, it's more robust, >> and Validation will only do what it aims for: wrapp Ansible execution to >> provide better logging information and history. >> >> The main concerns about this approach is that the user will have to >> provide a valid inventory to the Validation CLI. >> I understand the point of view of getting something fully autonomous, and >> the way of just kicking *one* command and the Validation can be *magically* >> executed against your cloud, but I think the less complex the Validation >> code is, the more robust, stable and usable it will be. 
>> >> Deferring a specific entry point for the inventory, which is a key part >> of post deployment action, seems something more clear and robust as well. >> This part of code could be shared and used for any other usages instead >> of calling the inventory script stored into tripleo-validations. It could >> then use the tripleo-common inventory library directly with tripleoclient, >> instead of calling from client -> tripleo-validations/scripts -> query >> tripleo-common inventory library. >> >> I know it changes a little bit the usage (adding one command line in the >> execution process for getting a valid inventory) but it's going in a less >> buggy and racy direction. >> And the inventory should be generated only once, or at least at any big >> major cloud change. >> >> So, I'm glad to get your thoughts on that topic and your overall views >> around this topic. >> > > > The proposal sounds sane to me, but just to be clear by "authentication > functionalities" are you referring specifically to the '--ssh-user' > argument ( > https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/tripleo_validations/tripleo_validator.py#L243)? > i.e. we will already have that in the generated static inventory so no need > to have in on the CLI? > The authentication in the current CLI is only needed to get the stack output in order to generate the inventory. If the user provides his inventory, no authentication, no heat stack check and then Validation can run everywhere on every stage of a TripleO deployment (early without any TripleO bits, or in the middle of LEAP Upgrade for example). > > If the only cost is that we have to have an extra step for generating the > inventory then IMO it is worth doing. I would however be interested to hear > from those that are objecting to the proposal about why it is a bad idea ;) > since you said there has been a divergence in opinions over the design > > regards, marios > > > >> >> Thanks, >> Mathieu >> >> [1] >> https://github.com/openstack/tripleo-validations/blob/master/tripleo_validations/tripleo_validator.py#L338-L382 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Mar 3 13:22:31 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 03 Mar 2021 13:22:31 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <5fc0ff7968480dd70d3dd14cfc4f9ebea0c2529d.camel@redhat.com> On Wed, 2021-03-03 at 23:50 +1300, Lingxian Kong wrote: > On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney wrote: > > > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > > Hi, > > > > > > Thanks for all your hard work on this. > > > > > > I'm wondering is there any doc proposed for devstack to tell people who > > are > > > not interested in OVN to keep the current devstack behaviour? I have a > > > feeling that using OVN as default Neutron driver would break the CI jobs > > > for some projects like Octavia, Trove, etc. which rely on ovs port for > > the > > > set up. > > > > well ovn is just an alternivie contoler for ovs. > > so ovn replace the neutron l2 agent it does not replace ovs. > > project like octavia or trove that deploy loadblances or dbs in vms should > > not be able to observe a difference. 
> > they may still want to deploy ml2/ovs but unless they are doing something > > directly on the host like adding port > > directly to ovs because they are not using vms they should not be aware of > > this change. > > > > Yes, they are. > > Please see > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 as > an example for Octavia. right, but octavia really should not be doing that. i mean it will still work in the sense that ovn should be able to correlate that port to this host if you have correctly bound that port to the host, but adding internal ports to br-int to provide network connectivity seems like a hack. ml2/ovn still calls the main ovs bridge br-int so the port add will work as it did before. the br-int is owned by neutron and operators are not allowed to add flows or interfaces to that bridge normally, so in a real deployment i would hope they would not be adding this port and assigning mac or ip rules like this. connectivity should be provided via the br-ex/physical network using provider networking right? in any case you can swap back to ml2/ovs for octavia jobs. if you have correctly bound the management port to the current host then i think ovn should be able to install the correct openflow rules to allow it to function, however, so it may be as simple as extending the elif to work with ovn. the port create on https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L458 does seem to have set the host properly. it's still kind of troubling to me to see something like this being done outside an openstack agent on the host. for example i would have expected an ml2 l2 openvswitch agent extension to create the port via os-vif or similar in response to the api action instead of manually doing this. > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) From fungi at yuggoth.org Wed Mar 3 13:41:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Mar 2021 13:41:35 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> Message-ID: <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> On 2021-03-03 12:52:40 +0000 (+0000), Sean Mooney wrote: [...] > i know that the nodepool images are hosted publiclaly but i can > never remberer where. if we wraped the cirror build in a dib > element that might be an approch we coudl take. build it woith > nodepool build then mirror that to the cloud and use it. The images we build from Nodepool are created with diskimage-builder (does DIB support creating Cirros images?), and are uploaded to Glance in all our donor-providers (are we likely to actually boot it directly in them?). > the other idea i had for this in the past was to put the image on > tarballs.openstack.org we have the ironic ipa image there already > so if we did a one of build we coud maybe see if infra were ok > with hosting it there and the mirroing that to our cloud providres > to avoid the need to pull it. The files hosted on the tarballs site (including the IPA images) are built by Zuul jobs. If you can figure out what project that job should reside in and a cadence to run it, that might be an option. > the other option might be to embed it in the nodepool images.
> while not all jobs will need it most will so if we had a static > copy of it we could inject it into the image in the devstack data > dir or anogher whle know cache dir in the image and just have > devstack use that instead of downloading it. We already do that for official Cirros images: https://opendev.org/openstack/project-config/src/branch/master/nodepool/elements/cache-devstack/source-repository-images If you look in the build logs of a DevStack-based job, you'll see it check, e.g., /opt/stack/devstack/files/cirros-0.4.0-x86_64-disk.img and then not wget that image over the network because it exists on the filesystem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From balazs.gibizer at est.tech Wed Mar 3 14:08:22 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Wed, 03 Mar 2021 15:08:22 +0100 Subject: [election][nova] PTL Candidacy for Wallaby Message-ID: Hi, I would like to continue serving as the Nova PTL in the Xena cycle. In Wallaby I hope I helped to keep Nova alive and kicking and I intend to continue what I have started. My employer is supportive to continue spending time on PTL and core duties. If I am elected then I will try to focus on the following areas: * Keeping the Nova bug backlog under control * Working on changes that help the maintainability and stability of Nova * Finishing up things that are already ongoing before starting new items Also, I'd like to note that if elected then Xena will be my 3rd consecutive term as Nova PTL so I will spend some time during the cycle to find my successor for the Y cycle. Cheers, gibi From marios at redhat.com Wed Mar 3 16:36:47 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 3 Mar 2021 18:36:47 +0200 Subject: [election][TripleO] PTL candidacy for Xena Message-ID: (nomination posted to https://review.opendev.org/c/openstack/election/+/778500) Hello I would like to continue serving as PTL for TripleO during the Xena cycle. Wallaby was my first PTL experience and it has been challenging and rewarding. I've learnt a lot about release processes and other repo related admin (like moving projects to the Independent release cycle). As I said in the W candidacy nomination [1], nowadays TripleO works in self-contained squads that drive technical decisions. We've had a lot of work across these squads (as always) during W with progress in a number of areas including moving port/network creation outside of Heat (ports v2), tripleo-ceph/tripleo-ceph-client and switching out ceph-ansible for cephadm, removing swift and the deployment plan, BGP support with frrouter and using ephemeral Heat for the overcloud deployment. I'd like to continue what i started in W, which is, to increase socialisation across tripleo squads and try to reign in the multitude of repos that we have created (and some now abandoned) over the years. After a well attended Wallaby PTG we have reinstated our traditional IRC meetings in #tripleo. I've started to "tidy up" our repos, moving some of the older and no longer used projects to 'independent' (os-refresh-config and friends, tripleo-ipsec) and started the process to mark older branches as EOL, starting with Rocky. 
There is a lot more to do here and I will be happy to do it if you give me the opportunity, thanks for your consideration, marios [1] https://opendev.org/openstack/election/raw/branch/master/candidates/wallaby/Tripleo/marios%40redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Wed Mar 3 17:33:59 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 3 Mar 2021 17:33:59 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> Message-ID: On Wed, 3 Mar 2021 at 13:45, Jeremy Stanley wrote: > > On 2021-03-03 12:52:40 +0000 (+0000), Sean Mooney wrote: > [...] > > i know that the nodepool images are hosted publiclaly but i can > > never remberer where. if we wraped the cirror build in a dib > > element that might be an approch we coudl take. build it woith > > nodepool build then mirror that to the cloud and use it. > > The images we build from Nodepool 1. are created with > diskimage-builder (does DIB support creating Cirros images?), and > are uploaded to Glance in all our donor-providers (are we likely to > actually boot it directly in them?). > > > the other idea i had for this in the past was to put the image on > > tarballs.openstack.org we have the ironic ipa image there already > > so if we did a one of build we coud maybe see if infra were ok > > with hosting it there and the mirroing that to our cloud providres > > to avoid the need to pull it. > > The files hosted on the tarballs site (including the IPA images) are > built by Zuul jobs. If you can figure out what project that job > should reside in and a cadence to run it, that might be an option. > > > the other option might be to embed it in the nodepool images. > > while not all jobs will need it most will so if we had a static > > copy of it we could inject it into the image in the devstack data > > dir or anogher whle know cache dir in the image and just have > > devstack use that instead of downloading it. > > We already do that for official Cirros images: > > https://opendev.org/openstack/project-config/src/branch/master/nodepool/elements/cache-devstack/source-repository-images > > If you look in the build logs of a DevStack-based job, you'll see it > check, e.g., /opt/stack/devstack/files/cirros-0.4.0-x86_64-disk.img > and then not wget that image over the network because it exists on > the filesystem. Thanks both, So this final option looks like the most promising. It would not however involve any independent build steps in nodepool or zuul. If infra are okay with this I can host the dev build I have with my change applied and post a change to cache it alongside the official images. 
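My assumption is that the caching part is then just one more line in that source-repository-images file next to the existing cirros entries, roughly like the following (the URL and file name are placeholders for wherever the dev build ends up being hosted, and the exact field layout should follow the entries already in the file):

  images file /opt/cache/files/cirros-d210304-x86_64-disk.img https://example.org/cirros/cirros-d210304-x86_64-disk.img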
Thanks again, Lee From skaplons at redhat.com Wed Mar 3 18:24:19 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 03 Mar 2021 19:24:19 +0100 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <5567179.ztC2WF7GR3@p1> Hi, Dnia środa, 3 marca 2021 11:32:11 CET Lucas Alvares Gomes pisze: > On Mon, Mar 1, 2021 at 6:25 PM Sean Mooney wrote: > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > Hi all, > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > switching the default backend in Devstack to OVN. A lot of work has > > > been done since, from porting the OVN devstack module to the DevStack > > > tree, refactoring the DevStack module to install OVN from distro > > > packages, implementing features to close the parity gap with ML2/OVS, > > > fixing issues with tests and distros, etc... > > > > > > We are now very close to being able to make the switch and we've > > > thought about sending this email to the broader community to raise > > > awareness about this change as well as bring more attention to the > > > patches that are current on review. > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > > going to be developed and maintained by the upstream Neutron > > > community. > > > > can we ensure that this does not happen until the xena release. > > in generall i think its ok to change the default but not this late in the > > cycle. i would also like to ensure we keep at least one non ovn based > > multi node job in nova until > > https://review.opendev.org/c/openstack/nova/+/602432 is merged and > > possible after. right now the event/neutorn interaction is not the same > > during move operations. > I think it's fair to wait for the new release cycle to start since we > are just a few weeks away and then we can flip the default in > DevStack. I will state this in the last DevStack patch and set the > workflow -1 until then. That said, I also think that other patches > could be merged before that, those are just adapting a few scripts to > work with ML2/OVN and enabling ML2/OVS explicitly where it makes > sense. That way, when time comes, we will just need to merge the > DevStack patch. +1. Let's do that very early in Xena cycle :) > > > > Below is a e per project explanation with relevant links and issues of > > > where we stand with this work right now: > > > > > > * Keystone: > > > > > > Everything should be good for Keystone, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > * Glance: > > > > > > Everything should be good for Glace, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > * Swift: > > > > > > Everything should be good for Swift, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > * Ironic: > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > ML2/OVS and everything is merged, so we should be good for Ironic too. 
> > > Here is the test patch: > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > * Cinder: > > > > > > Cinder is almost complete. There's one test failure in the > > > "tempest-slow-py3" job run on the > > > "test_port_security_macspoofing_port" test. > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > been fixed upstream [2] and the fix has been backported down to the > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > from packages we are currently waiting for this fix to be included in > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > contacted the package maintainer which has been very supportive of > > > this work and will work on the package update, but he maintain a > > > handful of backports in that package which is not yet included in OVN > > > 20.03 upstream and he's now working with the core OVN community [4] to > > > include it first in the branch and then create a new package for it. > > > Hopefully this will happen soon. > > > > > > But for now we have a few options moving on with this issue: > > > > > > 1- Wait for the new package version > > > 2- Mark the test as unstable until we get the new package version > > > 3- Compile OVN from source instead of installing it from packages > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > > > i dont think we should default to ovn untill a souce build is not > > required. > > compiling form souce while not supper expensice still adds time to the job > > execution and im not sure we should be paying that cost on every devstack > > job run. > > > > we could maybe compile it once and bake the package into the image or host > > it on a mirror but i think we should avoid this option if we have > > alternitives. > > Since this patch > https://review.opendev.org/c/openstack/devstack/+/763402 we no longer > default to compiling OVN from source anymore, it's installed using the > distro packages now. > > Yeah the alternatives are not straight forward, I was talking to some > core OVN folks yesterday regarding the backports proposed by Canonical > to the 20.03 branch and they seem to be fine with it, it needs more > reviews since there are around ~20 patches being backported there. But > I hope they are going to be looking into it and we should get a new > OVN package for Ubuntu Focal soon. > > > > What do you think about it ? > > > > > > Here is the test patch for Cinder: > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > * Nova: > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > which is not part of an ML2/OVN deployment. The patch changes the code > > > to check if that system unit exists before trying to stop it. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > which can be removed one release cycle after we switch DevStack to > > > ML2/OVN. Grenade will test updating from the release version to the > > > master branch but, since the default of the released version is not > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > > is not supported. 
> > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > > therefore we are temporarily enabling ML2/OVS for this job until that > > > feature lands in core OVN. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > > because he feels like a change in the default network driver a few > > > weeks prior to the upstream code freeze can be concerning. We do not > > > know yet precisely when we are changing the default due to the current > > > patches we need to get merged but, if this is a shared feeling among > > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > > until we get a new release in OpenStack. > > > > yep this is still my view. > > i would suggest we do the work required in the repos but not merge it > > until the xena release is open. thats technically at RC1 so march 25th > > i think we can safely do the swich after that but i would not change the > > defualt in any project before then. > > Yeah, no problem waiting from my side, but hoping we can keep > reviewing the rest of the patches until then. > > > > Here's the test patch for Nova: > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > * DevStack: > > > > > > And this is the final patch that will make this all happen: > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > > a long and bumpy road to get to this point and I would like to say > > > thanks to everyone involved so far and everyone that read the whole > > > email, please let me know your thoughts. > > > > thanks for working on this. > > Thanks for the inputs > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > [2] > > > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.4 > > > 73776-1-numans at ovn.org/ [3] > > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab > > > 39380cb [4] > > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961 > > > .html [5] > > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99f > > > e76a9ee1/gate/post_test_hook.sh#L129-L143 [6] > > > https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > Cheers, > > > Lucas -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Wed Mar 3 18:25:56 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 03 Mar 2021 19:25:56 +0100 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <25317159.dUgV3lgUje@p1> Hi, Dnia środa, 3 marca 2021 11:50:59 CET Lingxian Kong pisze: > On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney wrote: > > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > > Hi, > > > > > > Thanks for all your hard work on this. 
> > > > > > I'm wondering is there any doc proposed for devstack to tell people who > > > > are > > > > > not interested in OVN to keep the current devstack behaviour? I have a > > > feeling that using OVN as default Neutron driver would break the CI jobs > > > for some projects like Octavia, Trove, etc. which rely on ovs port for > > > > the > > > > > set up. > > > > well ovn is just an alternivie contoler for ovs. > > so ovn replace the neutron l2 agent it does not replace ovs. > > project like octavia or trove that deploy loadblances or dbs in vms should > > not be able to observe a difference. > > they may still want to deploy ml2/ovs but unless they are doing something > > directly on the host like adding port > > directly to ovs because they are not using vms they should not be aware of > > this change. > > Yes, they are. > > Please see > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 as > an example for Octavia. I'm really not an Octavia expert but AFAIK we are testing Octavia with ML2/OVN in TripleO jobs already as we switched default neutron backend in TripleO to be ML2/OVN somewhere around Stein cycle IIRC. > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From fungi at yuggoth.org Wed Mar 3 19:27:46 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Mar 2021 19:27:46 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> Message-ID: <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> On 2021-03-03 17:33:59 +0000 (+0000), Lee Yarwood wrote: > On Wed, 3 Mar 2021 at 13:45, Jeremy Stanley wrote: [...] > > https://opendev.org/openstack/project-config/src/branch/master/nodepool/elements/cache-devstack/source-repository-images > > > > If you look in the build logs of a DevStack-based job, you'll see it > > check, e.g., /opt/stack/devstack/files/cirros-0.4.0-x86_64-disk.img > > and then not wget that image over the network because it exists on > > the filesystem. > > Thanks both, > > So this final option looks like the most promising. It would not > however involve any independent build steps in nodepool or zuul. If > infra are okay with this I can host the dev build I have with my > change applied and post a change to cache it alongside the official > images. Yeah, I don't see that being much different from hitting download.cirros-cloud.net during out image builds. If you're forking or otherwise making unofficial builds of Cirros though, you might want to name the files something which makes that clear so people seeing it in job build logs know it's not an official Cirros provided version. I believe our Nodepool builders also cache such downloads locally, so you'll probably see them downloaded once each from handful of builders and, not again until the next time you update the filename in that DIB element. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From narjes.bessghaier.1 at ens.etsmtl.ca Wed Mar 3 15:14:25 2021 From: narjes.bessghaier.1 at ens.etsmtl.ca (Bessghaier, Narjes) Date: Wed, 3 Mar 2021 15:14:25 +0000 Subject: Puppet openstack modules In-Reply-To: References: , Message-ID: Thank you for the clarification. Get Outlook for Android ________________________________ From: Takashi Kajinami Sent: Saturday, February 27, 2021 10:06:58 AM To: Bessghaier, Narjes Cc: openstack-discuss at lists.openstack.org Subject: Re: Puppet openstack modules > 1 is supposed to be used only for testing during deployment Sorry I meant to say 1 is supposed to be used only for testing during "development" On Sun, Feb 28, 2021 at 12:06 AM Takashi Kajinami > wrote: Ruby codes in puppet-openstack repos are used for the following three purposes. 1. unit tests and acceptance tests using serverspec framework (files placed under spec) 2. implementation of custom type, provider, and function 3. template files (We use ERB instead of pure Ruby about this, though) 1 is supposed to be used only for testing during deployment but 2 and 3 can be used in any production use case in combination with puppet manifest files to manage OpenStack deployments. On Sat, Feb 27, 2021 at 5:01 AM Bessghaier, Narjes > wrote: Dear OpenStack team, My name is Narjes and I'm a PhD student at the University of Montréal, Canada. My current work consists of analyzing code reviews on the puppet modules. I would like to precisely know what the ruby files are used for in the puppet modules. As mentioned in the official website, most of unit test are written in ruby. Are ruby files destined to carry out units tests or destined for production code. I appreciate your help, Thank you -- ---------- Takashi Kajinami Senior Software Maintenance Engineer Customer Experience and Engagement Red Hat email: tkajinam at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ankit at aptira.com Wed Mar 3 18:25:25 2021 From: ankit at aptira.com (Ankit Goel) Date: Wed, 3 Mar 2021 18:25:25 +0000 Subject: Need help to get rally test suite Message-ID: Hello Experts, Need some help on rally. I have installed Openstack rally on centos. Now I need to run some benchmarking test suite. I need to know is there any pre-existing test suite which covers majority of test cases for Openstack control plane. So that I can just run that test suite. I remember earlier we use to get the samples under rally/samples/tasks/scenarios for each component but now I am not seeing it but I am seeing some dummy files. So from where I can get those samples json/yaml files which is required to run the rally tests. Awaiting for your response. Thanks in Advance Regards, Ankit Goel -------------- next part -------------- An HTML attachment was scrubbed... URL: From haleyb.dev at gmail.com Wed Mar 3 19:29:27 2021 From: haleyb.dev at gmail.com (Brian Haley) Date: Wed, 3 Mar 2021 14:29:27 -0500 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <6572eb3b-3cd5-a3d7-10ac-eb8b8ef34bf3@gmail.com> On 3/3/21 5:50 AM, Lingxian Kong wrote: > On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney > wrote: > > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > Hi, > > > > Thanks for all your hard work on this. 
> > > > I'm wondering is there any doc proposed for devstack to tell > people who are > > not interested in OVN to keep the current devstack behaviour? I > have a > > feeling that using OVN as default Neutron driver would break the > CI jobs > > for some projects like Octavia, Trove, etc. which rely on ovs > port for the > > set up. > > well ovn is just an alternivie contoler for ovs. > so ovn replace the neutron l2 agent it does not replace ovs. > project like octavia or trove that deploy loadblances or dbs in vms > should not be able to observe a difference. > they may still want to deploy ml2/ovs but unless they are doing > something directly on the host like adding port > directly to ovs because they are not using vms they should not be > aware of this change. > > > Yes, they are. > > Please see > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 > as an example for Octavia. The code to do this configuration was all migrated to the neutron repository, with the last bit of cleanup for the Octavia code you highlighted here: https://review.opendev.org/c/openstack/octavia/+/718192 It just needs a final push to get it merged. -Brian From peter.matulis at canonical.com Wed Mar 3 19:36:01 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Wed, 3 Mar 2021 14:36:01 -0500 Subject: [docs] Project guides in PDF format Message-ID: Hello, I understand that there was a push [1] to make PDFs available for download from the published pages on docs.openstack.org for the various guides (Sphinx projects). Is there any steam left in this initiative? The next best thing is to have something at the repository level that people can work with. Currently, there are a limited number of guides that can be converted to PDF from the openstack-manuals repository. What is the suggested implementation for individual projects? Thanks, Peter Matulis [1]: https://etherpad.opendev.org/p/train-pdf-support-goal -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Mar 3 19:45:26 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Mar 2021 19:45:26 +0000 Subject: [docs] Project guides in PDF format In-Reply-To: References: Message-ID: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> On 2021-03-03 14:36:01 -0500 (-0500), Peter Matulis wrote: > I understand that there was a push [1] to make PDFs available for download > from the published pages on docs.openstack.org for the various guides > (Sphinx projects). Is there any steam left in this initiative? > > The next best thing is to have something at the repository level that > people can work with. Currently, there are a limited number of guides that > can be converted to PDF from the openstack-manuals repository. What is the > suggested implementation for individual projects? > > [1]: https://etherpad.opendev.org/p/train-pdf-support-goal I may be misunderstanding, but can you elaborate on what's missing which you expected to find? There are, for example, PDFs like: https://docs.openstack.org/nova/latest/doc-nova.pdf Is it that we're not building those for some projects yet which you want added, or that there are other non-project-specific documents which need similar treatment, or simply that we're not doing enough to make the existence of the PDF versions discoverable? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From peter.matulis at canonical.com Wed Mar 3 20:05:10 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Wed, 3 Mar 2021 15:05:10 -0500 Subject: [docs] Project guides in PDF format In-Reply-To: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> Message-ID: On Wed, Mar 3, 2021 at 2:47 PM Jeremy Stanley wrote: > > I may be misunderstanding, but can you elaborate on what's missing > which you expected to find? There are, for example, PDFs like: > > https://docs.openstack.org/nova/latest/doc-nova.pdf > > Is it that we're not building those for some projects yet which you > want added, or that there are other non-project-specific documents > which need similar treatment, or simply that we're not doing enough > to make the existence of the PDF versions discoverable? > How do I get a download PDF link like what is available in the published pages of the Nova project? Where is that documented? In short, yes, I am interested in having downloadable PDFs for the projects that I maintain: https://opendev.org/openstack/charm-guide https://opendev.org/openstack/charm-deployment-guide -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Mar 3 20:30:27 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Mar 2021 20:30:27 +0000 Subject: [docs] Project guides in PDF format In-Reply-To: References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> Message-ID: <20210303203027.c3pgopms57zf4ehk@yuggoth.org> On 2021-03-03 15:05:10 -0500 (-0500), Peter Matulis wrote: [...] > How do I get a download PDF link like what is available in the > published pages of the Nova project? Where is that documented? > > In short, yes, I am interested in having downloadable PDFs for the > projects that I maintain: > > https://opendev.org/openstack/charm-guide > https://opendev.org/openstack/charm-deployment-guide The official goal document is still available here: https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html Some technical detail can also be found in the earlier docs spec: https://specs.openstack.org/openstack/docs-specs/specs/ocata/build-pdf-from-rst-guides.html A bit of spelunking in Git history turns up, for example, this change implementing PDF generation for openstack/ironic (you can find plenty more if you hunt): https://review.opendev.org/680585 I expect you would just do something similar to that. If memory serves (it's been a couple years now), each project hit slightly different challenges as no two bodies of documentation are every quite the same. You'll likely have to dig deep occasionally in Sphinx and LaTeX examples to iron things out. One thing which would have been nice as an output of that cycle goal was if the PTI section for documentation was updated with related technical guidance on building PDFs, but it's rather lacking in that department still: https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation If you can come up with a succinct summary for what's needed, I expect adding it there would be really useful to others too. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tpb at dyncloud.net Wed Mar 3 20:52:28 2021 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 3 Mar 2021 15:52:28 -0500 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: <6572eb3b-3cd5-a3d7-10ac-eb8b8ef34bf3@gmail.com> References: <6572eb3b-3cd5-a3d7-10ac-eb8b8ef34bf3@gmail.com> Message-ID: <20210303205228.c6c6dvraxdgoo6ba@barron.net> On 03/03/21 14:29 -0500, Brian Haley wrote: >On 3/3/21 5:50 AM, Lingxian Kong wrote: >>On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney >> wrote: >> >> On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: >> > Hi, >> > >> > Thanks for all your hard work on this. >> > >> > I'm wondering is there any doc proposed for devstack to tell >> people who are >> > not interested in OVN to keep the current devstack behaviour? I >> have a >> > feeling that using OVN as default Neutron driver would break the >> CI jobs >> > for some projects like Octavia, Trove, etc. which rely on ovs >> port for the >> > set up. >> >> well ovn is just an alternivie contoler for ovs. >> so ovn replace the neutron l2 agent it does not replace ovs. >> project like octavia or trove that deploy loadblances or dbs in vms >> should not be able to observe a difference. >> they may still want to deploy ml2/ovs but unless they are doing >> something directly on the host like adding port >> directly to ovs because they are not using vms they should not be >> aware of this change. >> >> >>Yes, they are. >> >>Please see https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 >>as an example for Octavia. > >The code to do this configuration was all migrated to the neutron >repository, with the last bit of cleanup for the Octavia code you >highlighted here: > >https://review.opendev.org/c/openstack/octavia/+/718192 > >It just needs a final push to get it merged. > >-Brian > There's similar plug-into-ovs code used by Manila container and generic backend drivers [1], [2], in the manila codebase, not migrated into neutron as done for octavia. Putting my downstream hat on, I'll remark that we tested Manila with OVN and run it that way now in production but downstream these drivers are not supported so I don't think we know they will work in upstream CI. Incidentally, I have always thought and said that it seems to me to be a layer violation for Manila to manipulate the ovs switches directly. We should be calling a neutron API. This would also free us of the topology restriction of having to run the Manila share service on the node with the OVS switch in question. [1] https://github.com/openstack/manila/blob/master/manila/share/drivers/container/driver.py#L245 [2] https://github.com/openstack/manila/blob/master/manila/network/linux/ovs_lib.py#L51 -- Tom From iwienand at redhat.com Wed Mar 3 22:31:08 2021 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 4 Mar 2021 09:31:08 +1100 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> Message-ID: On Wed, Mar 03, 2021 at 07:27:46PM +0000, Jeremy Stanley wrote: > Yeah, I don't see that being much different from hitting > download.cirros-cloud.net during out image builds. 
If you're forking > or otherwise making unofficial builds of Cirros though, you might > want to name the files something which makes that clear so people > seeing it in job build logs know it's not an official Cirros > provided version. I'd personally prefer to build it in a job and publish it so that you can see what's been done. Feels like a similar situation to the RPMs we build for openafs (see ~[1]). If you put your patch somewhere, you can use a file-matcher to just rebuild/upload when something changes. I see the argument that one binary blob from upstream is about as trustworthy as any other, though. -i [1] https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/playbooks/openafs-rpm-package-build From lyarwood at redhat.com Wed Mar 3 22:31:22 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 3 Mar 2021 22:31:22 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> Message-ID: On Wed, 3 Mar 2021 at 19:33, Jeremy Stanley wrote: > > On 2021-03-03 17:33:59 +0000 (+0000), Lee Yarwood wrote: > > On Wed, 3 Mar 2021 at 13:45, Jeremy Stanley wrote: > [...] > > > https://opendev.org/openstack/project-config/src/branch/master/nodepool/elements/cache-devstack/source-repository-images > > > > > > If you look in the build logs of a DevStack-based job, you'll see it > > > check, e.g., /opt/stack/devstack/files/cirros-0.4.0-x86_64-disk.img > > > and then not wget that image over the network because it exists on > > > the filesystem. > > > > Thanks both, > > > > So this final option looks like the most promising. It would not > > however involve any independent build steps in nodepool or zuul. If > > infra are okay with this I can host the dev build I have with my > > change applied and post a change to cache it alongside the official > > images. > > Yeah, I don't see that being much different from hitting > download.cirros-cloud.net during out image builds. If you're forking > or otherwise making unofficial builds of Cirros though, you might > want to name the files something which makes that clear so people > seeing it in job build logs know it's not an official Cirros > provided version. I believe our Nodepool builders also cache such > downloads locally, so you'll probably see them downloaded once each > from handful of builders and, not again until the next time you > update the filename in that DIB element. Okay I've pushed the following for this: Add custom cirros image with ahci module enabled to cache https://review.opendev.org/c/openstack/project-config/+/778590 I've gone for cirros-0.5.1-dev-ahci-x86_64-disk.img as the image name but we can hash that out in the review. Many thanks again! Lee From emiller at genesishosting.com Wed Mar 3 22:37:30 2021 From: emiller at genesishosting.com (Eric K. Miller) Date: Wed, 3 Mar 2021 16:37:30 -0600 Subject: [kolla][kolla-ansible] distro mixing on hosts Message-ID: <046E9C0290DD9149B106B72FC9156BEA048DFF30@gmsxchsvr01.thecreation.com> Hi, We have been pretty consistent in deploying CentOS 7 on physical hardware where Kolla Ansible deploys its containers. However, we're going to be switching to Debian 10.8. 
It looks like Kolla Ansible likely won't care which distro is installed on the physical hardware, and so some nodes could have CentOS 7 and others have Debian 10.8. Long-term, we'll replace the CentOS 7 machines with Debian 10.8 and re-deploy. Kolla Container images will still be CentOS 7, but we will be recreating Kolla images with Debian and deploying those long term too. Is this supported? Any obvious issues we should be aware of? We will have the same interface names, so we will be pretty consistent in terms of the environment - but wanted to see if there was any gotchas that we didn't think of. Also, I guess I should ask if there are any specific Debian 10.8 issues we should be aware of? Has anybody else done this? :) Thanks! Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Mar 3 22:38:03 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Mar 2021 22:38:03 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> Message-ID: <20210303223802.rijk6d3xqg6aizjd@yuggoth.org> On 2021-03-04 09:31:08 +1100 (+1100), Ian Wienand wrote: [...] > I'd personally prefer to build it in a job and publish it so that you > can see what's been done. [...] Sure, all things being equal I'd also prefer a transparent automated build with a periodic refresh, but that seems like something we can make incremental improvement on. Of course, you're a more active reviewer on DevStack than I am, so I'm happy to follow your lead if it's something you feel strongly about. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From bdobreli at redhat.com Thu Mar 4 08:28:45 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 4 Mar 2021 09:28:45 +0100 Subject: [docs] Project guides in PDF format In-Reply-To: <20210303203027.c3pgopms57zf4ehk@yuggoth.org> References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> Message-ID: On 3/3/21 9:30 PM, Jeremy Stanley wrote: > On 2021-03-03 15:05:10 -0500 (-0500), Peter Matulis wrote: > [...] >> How do I get a download PDF link like what is available in the >> published pages of the Nova project? Where is that documented? On each project's documentation page, there is a "Download PDF" button, at the top, near to the "Report a Bug". >> >> In short, yes, I am interested in having downloadable PDFs for the >> projects that I maintain: >> >> https://opendev.org/openstack/charm-guide >> https://opendev.org/openstack/charm-deployment-guide > > The official goal document is still available here: > > https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html > > Some technical detail can also be found in the earlier docs spec: > > https://specs.openstack.org/openstack/docs-specs/specs/ocata/build-pdf-from-rst-guides.html > > A bit of spelunking in Git history turns up, for example, this > change implementing PDF generation for openstack/ironic (you can > find plenty more if you hunt): > > https://review.opendev.org/680585 > > I expect you would just do something similar to that. 
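For anyone wanting to mirror that Ironic change, the usual shape of it is a dedicated tox environment that drives the Sphinx LaTeX builder and then make. A rough sketch only, since the exact deps, bindep entries and conf.py settings vary per project:

  [testenv:pdf-docs]
  envdir = {toxworkdir}/docs
  deps = {[testenv:docs]deps}
  whitelist_externals =
    make
  commands =
    sphinx-build -W -b latex doc/source doc/build/pdf
    make -C doc/build/pdf

On top of that, projects typically had to add the LaTeX toolchain to bindep.txt and make sure doc/source/conf.py produced a usable latex_documents entry.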
If memory > serves (it's been a couple years now), each project hit slightly > different challenges as no two bodies of documentation are every > quite the same. You'll likely have to dig deep occasionally in > Sphinx and LaTeX examples to iron things out. > > One thing which would have been nice as an output of that cycle goal > was if the PTI section for documentation was updated with related > technical guidance on building PDFs, but it's rather lacking in that > department still: > > https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation > > If you can come up with a succinct summary for what's needed, I > expect adding it there would be really useful to others too. > -- Best regards, Bogdan Dobrelya, Irc #bogdando From jean-francois.taltavull at elca.ch Thu Mar 4 09:07:25 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Thu, 4 Mar 2021 09:07:25 +0000 Subject: Need help to get rally test suite In-Reply-To: References: Message-ID: Hello Ankit, Take a look here : https://opendev.org/openstack/rally-openstack/src/branch/master/samples/tasks/scenarios/ Regards, Jean-Francois From: Ankit Goel Sent: mercredi, 3 mars 2021 19:25 To: openstack-dev at lists.openstack.org Subject: Need help to get rally test suite Hello Experts, Need some help on rally. I have installed Openstack rally on centos. Now I need to run some benchmarking test suite. I need to know is there any pre-existing test suite which covers majority of test cases for Openstack control plane. So that I can just run that test suite. I remember earlier we use to get the samples under rally/samples/tasks/scenarios for each component but now I am not seeing it but I am seeing some dummy files. So from where I can get those samples json/yaml files which is required to run the rally tests. Awaiting for your response. Thanks in Advance Regards, Ankit Goel -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Mar 4 09:08:50 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 4 Mar 2021 10:08:50 +0100 Subject: =?UTF-8?Q?=5Belection=5D=5Brelease=5D_Herv=C3=A9_Beraud_candidacy_for_Rele?= =?UTF-8?Q?ase_Management_PTL_for_Xena?= Message-ID: (nomination posted to https://review.opendev.org/c/openstack/election/+/778623) Hello, I would like to continue serving as PTL for Release Management during the Xena cycle. Wallaby was my first PTL experience and it allowed me to deeply improve my knowledge about Openstack. I'd like to continue to work on growing the core team to constitute experienced team members to whom we could pass the torch. Anything we can do to help anyone to join the team and know what to do and how to do it will be better for us all. We have a lot of our processes mostly automated, but we also have few unfinished business from the Wallaby cycle. So as a PTL I have the intention to make the team focus on finishing these work. It has been an interesting, active and rewarding release cycle, and I am excited to continue learning and applying those lessons to keep things moving and looking for any ways I can add to the already great body of work we have in our release. I will be happy to do it if you give me the opportunity. 
Thanks for your consideration, Hervé -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu Mar 4 09:12:31 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 4 Mar 2021 09:12:31 +0000 Subject: [kolla][kolla-ansible] distro mixing on hosts In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA048DFF30@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA048DFF30@gmsxchsvr01.thecreation.com> Message-ID: On Wed, 3 Mar 2021 at 22:38, Eric K. Miller wrote: > > Hi, > > > > We have been pretty consistent in deploying CentOS 7 on physical hardware where Kolla Ansible deploys its containers. However, we're going to be switching to Debian 10.8. > > > > It looks like Kolla Ansible likely won't care which distro is installed on the physical hardware, and so some nodes could have CentOS 7 and others have Debian 10.8. Long-term, we'll replace the CentOS 7 machines with Debian 10.8 and re-deploy. > > > > Kolla Container images will still be CentOS 7, but we will be recreating Kolla images with Debian and deploying those long term too. > > > > Is this supported? Any obvious issues we should be aware of? We will have the same interface names, so we will be pretty consistent in terms of the environment - but wanted to see if there was any gotchas that we didn't think of. > Hi Eric, While this is possible, it is not something we "support" upstream. We already have many combinations of OS distro and binary/source. I know that people have tried it in the past, and while it should mostly just work, occasionally it causes issues. A classic issue would be using an ansible fact about the host OS to infer something about the container. While we'll likely accept reasonable fixes for these issues upstream, mixing host and container OS may cause people to pay less attention to bugs raised. If you want to migrate from CentOS 7 to Debian, I would suggest keeping the host and container OS the same, by basing the kolla_base_distro variable on groups. Of course this will also be an untested configuration, and I suggest that you test it thoroughly, in particular things like migration between different versions of libvirt/qemu. Mark > > > Also, I guess I should ask if there are any specific Debian 10.8 issues we should be aware of? > > > > Has anybody else done this? :) > > > > Thanks! > > > Eric > > From andr.kurilin at gmail.com Thu Mar 4 09:56:47 2021 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 4 Mar 2021 11:56:47 +0200 Subject: Need help to get rally test suite In-Reply-To: References: Message-ID: Hi! 
Rally plugins for OpenStack platform moved under separate project&repository. See https://github.com/openstack/rally/blob/master/CHANGELOG.rst#100---2018-06-20 for more details. чт, 4 мар. 2021 г. в 11:14, Taltavull Jean-Francois < jean-francois.taltavull at elca.ch>: > Hello Ankit, > > > > Take a look here : > https://opendev.org/openstack/rally-openstack/src/branch/master/samples/tasks/scenarios/ > > > > Regards, > > Jean-Francois > > > > > > *From:* Ankit Goel > *Sent:* mercredi, 3 mars 2021 19:25 > *To:* openstack-dev at lists.openstack.org > *Subject:* Need help to get rally test suite > > > > Hello Experts, > > > > Need some help on rally. > > > > I have installed Openstack rally on centos. Now I need to run some > benchmarking test suite. I need to know is there any pre-existing test > suite which covers majority of test cases for Openstack control plane. So > that I can just run that test suite. > > > > I remember earlier we use to get the samples under > rally/samples/tasks/scenarios for each component but now I am not seeing it > but I am seeing some dummy files. So from where I can get those samples > json/yaml files which is required to run the rally tests. > > > > Awaiting for your response. > > > > Thanks in Advance > > > > Regards, > > Ankit Goel > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Thu Mar 4 11:07:55 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 4 Mar 2021 11:07:55 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: <20210303223802.rijk6d3xqg6aizjd@yuggoth.org> References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> <20210303223802.rijk6d3xqg6aizjd@yuggoth.org> Message-ID: On Wed, 3 Mar 2021 at 22:41, Jeremy Stanley wrote: > > On 2021-03-04 09:31:08 +1100 (+1100), Ian Wienand wrote: > [...] > > I'd personally prefer to build it in a job and publish it so that you > > can see what's been done. > [...] > > Sure, all things being equal I'd also prefer a transparent automated > build with a periodic refresh, but that seems like something we can > make incremental improvement on. Of course, you're a more active > reviewer on DevStack than I am, so I'm happy to follow your lead if > it's something you feel strongly about. Agreed, if there's a need to build and cache another unreleased cirros fix then I would be happy to look into automating the build somewhere but for a one off I think this is just about acceptable. FWIW nova-next is passing with the image below: WIP: nova-next: Start testing the 'q35' machine type https://review.opendev.org/c/openstack/nova/+/708701 I'll clean that change up later today. Thanks again, Lee From admin at gsic.uva.es Thu Mar 4 13:04:13 2021 From: admin at gsic.uva.es (Cristina Mayo Sarmiento) Date: Thu, 04 Mar 2021 14:04:13 +0100 Subject: [glance] Doc about swift as backend Message-ID: <383939b49fad248bba6eee2d248e67c2@gsic.uva.es> Hi everyone, Are there any specific documentation about how to configure Swift as backend of Glance service? I've found this: https://docs.openstack.org/glance/latest/configuration/configuring.html but I'm not sure what are the options I need. Firstly, I follow the installation guides of glance: https://docs.openstack.org/glance/latest/install/install-ubuntu.html and I have now file as store but I'rather change it. 
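For reference, the Swift driver comes from glance_store, and moving off the file store is mostly a matter of something along these lines in glance-api.conf plus a separate credentials file; the names and values below are only placeholders to adapt to your own Keystone setup:

  [glance_store]
  stores = file,http,swift
  default_store = swift
  swift_store_create_container_on_put = True
  swift_store_config_file = /etc/glance/glance-swift.conf
  default_swift_reference = ref1

and in /etc/glance/glance-swift.conf:

  [ref1]
  auth_version = 3
  auth_address = http://controller:5000/v3
  user = service:glance
  key = GLANCE_SERVICE_PASSWORD
  user_domain_id = default
  project_domain_id = default

followed by a restart of the glance-api service.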
Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Mar 4 13:29:59 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 04 Mar 2021 14:29:59 +0100 Subject: [neutron] Drivers meeting 05.03.2021 cancelled Message-ID: <11123192.FyhnmifhmU@p1> Hi, Due to lack of agenda let's cancel tomorrow's drivers meeting. Have a great weekend and see You on the meeting next week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From meberhardt at unl.edu.ar Thu Mar 4 15:09:37 2021 From: meberhardt at unl.edu.ar (meberhardt at unl.edu.ar) Date: Thu, 04 Mar 2021 15:09:37 +0000 Subject: [Keystone][Oslo] Policy problems Message-ID: <20210304150937.Horde.HUlWGgiGudMxr39pvoxGJzT@webmail.unl.edu.ar> Hi, my installation complains about deprecated policies and throw errors when I try to run spacific commands in cli (list projects or list users, for example). "You are not authorized to perform the requested action: identity:list_users." I tried to fix this by upgrading the keystone policies using oslopolicy-policy-generator and oslopolicy-policy-upgrade. Just found two places where I have those keystone policy files in my sistem: /etc/openstack_dashboard/keystone_policy.json and cd /lib/python3.6/site-packages/openstack_auth/tests/conf/keystone_policy.json I regenerated & upgraded it but Keystone still complains about the old polices. ¿Where are it placed? ¿How I shoud fix it? OS: CentOS 8 Openstack version: Ussuri Manual installation Thanks, Matias /* --------------------------------------------------------------- */ /* Matías A. Eberhardt */ /* */ /* Centro de Telemática */ /* Secretaría General */ /* UNIVERSIDAD NACIONAL DEL LITORAL */ /* Pje. Martínez 2652 - S3002AAB Santa Fe - Argentina */ /* tel +54(342)455-4245 - FAX +54(342)457-1240 */ /* --------------------------------------------------------------- */ From cjeanner at redhat.com Thu Mar 4 16:16:08 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 4 Mar 2021 17:16:08 +0100 Subject: [TripleO][Validation] Validation CLI simplification In-Reply-To: References: Message-ID: <01e89e71-11ec-2af2-0b66-e14d246fd737@redhat.com> Hello there, On 3/3/21 2:13 PM, Mathieu Bultel wrote: > Thank you Marios for the response. > > On Wed, Mar 3, 2021 at 1:45 PM Marios Andreou > wrote: > > > > On Wed, Mar 3, 2021 at 12:51 PM Mathieu Bultel > wrote: > > Hi TripleO Folks, > > I'm raising this topic to the ML because it appears we have some > divergence regarding some design around the way the Validations > should be used with and without TripleO and I wanted to have a > larger audience, in particular PTL and core thoughts around this > topic. > > The current situation is: > We have an openstack tripleo validator set of sub commands to > handle Validation (run, list ...). > The CLI validation is taking several parameters as an entry > point and in particular the stack/plan, Openstack authentication > and static inventory file. > > By asking the stack/plan name, the CLI is trying to verify and > understand if the plan or the stack is valid, if the Overcloud > exists somewhere in the cloud before passing that to the > tripleo-ansible-inventory script and trying to generate a static > inventory file in regard to what --plan or stack has been passed. 
> > > > Sorry if silly question but, can't we just make 'validate the stack > status' as one of the validations? In fact you already have > something like that > there https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/roles/stack_health/tasks/main.yml#L2 > > . Then only this validation will require the stack name passed in > instead of on every validation run. > > BTW as an aside we should probably remove 'plan' from that code > altogether given the recent 'remove swift and overcloud plan' work > from ramishra/cloudnull and > co @ https://review.opendev.org/q/topic:%22env_merging%22+(status:open%20OR%20status:merged) > > > Not exactly, the main goal of checking if the --stack/plan value is > correct and if the stack provided exists and is right is for getting a > valid Ansible inventory to execute the Validations. > Meaning that all the extra checks in the code in [1] is made for > generating the inventory, which imho should not belong to the Validation > CLI but to something else, and Validation should consider that the > --inventory file is correct (because of the reasons mentioned earlier). well... It actually isn't part of the validation CLI, it's part of the plugin for tripleoclient wrapping the actual validation library... Sooo... That's a usage we'd intend to get, isn't it? All the "bad things" are on the tripleo side, NOT the VF cli/code/lib/content. Cheers, C. > >   > > The code is mainly here: [1]. > > This behavior implies several constraints: > * Validation CLI needs Openstack authentication in order to do > those checks > * It introduces some complexity in the Validation code part: > querying Heat to get the plan name to be sure the name provided > is correct, get the status of the stack... In case of Standalone > deployment, it adds more complexity then. > * This code is only valid for "standard" deployments and usage > meaning it doesn't work for Standalone, for some Upgrade and FFU > stages and needs to be bypassed for pre-undercloud deployment.  > * We hit several blockers around this part of code. > > My proposal is the following: > > Since we are thinking of the future of Validation and we want > something more robust, stronger, simpler, usable and efficient, > I propose to get rid of the plan/stack and authentication > functionalities in the Validation code, and only ask for a valid > inventory provided by the user. > I propose as well to create a new entry point in the TripleO CLI > to generate a static inventory such as: > openstack tripleo inventory generate --output-file my-inv.yaml > and then: > openstack tripleo validator run --validation my-validation > --inventory my-inv.yaml > > By doing that, I think we gain a lot in simplification, it's > more robust, and Validation will only do what it aims for: wrapp > Ansible execution to provide better logging information and history. > > The main concerns about this approach is that the user will have > to provide a valid inventory to the Validation CLI. > I understand the point of view of getting something fully > autonomous, and the way of just kicking *one* command and the > Validation can be *magically* executed against your cloud, but I > think the less complex the Validation code is, the more robust, > stable and usable it will be. > > Deferring a specific entry point for the inventory, which is a > key part of post deployment action, seems something more clear > and robust as well. 
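To make the proposed flow concrete with the pieces that exist today (the stack name and ssh user below are the usual defaults, adjust as needed), the static inventory can be produced by the current script and then handed to the validator, roughly:

  tripleo-ansible-inventory --stack overcloud \
      --ansible_ssh_user heat-admin \
      --static-yaml-inventory my-inv.yaml
  openstack tripleo validator run --validation my-validation --inventory my-inv.yaml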
> This part of code could be shared and used for any other usages > instead of calling the inventory script stored into > tripleo-validations. It could then use the tripleo-common > inventory library directly with tripleoclient, instead of > calling from client -> tripleo-validations/scripts -> query > tripleo-common inventory library. > > I know it changes a little bit the usage (adding one command > line in the execution process for getting a valid inventory) but > it's going in a less buggy and racy direction. > And the inventory should be generated only once, or at least at > any big major cloud change. > > So, I'm glad to get your thoughts on that topic and your overall > views around this topic. > > > > The proposal sounds sane to me, but just to be clear by > "authentication functionalities" are you referring specifically to > the '--ssh-user' argument > (https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/tripleo_validations/tripleo_validator.py#L243 > )?  > i.e. we will already have that in the generated static inventory so > no need to have in on the CLI?  > > The authentication in the current CLI is only needed to get the stack > output in order to generate the inventory. > If the user provides his inventory, no authentication, no heat stack > check and then Validation can run everywhere on every stage of a TripleO > deployment (early without any TripleO bits, or in the middle of LEAP > Upgrade for example). > > > If the only cost is that we have to have an extra step for > generating the inventory then IMO it is worth doing. I would however > be interested to hear from those that are objecting to the proposal > about why it is a bad idea ;) since you said there has been a > divergence in opinions over the design  > > regards, marios > >   > > > Thanks, > Mathieu > > [1] https://github.com/openstack/tripleo-validations/blob/master/tripleo_validations/tripleo_validator.py#L338-L382 > > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From E.Panter at mittwald.de Thu Mar 4 16:43:38 2021 From: E.Panter at mittwald.de (Erik Panter) Date: Thu, 4 Mar 2021 16:43:38 +0000 Subject: [kolla-ansible] partial upgrades / mixed releases in deployments Message-ID: Hi, We are currently preparing to upgrade a kolla-ansible deployed OpenStack cluster and were wondering if it is possible to upgrade individual services independently of each other, for example to upgrade one service at a time to Ussuri while still using kolla-ansible to deploy and reconfigure the Train versions of the other services. Our idea was that either the Ussuri release of kolla-ansible is used to deploy Train and Ussuri services (with properly migrated configuration), or that two different releases and configurations are used for the two sets of services in the same deployment. Does anyone have experience if this is practical or even possible? Thank you in advance, Erik _____ Erik Panter Systementwickler | Infrastruktur Mittwald CM Service GmbH & Co. 
KG Königsberger Straße 4-6 32339 Espelkamp Tel.: 05772 / 293-900 Fax: 05772 / 293-333 Mobil: 0151 / 12345678 e.panter at mittwald.de https://www.mittwald.de Geschäftsführer: Robert Meyer, Florian Jürgens St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. From ikatzir at infinidat.com Thu Mar 4 17:56:00 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Thu, 4 Mar 2021 19:56:00 +0200 Subject: [ironic] How to move node from active state to manageable Message-ID: <54186D58-DF4C-4E1C-BCEA-D19EF3963215@infinidat.com> Hello Forum, I have an overcloud that gone bad and I am trying to re-deploy it, Running rhos16.1 with one director and two overcloud nodes (compute and controller) I have re-installed undercloud and having both nodes in an active provisioning state. Do I need to run introspection again? Here is the outputted for baremetal node list: (undercloud) [stack at interop010 ~]$ openstack baremetal node list +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | c7bf16b7-eb3c-4022-88de-7c5a78cda174 | power on | active | False | | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | 99223f65-6985-4815-92ff-e19a28c2aab1 | power on | active | False | +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ When I want to move each node from active > manage I get an error: (undercloud) [stack at interop010 ~]$ openstack baremetal node manage 4b02703a-f765-4ebb-85ed-75e88b4cbea5 The requested action "manage" can not be performed on node "4b02703a-f765-4ebb-85ed-75e88b4cbea5" while it is in state "active". (HTTP 400) How do I get to a state which is ready for deployment (available) ? Thanks, Igal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay.faulkner at verizonmedia.com Thu Mar 4 18:12:33 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Thu, 4 Mar 2021 10:12:33 -0800 Subject: [E] [ironic] How to move node from active state to manageable In-Reply-To: <54186D58-DF4C-4E1C-BCEA-D19EF3963215@infinidat.com> References: <54186D58-DF4C-4E1C-BCEA-D19EF3963215@infinidat.com> Message-ID: When a node is active with an instance UUID set, that generally indicates a nova instance (with that UUID) is provisioned onto the node. Nodes that are provisioned (active) are not able to be moved to manageable state. If you want to reprovision these nodes, you'll want to delete the associated instances from Nova (openstack server delete instanceUUID), and after they complete a cleaning cycle they'll return to available. Good luck, Jay Faulkner On Thu, Mar 4, 2021 at 10:01 AM Igal Katzir wrote: > Hello Forum, > > I have an overcloud that gone bad and I am trying to re-deploy it, Running > rhos16.1 with one director and two overcloud nodes (compute and controller) > I have re-installed undercloud and having both nodes in an *active * > provisioning state. 
> Do I need to run introspection again? > Here is the outputted for baremetal node list: > (undercloud) [stack at interop010 ~]$ openstack baremetal node list > > +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ > | UUID | Name | > Instance UUID | Power State | Provisioning State | > Maintenance | > > +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ > | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | > c7bf16b7-eb3c-4022-88de-7c5a78cda174 | power on | active | > False | > | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | > 99223f65-6985-4815-92ff-e19a28c2aab1 | power on | active | > False | > > +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ > When I want to move each node from active > manage I get an error: > (undercloud) [stack at interop010 ~]$ openstack baremetal node manage > 4b02703a-f765-4ebb-85ed-75e88b4cbea5 > The requested action "manage" can not be performed on node > "4b02703a-f765-4ebb-85ed-75e88b4cbea5" while it is in state "active". (HTTP > 400) > > How do I get to a state which is ready for deployment (available) ? > > Thanks, > Igal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu Mar 4 19:05:01 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 4 Mar 2021 11:05:01 -0800 Subject: [docs] Project guides in PDF format In-Reply-To: References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> Message-ID: Peter, Feel free to message me on IRC (johnsom) if you run into questions about enabling the PDF docs for your projects. I did the work for Octavia so might have some answers. Michael On Thu, Mar 4, 2021 at 12:36 AM Bogdan Dobrelya wrote: > > On 3/3/21 9:30 PM, Jeremy Stanley wrote: > > On 2021-03-03 15:05:10 -0500 (-0500), Peter Matulis wrote: > > [...] > >> How do I get a download PDF link like what is available in the > >> published pages of the Nova project? Where is that documented? > > On each project's documentation page, there is a "Download PDF" button, > at the top, near to the "Report a Bug". > > >> > >> In short, yes, I am interested in having downloadable PDFs for the > >> projects that I maintain: > >> > >> https://opendev.org/openstack/charm-guide > >> https://opendev.org/openstack/charm-deployment-guide > > > > The official goal document is still available here: > > > > https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html > > > > Some technical detail can also be found in the earlier docs spec: > > > > https://specs.openstack.org/openstack/docs-specs/specs/ocata/build-pdf-from-rst-guides.html > > > > A bit of spelunking in Git history turns up, for example, this > > change implementing PDF generation for openstack/ironic (you can > > find plenty more if you hunt): > > > > https://review.opendev.org/680585 > > > > I expect you would just do something similar to that. If memory > > serves (it's been a couple years now), each project hit slightly > > different challenges as no two bodies of documentation are every > > quite the same. You'll likely have to dig deep occasionally in > > Sphinx and LaTeX examples to iron things out. 
> > > > One thing which would have been nice as an output of that cycle goal > > was if the PTI section for documentation was updated with related > > technical guidance on building PDFs, but it's rather lacking in that > > department still: > > > > https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation > > > > If you can come up with a succinct summary for what's needed, I > > expect adding it there would be really useful to others too. > > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > From tcr1br24 at gmail.com Thu Mar 4 09:26:52 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Thu, 4 Mar 2021 17:26:52 +0800 Subject: [Consultation] SFC issue Message-ID: Dear Sir, We are testing a simple SFC in OpenStack (Stein) + OpenDaylight (Neon) + Open vSwitch (v2.11.1). It's an all-in-one deployment. We have read the document: https://readthedocs.org/projects/odl-sfc/downloads/pdf/latest/ and https://github.com/opnfv/sfc/blob/master/docs/release/scenarios/os-odl-sfc-noha/scenario.description.rst [image: image.png] Our SFC topology: [image: enter image description here] All on the same compute node. Build SFC through API: 1. openstack sfc flow classifier create --source-ip-prefix 10.20.0.0/24 --logical-source-port p0 FC1 2. openstack sfc port pair create --description "Firewall SF instance 1" --ingress p1 --egress p1 --service-function-parameters correlation=None PP1 3. openstack sfc port pair group create --port-pair PP1 PPG1 4. openstack sfc port chain create --port-pair-group PPG1 --flow-classifier FC1 --chain-parameters correlation=nsh PC1 Ping from client to server, but packet did not pass through firewall,open vswitch log show: [image: enter image description here] Flow table: [image: enter image description here] trace flow: [image: enter image description here] Is there something wrong with the OpenStack instructions? It seems SFC proxy not work or there may be some bugs in "networking-sfc"? Thanks! Sincerely, Jhen-Hao >From NTU CSIE [image: Mailtrack] Sender notified by Mailtrack 03/04/21, 05:26:34 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 29563 bytes Desc: not available URL: From emiller at genesishosting.com Thu Mar 4 19:50:52 2021 From: emiller at genesishosting.com (Eric K. Miller) Date: Thu, 4 Mar 2021 13:50:52 -0600 Subject: [kolla][kolla-ansible] distro mixing on hosts In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA048DFF30@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA048DFF3D@gmsxchsvr01.thecreation.com> > While this is possible, it is not something we "support" upstream. We > already have many combinations of OS distro and binary/source. I know > that people have tried it in the past, and while it should mostly just > work, occasionally it causes issues. A classic issue would be using an > ansible fact about the host OS to infer something about the container. > While we'll likely accept reasonable fixes for these issues upstream, > mixing host and container OS may cause people to pay less attention to > bugs raised. Understood. If we run into issues, I'll report regardless of whether there is attention, just so someone knows what was tried, and what failed. > If you want to migrate from CentOS 7 to Debian, I would suggest > keeping the host and container OS the same, by basing the > kolla_base_distro variable on groups. 
Of course this will also be an > untested configuration, and I suggest that you test it thoroughly, in > particular things like migration between different versions of > libvirt/qemu. Ah, that's a good idea, setting kolla_base_distro in groups as opposed to globally. Thanks! If we can do that, then we'll definitely keep both host and container OS the same. We're testing in a VM environment first, so will try to identify problems before we have to do this on live systems. Thanks again for the quick responses as always Mark! Eric From masayuki.igawa at gmail.com Thu Mar 4 23:09:05 2021 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Fri, 05 Mar 2021 08:09:05 +0900 Subject: [qa][election][ptl] PTL non-candidacy Message-ID: Hello all, I will not be running again for the QA PTL for the Xena cycle because I believe rotating leadership for open source projects is a good thing basically, and I will also have very limited bandwidth for the cycle. Therefore it would be best if someone else runs for QA PTL for the cycle. But I'll be around and continue to involve in OpenStack though it'll be downstream mostly. So, feel free to ping me if you want. Best Regards, -- Masayuki Igawa From rosmaita.fossdev at gmail.com Fri Mar 5 02:51:46 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 4 Mar 2021 21:51:46 -0500 Subject: [cinder] final patches for wallaby os-brick release need reviews Message-ID: <48e2a89a-383f-3ee8-66ce-ab80a92352be@gmail.com> Hello members of the cinder community: Please direct your attention to the following patches at your earliest convenience (i.e., right now): https://review.opendev.org/c/openstack/os-brick/+/777086 "NVMeOF connector driver connection information compatibility fix" - this fixes the regression in the nvmeof connector - patch has passed CI: Zuul, Mellanox SPDK (to detect the regression), and Kioxia (which uses the mdraid feature) - looks pretty much ready to go https://review.opendev.org/c/openstack/os-brick/+/778810 "Add release note for nvmeof connector" - the title says it all https://review.opendev.org/c/openstack/os-brick/+/778807 "Update requirements for wallaby release" - raises the minimum versions in the various requirements files to reflect what we're actually testing with right now - nothing major because of the adjustments made back in January to deal with the upgraded pip dependency resolver Happy reviewing! brian From mark at stackhpc.com Fri Mar 5 08:49:17 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 5 Mar 2021 08:49:17 +0000 Subject: [kolla-ansible] partial upgrades / mixed releases in deployments In-Reply-To: References: Message-ID: On Thu, 4 Mar 2021 at 16:44, Erik Panter wrote: > > Hi, > > We are currently preparing to upgrade a kolla-ansible deployed > OpenStack cluster and were wondering if it is possible to upgrade > individual services independently of each other, for example to > upgrade one service at a time to Ussuri while still using > kolla-ansible to deploy and reconfigure the Train versions of the > other services. > > Our idea was that either the Ussuri release of kolla-ansible is used > to deploy Train and Ussuri services (with properly migrated > configuration), or that two different releases and configurations are > used for the two sets of services in the same deployment. > > Does anyone have experience if this is practical or even possible? Hi Erik, In general, this might work, however it's not something we test or "support", so it cannot be guaranteed. 
Generally there are not too many changes in how services are deployed from release to release, but there are times when procedures change, configuration changes, or we may introduce incompatibilities between the container images and the Ansible deployment tooling. At runtime, there is also the operation of services in a mixed environment to consider, although stable components and APIs help here. All of that is to say that such a configuration would need testing, and ideally not be in place for a long period of time. One similar case we often have is upgrading a single service, often Magnum, to a newer release than the rest of the cloud. To achieve this we set the magnum_tag variable. Mark > > Thank you in advance, > > Erik > > _____ > > Erik Panter > Systementwickler | Infrastruktur > > > Mittwald CM Service GmbH & Co. KG > Königsberger Straße 4-6 > 32339 Espelkamp > > Tel.: 05772 / 293-900 > Fax: 05772 / 293-333 > Mobil: 0151 / 12345678 > > e.panter at mittwald.de > https://www.mittwald.de > > Geschäftsführer: Robert Meyer, Florian Jürgens > > St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen > Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen > > Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit > gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. > From hberaud at redhat.com Fri Mar 5 09:56:48 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 5 Mar 2021 10:56:48 +0100 Subject: [release] Release countdown for week R-5 Mar 08 - Mar 12 Message-ID: Development Focus ----------------- We are getting close to the end of the Wallaby cycle! Next week on 11 March 2021 is the Wallaby-3 milestone, also known as feature freeze. It's time to wrap up feature work in the services and their client libraries, and defer features that won't make it to the Xena cycle. General Information ------------------- This coming week is the deadline for client libraries: their last feature release needs to happen before "Client library freeze" on 11 March, 2021. Only bugfix releases will be allowed beyond this point. When requesting those library releases, you can also include the stable/wallaby branching request with the review. As an example, see the "branches" section here: https://opendev.org/openstack/releases/src/branch/master/deliverables/pike/os-brick.yaml#n2 11 March, 2021 is also the deadline for feature work in all OpenStack deliverables following the cycle-with-rc model. To help those projects produce a first release candidate in time, only bugfixes should be allowed in the master branch beyond this point. Any feature work past that deadline has to be raised as a Feature Freeze Exception (FFE) and approved by the team PTL. Finally, feature freeze is also the deadline for submitting a first version of your cycle-highlights. Cycle highlights are the raw data that helps shape what is communicated in press releases and other release activity at the end of the cycle, avoiding direct contacts from marketing folks. See https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights for more details. 
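For reference, the stable/wallaby branching request mentioned above amounts to a short "branches" section in the deliverable file under openstack/releases; a sketch with placeholder values (version, repo and hash are illustrative):

  releases:
    - version: 1.2.3
      projects:
        - repo: openstack/example-lib
          hash: 0123456789abcdef0123456789abcdef01234567
  branches:
    - name: stable/wallaby
      location: 1.2.3

The branch location points at the same version being released, which is why the release and the branching can go into a single review.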
Upcoming Deadlines & Dates -------------------------- Cross-project events: - Wallaby-3 milestone (feature freeze): 11 March, 2021 (R-5 week) - RC1 deadline: 22 March, 2021 (R-3 week) - Final RC deadline: 5 April, 2021 (R-1 week) - Final Wallaby final release: 14 April, 2021 Project-specific events: - Cinder 3rd Party CI Compliance Checkpoint: 12 March, 2021 (R-5 week) -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Fri Mar 5 11:54:59 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 05 Mar 2021 12:54:59 +0100 Subject: [cinder] final patches for wallaby os-brick release need reviews In-Reply-To: <48e2a89a-383f-3ee8-66ce-ab80a92352be@gmail.com> References: <48e2a89a-383f-3ee8-66ce-ab80a92352be@gmail.com> Message-ID: <3632106.q0ZmV6gNhb@whitebase.usersys.redhat.com> On Friday, 5 March 2021 03:51:46 CET Brian Rosmaita wrote: > Hello members of the cinder community: > > Please direct your attention to the following patches at your earliest > convenience (i.e., right now): > > https://review.opendev.org/c/openstack/os-brick/+/777086 > "NVMeOF connector driver connection information compatibility fix" > - this fixes the regression in the nvmeof connector > - patch has passed CI: Zuul, Mellanox SPDK (to detect the regression), > and Kioxia (which uses the mdraid feature) > - looks pretty much ready to go > > https://review.opendev.org/c/openstack/os-brick/+/778810 > "Add release note for nvmeof connector" > - the title says it all > > https://review.opendev.org/c/openstack/os-brick/+/778807 > "Update requirements for wallaby release" > - raises the minimum versions in the various requirements files to > reflect what we're actually testing with right now > - nothing major because of the adjustments made back in January to deal > with the upgraded pip dependency resolver Maybe also: https://review.opendev.org/c/openstack/os-brick/+/775545 "Avoid unhandled exceptions during connecting to iSCSI portals" which may make the code more robust? Ciao -- Luigi From rosmaita.fossdev at gmail.com Fri Mar 5 14:31:58 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 5 Mar 2021 09:31:58 -0500 Subject: [glance] Doc about swift as backend In-Reply-To: <383939b49fad248bba6eee2d248e67c2@gsic.uva.es> References: <383939b49fad248bba6eee2d248e67c2@gsic.uva.es> Message-ID: On 3/4/21 8:04 AM, Cristina Mayo Sarmiento wrote: > Hi everyone, > > Are there any specific documentation about how to configure Swift as > backend of Glance service? 
I've found this: > https://docs.openstack.org/glance/latest/configuration/configuring.html > but I'm not sure what are the options I need. Firstly, I follow the > installation guides of glance: > https://docs.openstack.org/glance/latest/install/install-ubuntu.htmland > I have now file as store but I'rather change it. The only thing I'm aware of is the "configuring swift" section of that doc you already found: https://docs.openstack.org/glance/latest/configuration/configuring.html#configuring-the-swift-storage-backend The first decision you have to make is whether to use single tenant (all images stored in an account "owned" by glance) or multi-tenant (the data for an image owned by tenant T is stored in T's account in swift). There are pros and cons to each; you may want to add [ops] to your subject and see if other operators can give you some advice. They may also know of some other helpful documentation. > > Thanks! From donny at fortnebula.com Fri Mar 5 15:23:59 2021 From: donny at fortnebula.com (Donny Davis) Date: Fri, 5 Mar 2021 10:23:59 -0500 Subject: [glance] Doc about swift as backend In-Reply-To: References: <383939b49fad248bba6eee2d248e67c2@gsic.uva.es> Message-ID: I would say that if you're building a public facing service the multi-tenant approach makes sense. If this is internal then a single tenant is the way to go IMO. On Fri, Mar 5, 2021 at 9:35 AM Brian Rosmaita wrote: > On 3/4/21 8:04 AM, Cristina Mayo Sarmiento wrote: > > Hi everyone, > > > > Are there any specific documentation about how to configure Swift as > > backend of Glance service? I've found this: > > https://docs.openstack.org/glance/latest/configuration/configuring.html > > but I'm not sure what are the options I need. Firstly, I follow the > > installation guides of glance: > > https://docs.openstack.org/glance/latest/install/install-ubuntu.htmland > > I have now file as store but I'rather change it. > > The only thing I'm aware of is the "configuring swift" section of that > doc you already found: > > https://docs.openstack.org/glance/latest/configuration/configuring.html#configuring-the-swift-storage-backend > > The first decision you have to make is whether to use single tenant (all > images stored in an account "owned" by glance) or multi-tenant (the data > for an image owned by tenant T is stored in T's account in swift). > > There are pros and cons to each; you may want to add [ops] to your > subject and see if other operators can give you some advice. They may > also know of some other helpful documentation. > > > > > > Thanks! > > > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Mar 5 15:37:02 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 5 Mar 2021 17:37:02 +0200 Subject: [TripleO] Wallaby cycle highlights Message-ID: Hello all o/ Before you have a heart attack ;) Wallaby is still ~1 month away (and more like ~1.5 months+ for TripleO as we always trail the release) :) As mentioned in the last tripleo irc meeting [1] the deadline for wallaby cycle highlights is next Friday 12th March [2]. I had a go at and posted the proposal to https://review.opendev.org/c/openstack/releases/+/778971 . 
Please comment on the review or here if you want to update the wording and especially if I have forgotten to include something please accept my apologies if you let me know I can fix it :) thanks, marios [1] http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-03-02-14.00.log.html#l-176 [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020714.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Fri Mar 5 16:26:19 2021 From: melwittt at gmail.com (melanie witt) Date: Fri, 5 Mar 2021 08:26:19 -0800 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? Message-ID: Hello all, I'm seeking input from the neutron and nova teams regarding policy enforcement for allowing attachment to external networks. Details below. Recently we've been looking at an issue that was reported quite a long time ago (2017) [1] where we have a policy check in nova-compute that controls whether to allow users to attach an external network to their instances. This has historically been a pain point for operators as (1) it goes against convention of having policy checks in nova-api only and (2) setting the policy to anything other than the default requires deploying a policy file change to all of the compute hosts in the deployment. The launchpad bug report mentions neutron refactoring work that was happening at the time, which was thought might make the 'network:attach_external_network' policy check on the nova side redundant. Years have passed since then and customers are still running into this problem, so we are thinking, can this policy check be removed on the nova-compute side now? I did a local test with devstack to verify what the behavior is if we were to remove the 'network:attach_external_network' policy check entirely [2] and found that neutron appears to properly enforce permission to attach to external networks itself. It appears that the enforcement on the neutron side makes the nova policy check redundant. When I tried to boot an instance to attach to an external network, neutron API returned the following: INFO neutron.pecan_wsgi.hooks.translation [req-58fdb103-cd20-48c9-b73b-c9074061998c req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create port on this network [...] INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 Can anyone from the neutron team confirm whether it would be OK for us to remove our nova-compute policy check for external network attach permission and let neutron take care of the check? And on the nova side, I assume we would need a deprecation cycle before removing the 'network:attach_external_network' policy. If we can get confirmation from the neutron team, is anyone opposed to the idea of deprecating the 'network:attach_external_network' policy in the Wallaby cycle, to be removed in the Xena release? I would appreciate your thoughts. Cheers, -melanie [1] https://bugs.launchpad.net/nova/+bug/1675486 [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 From ankit at aptira.com Fri Mar 5 05:51:03 2021 From: ankit at aptira.com (Ankit Goel) Date: Fri, 5 Mar 2021 05:51:03 +0000 Subject: Need help to get rally test suite In-Reply-To: References: Message-ID: Oh ok. Thanks Jean. I got it now. 
Regards, Ankit Goel From: Andrey Kurilin Sent: 04 March 2021 15:27 To: Taltavull Jean-Francois Cc: Ankit Goel ; openstack-dev at lists.openstack.org Subject: Re: Need help to get rally test suite Hi! Rally plugins for OpenStack platform moved under separate project&repository. See https://github.com/openstack/rally/blob/master/CHANGELOG.rst#100---2018-06-20 for more details. чт, 4 мар. 2021 г. в 11:14, Taltavull Jean-Francois >: Hello Ankit, Take a look here : https://opendev.org/openstack/rally-openstack/src/branch/master/samples/tasks/scenarios/ Regards, Jean-Francois From: Ankit Goel > Sent: mercredi, 3 mars 2021 19:25 To: openstack-dev at lists.openstack.org Subject: Need help to get rally test suite Hello Experts, Need some help on rally. I have installed Openstack rally on centos. Now I need to run some benchmarking test suite. I need to know is there any pre-existing test suite which covers majority of test cases for Openstack control plane. So that I can just run that test suite. I remember earlier we use to get the samples under rally/samples/tasks/scenarios for each component but now I am not seeing it but I am seeing some dummy files. So from where I can get those samples json/yaml files which is required to run the rally tests. Awaiting for your response. Thanks in Advance Regards, Ankit Goel -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Mar 5 17:27:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 5 Mar 2021 12:27:51 -0500 Subject: [cinder] final patches for wallaby os-brick release need reviews In-Reply-To: <3632106.q0ZmV6gNhb@whitebase.usersys.redhat.com> References: <48e2a89a-383f-3ee8-66ce-ab80a92352be@gmail.com> <3632106.q0ZmV6gNhb@whitebase.usersys.redhat.com> Message-ID: On 3/5/21 6:54 AM, Luigi Toscano wrote: > On Friday, 5 March 2021 03:51:46 CET Brian Rosmaita wrote: >> Hello members of the cinder community: >> >> Please direct your attention to the following patches at your earliest >> convenience (i.e., right now): >> >> https://review.opendev.org/c/openstack/os-brick/+/777086 >> "NVMeOF connector driver connection information compatibility fix" >> - this fixes the regression in the nvmeof connector >> - patch has passed CI: Zuul, Mellanox SPDK (to detect the regression), >> and Kioxia (which uses the mdraid feature) >> - looks pretty much ready to go >> >> https://review.opendev.org/c/openstack/os-brick/+/778810 >> "Add release note for nvmeof connector" >> - the title says it all >> >> https://review.opendev.org/c/openstack/os-brick/+/778807 >> "Update requirements for wallaby release" >> - raises the minimum versions in the various requirements files to >> reflect what we're actually testing with right now >> - nothing major because of the adjustments made back in January to deal >> with the upgraded pip dependency resolver > > Maybe also: > https://review.opendev.org/c/openstack/os-brick/+/775545 > "Avoid unhandled exceptions during connecting to iSCSI portals" > > which may make the code more robust? Thanks for pointing this one out, and thanks to Takashi for being so responsive to the reviews and making quick revisions. It's in the gate now. 
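Coming back to the Rally thread above, a sketch of how those sample files are typically consumed once a Rally deployment or environment has been configured (the scenario file name is just one of the classic samples and may differ between releases):

  git clone https://opendev.org/openstack/rally-openstack
  rally task start rally-openstack/samples/tasks/scenarios/nova/boot-and-delete.json

rally task start accepts any of the JSON or YAML files under samples/tasks/scenarios/, so a broader control-plane run is usually assembled by combining several of them into one task file.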
> > Ciao > From rlandy at redhat.com Fri Mar 5 17:50:50 2021 From: rlandy at redhat.com (Ronelle Landy) Date: Fri, 5 Mar 2021 12:50:50 -0500 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) Message-ID: Hello All, Just a reminder that we will be starting to implement steps to migrate from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. The plan is outlined in: https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday In summary, on Sunday, we plan to: - Move the master integration line for promotions to build containers and images on centos-8 stream nodes - Change the release files to bring down centos-8 stream repos for use in test jobs (test jobs will still start on centos-8 nodes - changing this nodeset will happen later) - Image build and container build check jobs will be moved to non-voting during this transition. We have already run all the test jobs in RDO with centos-8 stream content running on centos-8 nodes to prequalify this transition. We will update this list with status as we go forward with next steps. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Mar 5 21:59:54 2021 From: amy at demarco.com (Amy) Date: Fri, 5 Mar 2021 15:59:54 -0600 Subject: [openstack-community] Keystone deprecated policies problem In-Reply-To: <20210305211222.Horde.HHcZNM5k3FoOKQXguoLtJMA@webmail.unl.edu.ar> References: <20210305211222.Horde.HHcZNM5k3FoOKQXguoLtJMA@webmail.unl.edu.ar> Message-ID: Adding the OpenStack discuss list. Thanks, Amy (spotz) > On Mar 5, 2021, at 3:19 PM, meberhardt at unl.edu.ar wrote: > > Hi, > > my installation complains about deprecated policies and throws errors when I try to run specific commands in the CLI as the admin user (list projects or list users, for example). > > "You are not authorized to perform the requested action: identity:list_users." > > I tried to fix this by upgrading the keystone policies using oslopolicy-policy-generator and oslopolicy-policy-upgrade. I only found two places with those keystone policy files on my system: /etc/openstack_dashboard/keystone_policy.json and /lib/python3.6/site-packages/openstack_auth/tests/conf/keystone_policy.json > > I regenerated & upgraded them, but Keystone still complains about the old policies. Where are they placed? How should I fix this? > > OS: CentOS 8 > Openstack version: Ussuri > Manual installation > > Thanks, > > Matias > > /* --------------------------------------------------------------- */ > /* Matías A. Eberhardt */ > /* */ > /* Centro de Telemática */ > /* Secretaría General */ > /* UNIVERSIDAD NACIONAL DEL LITORAL */ > /* Pje.
Martínez 2652 - S3002AAB Santa Fe - Argentina */ > /* tel +54(342)455-4245 - FAX +54(342)457-1240 */ > /* --------------------------------------------------------------- */ > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community From victoria at vmartinezdelacruz.com Fri Mar 5 22:13:03 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Fri, 5 Mar 2021 23:13:03 +0100 Subject: [manila] Wallaby Collab Review - CephFS drivers updates (Mar 8th) Message-ID: Hi everyone, Next Monday (March 8th) we will hold a Manila collaborative review session in which we'll go through the code and review the latest changes introduced in the CephFS drivers [0][1] This meeting is scheduled to last one hour, starting at 5.00pm UTC. It might extend a bit, but the most important should be covered within the hour. Meeting notes and relevant links are available in [2] Your feedback will be truly appreciated. Cheers, V [0] https://review.opendev.org/q/topic:%22bp%252Fupdate-cephfs-drivers%22+(status:open%20OR%20status:merged) [1] https://review.opendev.org/q/topic:%22bp%252Fcreate-share-from-snapshot-cephfs%22+(status:open%20OR%20status:merged) [2] https://etherpad.opendev.org/p/update-cephfs-drivers-collab-review -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Mar 5 22:13:09 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 5 Mar 2021 14:13:09 -0800 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: Octavia doesn't really care what the ML2 is in neutron, there are no dependencies on it as we use stable neutron APIs. Devstack however is creating a port on the management network for the controller processes. Octavia has a function hook to allow the SDN providers to handle creating access to the management network. When OVN moved into neutron this hook was implemented in neutron for linux bridge, OVS, and OVN. I should also note that we have been running Octavia gate jobs, on neutron with OVN, since before the migration of OVN into neutron, so I would not expect any issues from the proposed change to the default ML2 in neutron. Michael On Tue, Mar 2, 2021 at 7:05 PM Lingxian Kong wrote: > > Hi, > > Thanks for all your hard work on this. > > I'm wondering is there any doc proposed for devstack to tell people who are not interested in OVN to keep the current devstack behaviour? I have a feeling that using OVN as default Neutron driver would break the CI jobs for some projects like Octavia, Trove, etc. which rely on ovs port for the set up. > > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi wrote: >> >> On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: >> > >> > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: >> > > Hi all, >> > > >> > > As part of the Victoria PTG [0] the Neutron community agreed upon >> > > switching the default backend in Devstack to OVN. 
A lot of work has >> > > been done since, from porting the OVN devstack module to the DevStack >> > > tree, refactoring the DevStack module to install OVN from distro >> > > packages, implementing features to close the parity gap with ML2/OVS, >> > > fixing issues with tests and distros, etc... >> > > >> > > We are now very close to being able to make the switch and we've >> > > thought about sending this email to the broader community to raise >> > > awareness about this change as well as bring more attention to the >> > > patches that are current on review. >> > > >> > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is >> > > discontinued and/or not supported anymore. The ML2/OVS driver is still >> > > going to be developed and maintained by the upstream Neutron >> > > community. >> > can we ensure that this does not happen until the xena release. >> > in generall i think its ok to change the default but not this late in the cycle. >> > i would also like to ensure we keep at least one non ovn based multi node job in >> > nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. >> > right now the event/neutorn interaction is not the same during move operations. >> > > >> > > Below is a e per project explanation with relevant links and issues of >> > > where we stand with this work right now: >> > > >> > > * Keystone: >> > > >> > > Everything should be good for Keystone, the gate is happy with the >> > > changes. Here is the test patch: >> > > https://review.opendev.org/c/openstack/keystone/+/777963 >> > > >> > > * Glance: >> > > >> > > Everything should be good for Glace, the gate is happy with the >> > > changes. Here is the test patch: >> > > https://review.opendev.org/c/openstack/glance/+/748390 >> > > >> > > * Swift: >> > > >> > > Everything should be good for Swift, the gate is happy with the >> > > changes. Here is the test patch: >> > > https://review.opendev.org/c/openstack/swift/+/748403 >> > > >> > > * Ironic: >> > > >> > > Since chainloading iPXE by the OVN built-in DHCP server is work in >> > > progress, we've changed most of the Ironic jobs to explicitly enable >> > > ML2/OVS and everything is merged, so we should be good for Ironic too. >> > > Here is the test patch: >> > > https://review.opendev.org/c/openstack/ironic/+/748405 >> > > >> > > * Cinder: >> > > >> > > Cinder is almost complete. There's one test failure in the >> > > "tempest-slow-py3" job run on the >> > > "test_port_security_macspoofing_port" test. >> > > >> > > This failure is due to a bug in core OVN [1]. This bug has already >> > > been fixed upstream [2] and the fix has been backported down to the >> > > branch-20.03 [3] of the OVN project. However, since we install OVN >> > > from packages we are currently waiting for this fix to be included in >> > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already >> > > contacted the package maintainer which has been very supportive of >> > > this work and will work on the package update, but he maintain a >> > > handful of backports in that package which is not yet included in OVN >> > > 20.03 upstream and he's now working with the core OVN community [4] to >> > > include it first in the branch and then create a new package for it. >> > > Hopefully this will happen soon. 
>> > > >> > > But for now we have a few options moving on with this issue: >> > > >> > > 1- Wait for the new package version >> > > 2- Mark the test as unstable until we get the new package version >> > > 3- Compile OVN from source instead of installing it from packages >> > > (OVN_BUILD_FROM_SOURCE=True in local.conf) >> > i dont think we should default to ovn untill a souce build is not required. >> > compiling form souce while not supper expensice still adds time to the job >> > execution and im not sure we should be paying that cost on every devstack job run. >> > >> > we could maybe compile it once and bake the package into the image or host it on a mirror >> > but i think we should avoid this option if we have alternitives. >> > > >> > > What do you think about it ? >> > > >> > > Here is the test patch for Cinder: >> > > https://review.opendev.org/c/openstack/cinder/+/748227 >> > > >> > > * Nova: >> > > >> > > There are a few patches waiting for review for Nova, which are: >> > > >> > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically >> > > the scripts were trying to stop the Neutron agent (q-agt) process >> > > which is not part of an ML2/OVN deployment. The patch changes the code >> > > to check if that system unit exists before trying to stop it. >> > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 >> > > >> > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change >> > > which can be removed one release cycle after we switch DevStack to >> > > ML2/OVN. Grenade will test updating from the release version to the >> > > master branch but, since the default of the released version is not >> > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job >> > > is not supported. >> > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 >> > > >> > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS >> > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] >> > > therefore we are temporarily enabling ML2/OVS for this job until that >> > > feature lands in core OVN. >> > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 >> > > >> > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these >> > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now >> > > because he feels like a change in the default network driver a few >> > > weeks prior to the upstream code freeze can be concerning. We do not >> > > know yet precisely when we are changing the default due to the current >> > > patches we need to get merged but, if this is a shared feeling among >> > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova >> > > until we get a new release in OpenStack. >> > yep this is still my view. >> > i would suggest we do the work required in the repos but not merge it until the xena release >> > is open. thats technically at RC1 so march 25th >> > i think we can safely do the swich after that but i would not change the defualt in any project >> > before then. >> > > >> > > Here's the test patch for Nova: >> > > https://review.opendev.org/c/openstack/nova/+/776945 >> > > >> > > * DevStack: >> > > >> > > And this is the final patch that will make this all happen: >> > > https://review.opendev.org/c/openstack/devstack/+/735097 >> > > >> > > It changes the default in DevStack from ML2/OVS to ML2/OVN. 
It's been >> > > a long and bumpy road to get to this point and I would like to say >> > > thanks to everyone involved so far and everyone that read the whole >> > > email, please let me know your thoughts. >> > thanks for working on this. >> > > >> > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg >> > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 >> > > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ >> > > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb >> > > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html >> > > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 >> > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html >> > > >> > > Cheers, >> > > Lucas >> >> ++ Thank you indeed for working diligently on this important change. >> >> Please do note that devstack, and the base job that you're modifying >> is used by many other projects besides the ones that you have >> enumerated in the subject line. >> I suggest using [all] as a better subject line indicator to get the >> attention of folks like me who have filters based on the subject line. >> Also, the network substrate is important for the project I help >> maintain: Manila, which provides shared file systems over a network - >> so I followed your lead and submitted a dependent patch. I hope to >> reach out to you in case we see some breakages: >> https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 >> >> > > >> > >> > >> > >> From victoria at vmartinezdelacruz.com Fri Mar 5 22:17:11 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Fri, 5 Mar 2021 23:17:11 +0100 Subject: [manila] Wallaby Collab Review - CephFS drivers updates (Mar 8th) In-Reply-To: References: Message-ID: Small update: the meeting will start at 6.00pm UTC instead of 5.00pm UTC. Thanks, V On Fri, Mar 5, 2021 at 11:13 PM Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Hi everyone, > > Next Monday (March 8th) we will hold a Manila collaborative review session > in which we'll go through the code and review the latest changes introduced > in the CephFS drivers [0][1] > > This meeting is scheduled to last one hour, starting at 5.00pm UTC. It > might extend a bit, but the most important should be covered within the > hour. > > Meeting notes and relevant links are available in [2] > > Your feedback will be truly appreciated. > > Cheers, > > V > > [0] > https://review.opendev.org/q/topic:%22bp%252Fupdate-cephfs-drivers%22+(status:open%20OR%20status:merged) > [1] > https://review.opendev.org/q/topic:%22bp%252Fcreate-share-from-snapshot-cephfs%22+(status:open%20OR%20status:merged) > [2] https://etherpad.opendev.org/p/update-cephfs-drivers-collab-review > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Sat Mar 6 07:37:39 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 06 Mar 2021 08:37:39 +0100 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? In-Reply-To: References: Message-ID: <2609049.HbdjPCY3gI@p1> Hi, Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: > Hello all, > > I'm seeking input from the neutron and nova teams regarding policy > enforcement for allowing attachment to external networks. Details below. 
> > Recently we've been looking at an issue that was reported quite a long > time ago (2017) [1] where we have a policy check in nova-compute that > controls whether to allow users to attach an external network to their > instances. > > This has historically been a pain point for operators as (1) it goes > against convention of having policy checks in nova-api only and (2) > setting the policy to anything other than the default requires deploying > a policy file change to all of the compute hosts in the deployment. > > The launchpad bug report mentions neutron refactoring work that was > happening at the time, which was thought might make the > 'network:attach_external_network' policy check on the nova side redundant. > > Years have passed since then and customers are still running into this > problem, so we are thinking, can this policy check be removed on the > nova-compute side now? > > I did a local test with devstack to verify what the behavior is if we > were to remove the 'network:attach_external_network' policy check > entirely [2] and found that neutron appears to properly enforce > permission to attach to external networks itself. It appears that the > enforcement on the neutron side makes the nova policy check redundant. > > When I tried to boot an instance to attach to an external network, > neutron API returned the following: > > INFO neutron.pecan_wsgi.hooks.translation > [req-58fdb103-cd20-48c9-b73b-c9074061998c > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client > error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create > port on this network > [...] > INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST > /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 I just checked in Neutron code and we don't have any policy rule related directly to the creation of ports on the external network. Probably what You had there is the fact that Your router:external network was owned by other tenant and due to that You wasn't able to create port directly on it. If as an admin You would create external network which would belong to Your tenant, You would be allowed to create port there. > > Can anyone from the neutron team confirm whether it would be OK for us > to remove our nova-compute policy check for external network attach > permission and let neutron take care of the check? I don't know exactly the reasons why it is forbiden on Nova's side but TBH I don't see any reason why we should forbid pluging instances directly to the network marked as router:external=True. > > And on the nova side, I assume we would need a deprecation cycle before > removing the 'network:attach_external_network' policy. If we can get > confirmation from the neutron team, is anyone opposed to the idea of > deprecating the 'network:attach_external_network' policy in the Wallaby > cycle, to be removed in the Xena release? > > I would appreciate your thoughts. > > Cheers, > -melanie > > [1] https://bugs.launchpad.net/nova/+bug/1675486 > [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
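To put that ownership point into CLI terms (all names and values below are illustrative): an admin can create the external network directly in the tenant's own project, and that tenant can then plug instances into it, roughly:

  openstack network create ext-net --external --project demo \
    --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 200
  openstack subnet create ext-subnet --network ext-net --subnet-range 203.0.113.0/24 --gateway 203.0.113.1
  # as the "demo" project, which now owns ext-net:
  openstack server create --image cirros --flavor m1.tiny --network ext-net vm1

Whether that is advisable still depends on the operational concerns raised later in this thread (metadata reachability, floating IP pool consumption, and isolation on provider VLANs).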
URL: From skaplons at redhat.com Sat Mar 6 08:30:58 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 06 Mar 2021 09:30:58 +0100 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <28531574.fdTSbW8Txg@p1> Hi, Dnia piątek, 5 marca 2021 23:13:09 CET Michael Johnson pisze: > Octavia doesn't really care what the ML2 is in neutron, there are no > dependencies on it as we use stable neutron APIs. > > Devstack however is creating a port on the management network for the > controller processes. Octavia has a function hook to allow the SDN > providers to handle creating access to the management network. When > OVN moved into neutron this hook was implemented in neutron for linux > bridge, OVS, and OVN. > > I should also note that we have been running Octavia gate jobs, on > neutron with OVN, since before the migration of OVN into neutron, so I > would not expect any issues from the proposed change to the default > ML2 in neutron. Thx for confirmation that Octavia should be fine with it :) > > Michael > > On Tue, Mar 2, 2021 at 7:05 PM Lingxian Kong wrote: > > Hi, > > > > Thanks for all your hard work on this. > > > > I'm wondering is there any doc proposed for devstack to tell people who > > are not interested in OVN to keep the current devstack behaviour? I have > > a feeling that using OVN as default Neutron driver would break the CI > > jobs for some projects like Octavia, Trove, etc. which rely on ovs port > > for the set up. > > > > --- > > Lingxian Kong > > Senior Cloud Engineer (Catalyst Cloud) > > Trove PTL (OpenStack) > > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi wrote: > >> On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > >> > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > >> > > Hi all, > >> > > > >> > > As part of the Victoria PTG [0] the Neutron community agreed upon > >> > > switching the default backend in Devstack to OVN. A lot of work has > >> > > been done since, from porting the OVN devstack module to the DevStack > >> > > tree, refactoring the DevStack module to install OVN from distro > >> > > packages, implementing features to close the parity gap with ML2/OVS, > >> > > fixing issues with tests and distros, etc... > >> > > > >> > > We are now very close to being able to make the switch and we've > >> > > thought about sending this email to the broader community to raise > >> > > awareness about this change as well as bring more attention to the > >> > > patches that are current on review. > >> > > > >> > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > >> > > discontinued and/or not supported anymore. The ML2/OVS driver is > >> > > still > >> > > going to be developed and maintained by the upstream Neutron > >> > > community. > >> > > >> > can we ensure that this does not happen until the xena release. > >> > in generall i think its ok to change the default but not this late in > >> > the cycle. i would also like to ensure we keep at least one non ovn > >> > based multi node job in nova until > >> > https://review.opendev.org/c/openstack/nova/+/602432 is merged and > >> > possible after. 
right now the event/neutorn interaction is not the > >> > same during move operations.>> > > >> > > Below is a e per project explanation with relevant links and issues > >> > > of > >> > > where we stand with this work right now: > >> > > > >> > > * Keystone: > >> > > > >> > > Everything should be good for Keystone, the gate is happy with the > >> > > changes. Here is the test patch: > >> > > https://review.opendev.org/c/openstack/keystone/+/777963 > >> > > > >> > > * Glance: > >> > > > >> > > Everything should be good for Glace, the gate is happy with the > >> > > changes. Here is the test patch: > >> > > https://review.opendev.org/c/openstack/glance/+/748390 > >> > > > >> > > * Swift: > >> > > > >> > > Everything should be good for Swift, the gate is happy with the > >> > > changes. Here is the test patch: > >> > > https://review.opendev.org/c/openstack/swift/+/748403 > >> > > > >> > > * Ironic: > >> > > > >> > > Since chainloading iPXE by the OVN built-in DHCP server is work in > >> > > progress, we've changed most of the Ironic jobs to explicitly enable > >> > > ML2/OVS and everything is merged, so we should be good for Ironic > >> > > too. > >> > > Here is the test patch: > >> > > https://review.opendev.org/c/openstack/ironic/+/748405 > >> > > > >> > > * Cinder: > >> > > > >> > > Cinder is almost complete. There's one test failure in the > >> > > "tempest-slow-py3" job run on the > >> > > "test_port_security_macspoofing_port" test. > >> > > > >> > > This failure is due to a bug in core OVN [1]. This bug has already > >> > > been fixed upstream [2] and the fix has been backported down to the > >> > > branch-20.03 [3] of the OVN project. However, since we install OVN > >> > > from packages we are currently waiting for this fix to be included in > >> > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > >> > > contacted the package maintainer which has been very supportive of > >> > > this work and will work on the package update, but he maintain a > >> > > handful of backports in that package which is not yet included in OVN > >> > > 20.03 upstream and he's now working with the core OVN community [4] > >> > > to > >> > > include it first in the branch and then create a new package for it. > >> > > Hopefully this will happen soon. > >> > > > >> > > But for now we have a few options moving on with this issue: > >> > > > >> > > 1- Wait for the new package version > >> > > 2- Mark the test as unstable until we get the new package version > >> > > 3- Compile OVN from source instead of installing it from packages > >> > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > >> > > >> > i dont think we should default to ovn untill a souce build is not > >> > required. > >> > compiling form souce while not supper expensice still adds time to the > >> > job > >> > execution and im not sure we should be paying that cost on every > >> > devstack job run. > >> > > >> > we could maybe compile it once and bake the package into the image or > >> > host it on a mirror but i think we should avoid this option if we have > >> > alternitives. > >> > > >> > > What do you think about it ? 
> >> > > > >> > > Here is the test patch for Cinder: > >> > > https://review.opendev.org/c/openstack/cinder/+/748227 > >> > > > >> > > * Nova: > >> > > > >> > > There are a few patches waiting for review for Nova, which are: > >> > > > >> > > 1- Adapting the live migration scripts to work with ML2/OVN: > >> > > Basically > >> > > the scripts were trying to stop the Neutron agent (q-agt) process > >> > > which is not part of an ML2/OVN deployment. The patch changes the > >> > > code > >> > > to check if that system unit exists before trying to stop it. > >> > > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > >> > > > >> > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > >> > > which can be removed one release cycle after we switch DevStack to > >> > > ML2/OVN. Grenade will test updating from the release version to the > >> > > master branch but, since the default of the released version is not > >> > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > >> > > is not supported. > >> > > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > >> > > > >> > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > >> > > minimum bandwidth feature which is not yet supported by ML2/OVN > >> > > [5][6] > >> > > therefore we are temporarily enabling ML2/OVS for this job until that > >> > > feature lands in core OVN. > >> > > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > >> > > > >> > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about > >> > > these > >> > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > >> > > because he feels like a change in the default network driver a few > >> > > weeks prior to the upstream code freeze can be concerning. We do not > >> > > know yet precisely when we are changing the default due to the > >> > > current > >> > > patches we need to get merged but, if this is a shared feeling among > >> > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > >> > > until we get a new release in OpenStack. > >> > > >> > yep this is still my view. > >> > i would suggest we do the work required in the repos but not merge it > >> > until the xena release is open. thats technically at RC1 so march 25th > >> > i think we can safely do the swich after that but i would not change > >> > the defualt in any project before then. > >> > > >> > > Here's the test patch for Nova: > >> > > https://review.opendev.org/c/openstack/nova/+/776945 > >> > > > >> > > * DevStack: > >> > > > >> > > And this is the final patch that will make this all happen: > >> > > https://review.opendev.org/c/openstack/devstack/+/735097 > >> > > > >> > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > >> > > a long and bumpy road to get to this point and I would like to say > >> > > thanks to everyone involved so far and everyone that read the whole > >> > > email, please let me know your thoughts. > >> > > >> > thanks for working on this. 
> >> > > >> > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > >> > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > >> > > [2] > >> > > https://patchwork.ozlabs.org/project/openvswitch/patch/2020031912264 > >> > > 1.473776-1-numans at ovn.org/ [3] > >> > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d > >> > > 8ab39380cb [4] > >> > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050 > >> > > 961.html [5] > >> > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a > >> > > 99fe76a9ee1/gate/post_test_hook.sh#L129-L143 [6] > >> > > https://docs.openstack.org/neutron/latest/ovn/gaps.html > >> > > > >> > > Cheers, > >> > > Lucas > >> > >> ++ Thank you indeed for working diligently on this important change. > >> > >> Please do note that devstack, and the base job that you're modifying > >> is used by many other projects besides the ones that you have > >> enumerated in the subject line. > >> I suggest using [all] as a better subject line indicator to get the > >> attention of folks like me who have filters based on the subject line. > >> Also, the network substrate is important for the project I help > >> maintain: Manila, which provides shared file systems over a network - > >> so I followed your lead and submitted a dependent patch. I hope to > >> reach out to you in case we see some breakages: > >> https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From smooney at redhat.com Sat Mar 6 12:53:02 2021 From: smooney at redhat.com (Sean Mooney) Date: Sat, 06 Mar 2021 12:53:02 +0000 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? In-Reply-To: <2609049.HbdjPCY3gI@p1> References: <2609049.HbdjPCY3gI@p1> Message-ID: On Sat, 2021-03-06 at 08:37 +0100, Slawek Kaplonski wrote: > Hi, > > Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: > > Hello all, > > > > I'm seeking input from the neutron and nova teams regarding policy > > enforcement for allowing attachment to external networks. Details below. > > > > Recently we've been looking at an issue that was reported quite a long > > time ago (2017) [1] where we have a policy check in nova-compute that > > controls whether to allow users to attach an external network to their > > instances. > > > > This has historically been a pain point for operators as (1) it goes > > against convention of having policy checks in nova-api only and (2) > > setting the policy to anything other than the default requires deploying > > a policy file change to all of the compute hosts in the deployment. > > > > The launchpad bug report mentions neutron refactoring work that was > > happening at the time, which was thought might make the > > 'network:attach_external_network' policy check on the nova side redundant. > > > > Years have passed since then and customers are still running into this > > problem, so we are thinking, can this policy check be removed on the > > nova-compute side now? > > > > I did a local test with devstack to verify what the behavior is if we > > were to remove the 'network:attach_external_network' policy check > > entirely [2] and found that neutron appears to properly enforce > > permission to attach to external networks itself. 
It appears that the > > enforcement on the neutron side makes the nova policy check redundant. > > > > When I tried to boot an instance to attach to an external network, > > neutron API returned the following: > > > > INFO neutron.pecan_wsgi.hooks.translation > > [req-58fdb103-cd20-48c9-b73b-c9074061998c > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client > > error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create > > port on this network > > [...] > > INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST > > /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 > > I just checked in Neutron code and we don't have any policy rule related > directly to the creation of ports on the external network. > Probably what You had there is the fact that Your router:external network was > owned by other tenant and due to that You wasn't able to create port directly > on it. If as an admin You would create external network which would belong to > Your tenant, You would be allowed to create port there. > > > > > Can anyone from the neutron team confirm whether it would be OK for us > > to remove our nova-compute policy check for external network attach > > permission and let neutron take care of the check? > > I don't know exactly the reasons why it is forbiden on Nova's side but TBH I > don't see any reason why we should forbid pluging instances directly to the > network marked as router:external=True. i have listed the majority of my consers in https://bugzilla.redhat.com/show_bug.cgi?id=1933047#c6 which is one of the downstream bug related to this. there are a number of issue sthat i was concerd about but tl;dr - booting ip form external network consumes ip from the floating ip subnet withtout using quota - by default neutron upstream and downstream is configured to provide nova metadata api access via the neutron router not the dhcp server so by default the metadata api will not work with external network. that would require neueton to be configre to use the dhcp server for metadta or config driver or else insance wont get ssh keys ingject by cloud init. - there might be security considertaions. typeically external networks are vlan or flat networks and in some cases operators may not want tenats to be able to boot on such networks expsially with vnic-type=driect-physical since that might allow them to violate tenant isolation if the top of rack switch was not configured by a heracical port binding driver to provide adiquite isolation in that case. this is not so much because this is an external network and more a concern anytime you do PF passtough but there may be other implication to allowing this by default. that said if neutron has a way to express policy in this regard nova does not have too. router:external=True is really used to mark a network as providing connectivity such that it can be used for the gateway port of neutron routers. the workaroud that i have come up with currently is to mark the network as shared and then use neturon rbac to only share it with the teant that owns it. i assigning external network to speficic tenat being useful when you want to provde a specific ip allocation pool to them or just a set of ips. i understand that the current motivation for this request is commign form some edge deployments. in general i dont thinkthis would be widely used but for those that need its better ux then marking it as shared. 
> > > > > And on the nova side, I assume we would need a deprecation cycle before > > removing the 'network:attach_external_network' policy. If we can get > > confirmation from the neutron team, is anyone opposed to the idea of > > deprecating the 'network:attach_external_network' policy in the Wallaby > > cycle, to be removed in the Xena release? > > > > I would appreciate your thoughts. > > > > Cheers, > > -melanie > > > > [1] https://bugs.launchpad.net/nova/+bug/1675486 > > [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 > > From skaplons at redhat.com Sat Mar 6 16:00:02 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 06 Mar 2021 17:00:02 +0100 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? In-Reply-To: References: <2609049.HbdjPCY3gI@p1> Message-ID: <4299490.gzMO86gykG@p1> Hi, Dnia sobota, 6 marca 2021 13:53:02 CET Sean Mooney pisze: > On Sat, 2021-03-06 at 08:37 +0100, Slawek Kaplonski wrote: > > Hi, > > > > Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: > > > Hello all, > > > > > > I'm seeking input from the neutron and nova teams regarding policy > > > enforcement for allowing attachment to external networks. Details below. > > > > > > Recently we've been looking at an issue that was reported quite a long > > > time ago (2017) [1] where we have a policy check in nova-compute that > > > controls whether to allow users to attach an external network to their > > > instances. > > > > > > This has historically been a pain point for operators as (1) it goes > > > against convention of having policy checks in nova-api only and (2) > > > setting the policy to anything other than the default requires deploying > > > a policy file change to all of the compute hosts in the deployment. > > > > > > The launchpad bug report mentions neutron refactoring work that was > > > happening at the time, which was thought might make the > > > 'network:attach_external_network' policy check on the nova side > > > redundant. > > > > > > Years have passed since then and customers are still running into this > > > problem, so we are thinking, can this policy check be removed on the > > > nova-compute side now? > > > > > > I did a local test with devstack to verify what the behavior is if we > > > were to remove the 'network:attach_external_network' policy check > > > entirely [2] and found that neutron appears to properly enforce > > > permission to attach to external networks itself. It appears that the > > > enforcement on the neutron side makes the nova policy check redundant. > > > > > > When I tried to boot an instance to attach to an external network, > > > neutron API returned the following: > > > > > > INFO neutron.pecan_wsgi.hooks.translation > > > [req-58fdb103-cd20-48c9-b73b-c9074061998c > > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client > > > error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create > > > port on this network > > > [...] > > > INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c > > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST > > > /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 > > > > I just checked in Neutron code and we don't have any policy rule related > > directly to the creation of ports on the external network. > > Probably what You had there is the fact that Your router:external network > > was owned by other tenant and due to that You wasn't able to create port > > directly on it. 
If as an admin You would create external network which > > would belong to Your tenant, You would be allowed to create port there. > > > > > Can anyone from the neutron team confirm whether it would be OK for us > > > to remove our nova-compute policy check for external network attach > > > permission and let neutron take care of the check? > > > > I don't know exactly the reasons why it is forbiden on Nova's side but TBH > > I don't see any reason why we should forbid pluging instances directly to > > the network marked as router:external=True. > > i have listed the majority of my consers in > https://bugzilla.redhat.com/show_bug.cgi?id=1933047#c6 which is one of the > downstream bug related to this. > there are a number of issue sthat i was concerd about but tl;dr > - booting ip form external network consumes ip from the floating ip subnet > withtout using quota - by default neutron upstream and downstream is > configured to provide nova metadata api access via the neutron router not > the dhcp server so by default the metadata api will not work with external > network. that would require neueton to be configre to use the dhcp server > for metadta or config driver or else insance wont get ssh keys ingject by > cloud init. > - there might be security considertaions. typeically external networks are > vlan or flat networks and in some cases operators may not want tenats to be > able to boot on such networks expsially with vnic-type=driect-physical > since that might allow them to violate tenant isolation if the top of rack > switch was not configured by a heracical port binding driver to provide > adiquite isolation in that case. this is not so much because this is an > external network and more a concern anytime you do PF passtough but there > may be other implication to allowing this by default. that said if neutron > has a way to express policy in this regard nova does not have too. Those are all valid points, true. But TBH, if administrator created such network as pool of FIPs for the user, then users will not be able to plug vms directly to that network as they aren't owners of the network so neutron will forbid that. > > router:external=True is really used to mark a network as providing > connectivity such that it can be used for the gateway port of neutron > routers. the workaroud that i have come up with currently is to mark the > network as shared and then use neturon rbac to only share it with the teant > that owns it. > > i assigning external network to speficic tenat being useful when you want to > provde a specific ip allocation pool to them or just a set of ips. i > understand that the current motivation for this request is commign form > some edge deployments. in general i dont thinkthis would be widely used but > for those that need its better ux then marking it as shared. > > > > And on the nova side, I assume we would need a deprecation cycle before > > > removing the 'network:attach_external_network' policy. If we can get > > > confirmation from the neutron team, is anyone opposed to the idea of > > > deprecating the 'network:attach_external_network' policy in the Wallaby > > > cycle, to be removed in the Xena release? > > > > > > I would appreciate your thoughts. > > > > > > Cheers, > > > -melanie > > > > > > [1] https://bugs.launchpad.net/nova/+bug/1675486 > > > [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ikatzir at infinidat.com Sun Mar 7 09:36:53 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Sun, 7 Mar 2021 11:36:53 +0200 Subject: [E] [ironic] How to move node from active state to manageable In-Reply-To: References: <54186D58-DF4C-4E1C-BCEA-D19EF3963215@infinidat.com> Message-ID: <8564CA59-4F41-4CB7-91B5-92CB1C38A28A@infinidat.com> Thanks Jay for the prompt response! (had my weekend off) I have deleted the instances through nova as you suggested, I then see them being cleaned, which is good!

> (undercloud) [stack at interop010 ~]$ openstack baremetal node list
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None          | power on    | clean wait         | False       |
| 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None          | power off   | cleaning           | False       |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+

And after a while they become available

> (undercloud) [stack at interop010 ~]$ openstack baremetal node list
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None          | power off   | available          | False       |
| 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None          | power on    | cleaning           | False       |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+

now I can start deployment as planned. Igal > On 4 Mar 2021, at 20:12, Jay Faulkner wrote: > > are provisioned (active) are not able to be moved to manageable state. -------------- next part -------------- An HTML attachment was scrubbed... URL: From manubk2020 at gmail.com Sun Mar 7 16:50:02 2021 From: manubk2020 at gmail.com (Manu B) Date: Sun, 7 Mar 2021 22:20:02 +0530 Subject: [neutron-dynamic-routing] Support for multiple BGP speaker Message-ID: Hi, As understood from the code, currently the neutron dynamic routing supports only one BGP speaker per host:

    # os-ken can only support One speaker
    if self.cache.get_hosted_bgp_speakers_count() == 1:
        raise bgp_driver_exc.BgpSpeakerMaxScheduled(count=1)

Could you please let me know if this is due to some limitation of the os-ken driver? Or are there any other reasons for this limitation? Thanks, Manu -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Sun Mar 7 23:24:21 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Sun, 7 Mar 2021 16:24:21 -0700 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy wrote: > Hello All, > > Just a reminder that we will be starting to implement steps to migrate > from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021.
> > The plan is outlined in: > https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday > > In summary, on Sunday, we plan to: > - Move the master integration line for promotions to build containers and > images on centos-8 stream nodes > - Change the release files to bring down centos-8 stream repos for use in > test jobs (test jobs will still start on centos-8 nodes - changing this > nodeset will happen later) > - Image build and container build check jobs will be moved to non-voting > during this transition. > > We have already run all the test jobs in RDO with centos-8 stream content > running on centos-8 nodes to prequalify this transition. > > We will update this list with status as we go forward with next steps. > > Thanks! > OK... status update. Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure Monday wasn't a disaster upstream. TripleO master jobs have successfully been migrated to CentOS-8-Stream today. You should see "8-stream" now in /etc/yum.repos.d/tripleo-centos.* repos. Your CentOS-8-Stream Master hash is: edd46672cb9b7a661ecf061942d71a72 Your master repos are: https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo Containers, and overcloud images should all be centos-8-stream. The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. Please open launchpad bugs w/ the "alert" tag if you are having any issues. Thanks and well done all! > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Mar 8 01:06:16 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 07 Mar 2021 19:06:16 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-6 Update Message-ID: <1780f5ed65e.d5bbdf1f119480.463321966283339384@ghanshyammann.com> Hello Everyone, Please find the week's R-6 updates on 'Migrate RBAC Policy Format from JSON to YAML' wallaby community-wide goals. Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) Progress Summary: =============== Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml * Projects completed: 21 * Projects required to merge the patches: 8 * Projects required to push the patches: 2 (horizon and Openstackansible) * Projects do not need any work: 16 Patches ready to merge: ================== * Octavia: https://review.opendev.org/c/openstack/octavia/+/764578 * Magnum: https://review.opendev.org/c/openstack/magnum/+/767242 * Murano: https://review.opendev.org/c/openstack/murano/+/768520 * Panko: https://review.opendev.org/c/openstack/panko/+/768498 * Solum: https://review.opendev.org/c/openstack/solum/+/768381 * Zaqar: https://review.opendev.org/c/openstack/zaqar/+/768488 Updates: ======= * Fixed the lower constraints failure for all the current open patches. * Few project patches are failing on config object initialization, I am debugging those and will fix them soon. * It is important to merge all the service side patches so that JSON format deprecation happens in the Wallaby release. 
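For teams that still need to generate the YAML file, oslo.policy ships a
converter that can be pointed at the existing JSON overrides. A minimal sketch
(the namespace and file paths below are only examples for nova; please check
the tool's --help output for the exact options in your oslo.policy version):

  # convert an existing JSON policy overrides file to the YAML format
  oslopolicy-convert-json-to-yaml --namespace nova \
      --policy-file /etc/nova/policy.json \
      --output-file /etc/nova/policy.yaml

Afterwards, make sure the service's [oslo_policy] policy_file option points at
the new YAML file.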
-gmann From amotoki at gmail.com Mon Mar 8 01:21:20 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 8 Mar 2021 10:21:20 +0900 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-6 Update In-Reply-To: <1780f5ed65e.d5bbdf1f119480.463321966283339384@ghanshyammann.com> References: <1780f5ed65e.d5bbdf1f119480.463321966283339384@ghanshyammann.com> Message-ID: On Mon, Mar 8, 2021 at 10:08 AM Ghanshyam Mann wrote: > > Hello Everyone, > > Please find the week's R-6 updates on 'Migrate RBAC Policy Format from JSON to YAML' wallaby community-wide goals. > > Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) > > Progress Summary: > =============== > Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml > > * Projects completed: 21 > * Projects required to merge the patches: 8 > * Projects required to push the patches: 2 (horizon and Openstackansible) Horizon does not use /etc//policy.json, so the community goal does not directly affect horizon or it is more complicated than what the goal says. However, https://review.opendev.org/c/openstack/horizon/+/750134 which is to handle policy-in-code and deprecated rules in horizon addressed the goal indirectly. Thanks, Akihiro > * Projects do not need any work: 16 > > Patches ready to merge: > ================== > * Octavia: https://review.opendev.org/c/openstack/octavia/+/764578 > * Magnum: https://review.opendev.org/c/openstack/magnum/+/767242 > * Murano: https://review.opendev.org/c/openstack/murano/+/768520 > * Panko: https://review.opendev.org/c/openstack/panko/+/768498 > * Solum: https://review.opendev.org/c/openstack/solum/+/768381 > * Zaqar: https://review.opendev.org/c/openstack/zaqar/+/768488 > > > Updates: > ======= > * Fixed the lower constraints failure for all the current open patches. > * Few project patches are failing on config object initialization, I am debugging those and will fix them soon. > * It is important to merge all the service side patches so that JSON format deprecation happens in the Wallaby release. > > -gmann > From gmann at ghanshyammann.com Mon Mar 8 02:35:11 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 07 Mar 2021 20:35:11 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-6 Update In-Reply-To: References: <1780f5ed65e.d5bbdf1f119480.463321966283339384@ghanshyammann.com> Message-ID: <1780fb03ba2.e898433f119842.1432973529588624272@ghanshyammann.com> ---- On Sun, 07 Mar 2021 19:21:20 -0600 Akihiro Motoki wrote ---- > On Mon, Mar 8, 2021 at 10:08 AM Ghanshyam Mann wrote: > > > > Hello Everyone, > > > > Please find the week's R-6 updates on 'Migrate RBAC Policy Format from JSON to YAML' wallaby community-wide goals. > > > > Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) > > > > Progress Summary: > > =============== > > Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml > > > > * Projects completed: 21 > > * Projects required to merge the patches: 8 > > * Projects required to push the patches: 2 (horizon and Openstackansible) > > Horizon does not use /etc//policy.json, so the community goal > does not directly affect horizon or it is more complicated than what > the goal says. > However, https://review.opendev.org/c/openstack/horizon/+/750134 which > is to handle policy-in-code and deprecated rules in horizon addressed > the goal indirectly. 
Thanks amotoki for the updates and adding notes in etherpad, I have moved Horizon to the completed section. > > Thanks, > Akihiro > > > * Projects do not need any work: 16 > > > > Patches ready to merge: > > ================== > > * Octavia: https://review.opendev.org/c/openstack/octavia/+/764578 > > * Magnum: https://review.opendev.org/c/openstack/magnum/+/767242 > > * Murano: https://review.opendev.org/c/openstack/murano/+/768520 > > * Panko: https://review.opendev.org/c/openstack/panko/+/768498 > > * Solum: https://review.opendev.org/c/openstack/solum/+/768381 > > * Zaqar: https://review.opendev.org/c/openstack/zaqar/+/768488 > > > > > > Updates: > > ======= > > * Fixed the lower constraints failure for all the current open patches. > > * Few project patches are failing on config object initialization, I am debugging those and will fix them soon. > > * It is important to merge all the service side patches so that JSON format deprecation happens in the Wallaby release. > > > > -gmann > > > From amotoki at gmail.com Mon Mar 8 02:55:50 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 8 Mar 2021 11:55:50 +0900 Subject: [horizon][i18n][infra][horizon plugins] Renaming Chinese locales in Django from zh-cn/zh-tw to zh-hans/zh-hant In-Reply-To: References: Message-ID: Hi, The patch in openstack/openstack-zuul-jobs which renames Chinese locales (zh-cn and zh-tw) to zh-hans and zh-hant has landed last week. You can see Chinese locale renaming in recent translation import patches [2]. I also checked the job results of propose-translation-update job [3] and all worked expectedly. ACTIONS for project teams with horizon plugins: All horizon plugins which include Chinese translations need to be released. If your plugin contains wrote: > > Hi, > > The horizon team is planning to switch Chinese language codes in > Django codes from zh-cn/zh-tw to zh-hans/zh-hant. Django, a framework > used in horizon, recommends to use them since more than 5 years ago > [1][2]. > > This change touches Chinese locales in the dashbaord codes of horizon > and its plugins only. It does not change Chinese locales in other > translations like documentations and non-Django python codes. This is > to minimize the impact to other translations and the translation > platform. > > ### What are/are not changed in repositories > > * horizon and horizon plugins > * locales in the dashboard codes are renamed from zh-cn/zh-tw to > zh-hans/zh-hant > * locales in doc/ and releasenotes/ are not changed > * other repositories > * no locale change happens > > NOTE: > * This leads to a situation that we have two different locales in > horizon and plugin repositories (zh-hans/hant in the code folders and > zh-cn/tw in doc and releasenotes folders), but it affects only > developers and does not affect horizon consumers (operators/users) and > translators. > * In addition, documentations are translated in OpenStack-wide (not > only in horizon and plugins). By keeping locales in docs, locales in > documentation translations will be consistent. > > ### Impact on Zanata > > In Zanata (the translation platform), zh-cn/zh-tw continue to be used, > so no change is visible to translators. > The infra job proposes zh-cn/zh-tw GUI translatoins as zh-hans/zh-hant > translations to horizon and plugin repositories. > > NOTE: > The alternative is to create the corresponding language teams > (zh-hans/zh-hant) in Zanata, but it affects Chinese translators a lot. 
> They need to join two language teams to translate horizon > (zh-hans/zh-hant) and docs (zh-cn/zh-tw). It makes translator workflow > complicated. The proposed way has no impact on translators and they > can continue the current translation process and translate both > horizon and docs under a single language code. > > ### Changes in the infra scripts > > Converting Chinese locales of dashboard translations from zh-cn/zh-tw > to zh-hans/zh-hant is handled by the periodic translation job. > propose_translation_update job is responsible for this. > > [propose_translation_update.sh] > * Move zh-cn/zh-tw translations related to Django codes in horizon and > its plugins from zanata to zh-hans/hant directory. > * This should happen in the master branch (+ future stable branhces > such as stable/wallaby). > > ### Additional Remarks > > I18n SIG respects all language team coordinators & members, and is > looking forward to seeing discussions and/or active contributions from > the language teams. > > Currently all language codes follow the ISO 639-1 standard (language > codes with relevant country codes), but the change introduces new > language code forms like zh-hans/zh-hant. This follows IETF BCP 47 > which recommends a combination of language codes and ISO 15924 script > code (four letters). We now have two different language codes in > OpenStack world. This is just to minimize the impact on the existing > translations. It is not ideal. We are open for further discussion on > language codes and translation support. > > ### References > > [1] https://code.djangoproject.com/ticket/18419 > [2] https://www.djbook.ru/rel1.7/releases/1.7.html#language-codes-zh-cn-zh-tw-and-fy-nl > > Thanks, > Akihiro Motoki (irc: amotoki) From gouthampravi at gmail.com Mon Mar 8 05:53:42 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Sun, 7 Mar 2021 21:53:42 -0800 Subject: [election][manila] PTL Candidacy for Xena Message-ID: Greetings Zorillas & other Stackers, This is my candidacy to be the PTL for the OpenStack Manila team through the Xena cycle. I've had the great privilege of leading this unique team for the past three releases. During this time, we've had an enviable momentum and some fantastic achievements while building impactful software. I have enjoyed being a catalyst and an advocate for this motivated team, and I wish to do this a bit longer. In Wallaby, we harnessed our diverse strengths to mentor five new contributors through our student project internships, two of who were sponsored through Outreachy funding. They have all expressed their desire to become long term contributors. I am very proud of the manila team for their sincerity and enthusiasm in aiding new Stackers. I seek to further this outreach through the Xena release cycle. Testing and project quality will remain a top personal priority. The team was able to get new interop guidelines published last cycle, and we will focus on adding more test coverage under these guidelines, alongside addressing test gaps and innovating on more test scenarios. We have continued to implement the pro-active backport policy to get bug fixes back to stable branches and request timely releases of these stable branches. In the Xena cycle, I intend to codify this policy based on our learning and spread the release responsibility and know-how around the team. We split project maintenance into sub-teams during the Wallaby cycle, and recruited sub-team core reviewers. An increase in focus and velocity ensued across project code repositories. 
I intend to propose more additions to these sub-teams in the Xena cycle and encourage more contributors to take on project maintenance responsibilities. So, if you will have me, I wish to serve you through Xena and get things done. Thank you for your support, Goutham Pacha Ravi IRC: gouthamr From marios at redhat.com Mon Mar 8 07:46:22 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 8 Mar 2021 09:46:22 +0200 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Mon, Mar 8, 2021 at 1:27 AM Wesley Hayutin wrote: > > > On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy wrote: > >> Hello All, >> >> Just a reminder that we will be starting to implement steps to migrate >> from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. >> >> The plan is outlined in: >> https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday >> >> In summary, on Sunday, we plan to: >> - Move the master integration line for promotions to build containers >> and images on centos-8 stream nodes >> - Change the release files to bring down centos-8 stream repos for use in >> test jobs (test jobs will still start on centos-8 nodes - changing this >> nodeset will happen later) >> - Image build and container build check jobs will be moved to non-voting >> during this transition. >> > >> We have already run all the test jobs in RDO with centos-8 stream content >> running on centos-8 nodes to prequalify this transition. >> >> We will update this list with status as we go forward with next steps. >> >> Thanks! >> > > OK... status update. > > Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure > Monday wasn't a disaster upstream. TripleO master jobs have successfully > been migrated to CentOS-8-Stream today. You should see "8-stream" now in > /etc/yum.repos.d/tripleo-centos.* repos. > > \o/ this is fantastic! nice work all thanks to everyone involved for getting this done with minimal disruption tripleo-ci++ > Your CentOS-8-Stream Master hash is: > > edd46672cb9b7a661ecf061942d71a72 > > Your master repos are: > https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo > > Containers, and overcloud images should all be centos-8-stream. > > The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. > > Please open launchpad bugs w/ the "alert" tag if you are having any issues. > > Thanks and well done all! > > > >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danilo_dellorto at it.ibm.com Mon Mar 8 09:31:24 2021 From: danilo_dellorto at it.ibm.com (DANILO PAOLO DELL'ORTO) Date: Mon, 8 Mar 2021 10:31:24 +0100 Subject: Post to Openstack mail list Message-ID: Hi, I would like to post questions on the openstack mail lists, can you authorize me? best regards -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 9685 bytes Desc: not available URL: From lyarwood at redhat.com Mon Mar 8 09:31:54 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Mon, 8 Mar 2021 09:31:54 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? 
In-Reply-To: References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> <20210303223802.rijk6d3xqg6aizjd@yuggoth.org> Message-ID: On Thu, 4 Mar 2021 at 11:07, Lee Yarwood wrote: > > On Wed, 3 Mar 2021 at 22:41, Jeremy Stanley wrote: > > > > On 2021-03-04 09:31:08 +1100 (+1100), Ian Wienand wrote: > > [...] > > > I'd personally prefer to build it in a job and publish it so that you > > > can see what's been done. > > [...] > > > > Sure, all things being equal I'd also prefer a transparent automated > > build with a periodic refresh, but that seems like something we can > > make incremental improvement on. Of course, you're a more active > > reviewer on DevStack than I am, so I'm happy to follow your lead if > > it's something you feel strongly about. > > Agreed, if there's a need to build and cache another unreleased cirros > fix then I would be happy to look into automating the build somewhere > but for a one off I think this is just about acceptable. > > FWIW nova-next is passing with the image below: > > WIP: nova-next: Start testing the 'q35' machine type > https://review.opendev.org/c/openstack/nova/+/708701 > > I'll clean that change up later today. After all of this Cirros 0.5.2 was released on Friday after another colleague asked. I've reverted my original change to cache the dev build, introduced a change to cache 0.5.2 and switched devstack over to 0.5.2 below: Revert "Add custom cirros image with ahci module enabled to cache" https://review.opendev.org/c/openstack/project-config/+/779140 Add Cirros 0.5.2 to cache https://review.opendev.org/c/openstack/project-config/+/779178 Update Cirros to 0.5.2 https://review.opendev.org/c/openstack/devstack/+/779179 Apologies for all of the noise with this, hopefully that's it for now. Thanks again, Lee From zigo at debian.org Mon Mar 8 10:16:21 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 8 Mar 2021 11:16:21 +0100 Subject: [cinder] Inflated version dependency in os-brick Message-ID: Hi, As I've started packaging Wallaby for Debian, I noticed that os-brick has very inflated version dependencies: $ diff -u ../requirements.txt requirements.txt --- ../requirements.txt 2021-03-08 10:54:44.896134101 +0100 +++ requirements.txt 2021-03-08 10:54:48.848127942 +0100 @@ -2,17 +2,16 @@ # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. -pbr!=2.1.0,>=5.4.1 # Apache-2.0 -eventlet>=0.25.1 # MIT -oslo.concurrency>=3.26.0 # Apache-2.0 -oslo.context>=2.23.0 # Apache-2.0 -oslo.log>=3.44.0 # Apache-2.0 -oslo.i18n>=3.24.0 # Apache-2.0 -oslo.privsep>=1.32.0 # Apache-2.0 -oslo.serialization>=2.29.0 # Apache-2.0 -oslo.service!=1.28.1,>=1.24.0 # Apache-2.0 -oslo.utils>=3.34.0 # Apache-2.0 -requests>=2.14.2 # Apache-2.0 -six>=1.10.0 # MIT -tenacity>=6.0.0 # Apache-2.0 -os-win>=3.0.0 # Apache-2.0 +pbr>=5.5.1 # Apache-2.0 +eventlet>=0.30.1 # MIT +oslo.concurrency>=4.4.0 # Apache-2.0 +oslo.context>=3.1.1 # Apache-2.0 +oslo.log>=4.4.0 # Apache-2.0 +oslo.i18n>=5.0.1 # Apache-2.0 +oslo.privsep>=2.4.0 # Apache-2.0 +oslo.serialization>=4.1.0 # Apache-2.0 +oslo.service>=2.5.0 # Apache-2.0 +oslo.utils>=4.8.0 # Apache-2.0 +requests>=2.25.1 # Apache-2.0 +tenacity>=6.3.1 # Apache-2.0 +os-win>=5.4.0 # Apache-2.0 Some of the above is clearly abusing. For example, I don't think Cinder really needs version 5.5.1 of PBR, and 5.5.0 should really be enough. 
I've traced it back to: https://review.opendev.org/c/openstack/os-brick/+/778807 If this is a consequence of the project stopping to test lower bounds of requirements, then this has gone really insane, and the bad practice must stop immediately to restore sanity, and we must revert. We're back 5 years ago where each projects was uselessly requiring the latest version of everything for no reason... From a downstream package maintainer perspective, this is a disaster move. Haven't we learned from our mistakes? Cheers, Thomas Goirand (zigo) From hberaud at redhat.com Mon Mar 8 10:38:25 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 8 Mar 2021 11:38:25 +0100 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Hello Everyone! Wanted to give you another reminder! Looking forward to see your highlights by the end of the week! Hervé Le ven. 26 févr. 2021 à 00:03, Kendall Nelson a écrit : > Hello Everyone! > > It's time to start thinking about calling out 'cycle-highlights' in your > deliverables! I have no idea how we are here AGAIN ALREADY, alas, here we > be. > > As PTLs, you probably get many pings towards the end of every release > cycle by various parties (marketing, management, journalists, etc) asking > for highlights of what is new and what significant changes are coming in > the new release. By putting them all in the same place it makes them easy > to reference because they get compiled into a pretty website like this from > the last few releases: Stein[1], Train[2]. > > We don't need a fully fledged marketing message, just a few highlights > (3-4 ideally), from each project team. Looking through your release notes > might be a good place to start. > > *The deadline for cycle highlights is the end of the R-5 week [3] (next > week) on March 12th.* > > How To Reminder: > ------------------------- > > Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the > openstack/releases repo like this: > > cycle-highlights: > - Introduced new service to use unused host to mine bitcoin. > > The formatting options for this tag are the same as what you are probably > used to with Reno release notes. > > Also, you can check on the formatting of the output by either running > locally: > > tox -e docs > > And then checking the resulting doc/build/html/$RELEASE/highlights.html > file or the output of the build-openstack-sphinx-docs job under > html/$RELEASE/highlights.html. > > Feel free to add me as a reviewer on your patches. > > Can't wait to see you all have accomplished this release! 
> > Thanks :) > > -Kendall Nelson (diablo_rojo) > > [1] https://releases.openstack.org/stein/highlights.html > [2] https://releases.openstack.org/train/highlights.html > [3] htt > > https://releases.openstack.org/wallaby/schedule.html > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Mar 8 10:43:14 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 8 Mar 2021 11:43:14 +0100 Subject: [election][neutron] PTL Candidacy for Xena Message-ID: <20210308104314.ij2onqbattsfncsv@p1.localdomain> Hi, I want to propose my candidacy for Neutron PTL in the Xena cycle. Wallaby was my third cycle serving as Neutron PTL. I think I helped to keep the project in a healthy state and I would like to continue serving as the PTL and keep Neutron running well in the Xena cycle. In Wallaby we accomplished many important goals like e.g.: * finished migration to the engine facade, * improve our CI jobs and its overall stability, * and close of many feature parity gaps between OVN and OVS backends. But we also didn't finish some of the goals which were set for that cycle, like e.g. switching OVN to be the default Neutron backend in Devstack. If I will be elected, I would like to set it as my main goal for Xena cycle. We did a lot of work on adoption of the OVN backend in Neutron already and I think that we are now ready to move on and switch it to be the default backend in the Devstack. I want to focus on couple of things in the Xena cycle: * reduce our bug backlog as it is huge now, * maintenance and continue improvements of our CI, as this is "never ending story", * keep Neutron running in a smooth way. As I mentioned above, if I will be elected, Xena will be my 4th cycle as Neutron PTL and I would like to find some potential successors for the next cycles to help them onboard and understand what are the duties and responsibilities of the PTL. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From thierry at openstack.org Mon Mar 8 11:38:20 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 8 Mar 2021 12:38:20 +0100 Subject: [largescale-sig] Next meeting: March 10, 15utc Message-ID: Hi everyone, Our next Large Scale SIG meeting will be this Wednesday in #openstack-meeting-3 on IRC, at 15UTC. 
You can doublecheck how it translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210310T15 Belmiro Moreira will chair this meeting. A number of topics have already been added to the agenda, including discussing CentOS Stream, reflecting on last video meeting and pick a topic for the next one. Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From skaplons at redhat.com Mon Mar 8 12:21:06 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 8 Mar 2021 13:21:06 +0100 Subject: [neutron] Bug deputy report - week of March 1st Message-ID: <20210308122106.upiztvi7ksybbbtx@p1.localdomain> Hi, I was Neutron's bug deputy last week. Below is my summary. **critical** https://bugs.launchpad.net/neutron/+bug/1917487 - [FT] "IpNetnsCommand.add" command fails frequently - Assigned to Rodolfo https://bugs.launchpad.net/nova/+bug/1917610 - Migration and resize tests from tempest.scenario.test_minbw_allocation_placement.MinBwAllocationPlacementTest failing in neutron-tempest-dvr-ha-multinode-full - fix done in tempest https://review.opendev.org/c/openstack/tempest/+/778451 https://bugs.launchpad.net/neutron/+bug/1917793 - [HA] keepalived_state_change does not finish "handle_initial_state"execution - gate failure, assigned to Rodolfo **high** https://bugs.launchpad.net/neutron/+bug/1917409 - neutron-l3-agents won't become active - needs assignment https://bugs.launchpad.net/neutron/+bug/1917508 - Router create fails when router with same name already exists - related to ovn l3 plugin, needs assignment https://bugs.launchpad.net/neutron/+bug/1917370 - [functional] ovn maintenance worker isn't mocked in functional tests - **High** as it's causing gate failures or timeouts, assigned, patch: https://review.opendev.org/c/openstack/neutron/+/778080 - already merged https://bugs.launchpad.net/neutron/+bug/1917393 - [L3][Port forwarding] admin state DOWN/UP router will lose all pf-floating-ips and nat rules - assigned, In progress, Patch: https://review.opendev.org/c/openstack/neutron/+/778126 https://bugs.launchpad.net/neutron/+bug/1918108 - [OVN] IGMP snooping traps IGMP messages - assigned to Lucas already, **medium** https://bugs.launchpad.net/neutron/+bug/1917448 - GRE tunnels over IPv6 have wrong packet_type set in OVS - set as **Medium**, assigned, patch: https://review.opendev.org/c/openstack/neutron/+/778178, already merged **Wishlist** https://bugs.launchpad.net/neutron/+bug/1917437 - Enable querier for multicast (IGMP) in OVN, Assigned, In progress https://bugs.launchpad.net/neutron/+bug/1917866 - No need to fetch whole network object on port create - assigned to Oleg, patch proposed already https://review.opendev.org/c/openstack/neutron/+/778881 https://bugs.launchpad.net/neutron/+bug/1904559 - Designate driver: allow_reverse_dns_lookup doesn't works if dns_domain zone wasn't created - Switched to be RFE - please, especially drivers team members, check and triage it so we can discuss that on the drivers meeting, **undecided** https://bugs.launchpad.net/networking-ovn/+bug/1914857 - AttributeError: 'NoneType' object has no attribute 'db_find_rows' - Rodolfo and Lucas are triaging it, **Old bugs which needs attention** https://bugs.launchpad.net/neutron/+bug/1752903 - Floating IPs should not allocate IPv6 addresses - L3 subteam should take a look at it. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From eblock at nde.ag Mon Mar 8 13:18:56 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 08 Mar 2021 13:18:56 +0000 Subject: Cleanup database(s) Message-ID: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> Hi *, I have a quick question, last year we migrated our OpenStack to a highly available environment through a reinstall of all nodes. The migration went quite well, we're working happily in the new cloud but the databases still contain deprecated data. For example, the nova-scheduler logs lines like these on a regular basis: /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING nova.scheduler.host_manager [...] No compute service record found for host compute1 This is one of the old compute nodes that has been reinstalled and is now compute01. I tried to find the right spot to delete some lines in the DB but there are a couple of places so I wanted to check and ask you for some insights. The scheduler messages seem to originate in /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py ---snip--- for cell_uuid, computes in compute_nodes.items(): for compute in computes: service = services.get(compute.host) if not service: LOG.warning( "No compute service record found for host %(host)s", {'host': compute.host}) continue ---snip--- So I figured it could be this table in the nova DB: ---snip--- MariaDB [nova]> select host,deleted from compute_nodes; +-----------+---------+ | host | deleted | +-----------+---------+ | compute01 | 0 | | compute02 | 0 | | compute03 | 0 | | compute04 | 0 | | compute05 | 0 | | compute1 | 0 | | compute2 | 0 | | compute3 | 0 | | compute4 | 0 | +-----------+---------+ ---snip--- What would be the best approach here to clean up a little? I believe it would be safe to simply purge those lines containing the old compute node, but there might be a smoother way. Or maybe there are more places to purge old data from? I'd appreciate any ideas. Regards, Eugen From gmann at ghanshyammann.com Mon Mar 8 13:47:25 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 08 Mar 2021 07:47:25 -0600 Subject: [all][qa] Gate failure for <= stable/train and Tempest master gate (Do not recheck) Message-ID: <1781217b005.10f2576cb155502.3684013520139333947@ghanshyammann.com> Hello Everyone, get-pip.py url for py2.7 has been changed which causing failure on stable/train or older branches and Tempest master gate. Thanks, Dan, Elod for fixing and backporting those. Please wait for the below fixes to merge and do not recheck. https://review.opendev.org/q/Id62e91b1609db4b1d2fa425010bac1ce77e9fc51 -gmann From smooney at redhat.com Mon Mar 8 13:53:58 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 08 Mar 2021 13:53:58 +0000 Subject: Cleanup database(s) In-Reply-To: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> Message-ID: On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: > Hi *, > > I have a quick question, last year we migrated our OpenStack to a > highly available environment through a reinstall of all nodes. The > migration went quite well, we're working happily in the new cloud but > the databases still contain deprecated data. For example, the > nova-scheduler logs lines like these on a regular basis: > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING > nova.scheduler.host_manager [...] 
No compute service record found for > host compute1 > > This is one of the old compute nodes that has been reinstalled and is > now compute01. I tried to find the right spot to delete some lines in > the DB but there are a couple of places so I wanted to check and ask > you for some insights. > > The scheduler messages seem to originate in > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py > > ---snip--- >          for cell_uuid, computes in compute_nodes.items(): >              for compute in computes: >                  service = services.get(compute.host) > >                  if not service: >                      LOG.warning( >                          "No compute service record found for host %(host)s", >                          {'host': compute.host}) >                      continue > ---snip--- > > So I figured it could be this table in the nova DB: > > ---snip--- > MariaDB [nova]> select host,deleted from compute_nodes; > +-----------+---------+ > > host | deleted | > +-----------+---------+ > > compute01 | 0 | > > compute02 | 0 | > > compute03 | 0 | > > compute04 | 0 | > > compute05 | 0 | > > compute1 | 0 | > > compute2 | 0 | > > compute3 | 0 | > > compute4 | 0 | > +-----------+---------+ > ---snip--- > > What would be the best approach here to clean up a little? I believe > it would be safe to simply purge those lines containing the old > compute node, but there might be a smoother way. Or maybe there are > more places to purge old data from? so the step you porably missed was deleting the old compute service records so you need to do openstack compute service list to get teh compute service ids then do openstack compute service delete ... you need to make sure that you only remvoe the unused old serivces but i think that would fix your issue. > > I'd appreciate any ideas. > > Regards, > Eugen > > From eblock at nde.ag Mon Mar 8 14:18:36 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 08 Mar 2021 14:18:36 +0000 Subject: Cleanup database(s) In-Reply-To: References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> Message-ID: <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> Thank you, Sean. > so you need to do > openstack compute service list to get teh compute service ids > then do > openstack compute service delete ... > > you need to make sure that you only remvoe the unused old serivces > but i think that would fix your issue. That's the thing, they don't show up in the compute service list. But I also found them in the resource_providers table, only the old compute nodes appear here: MariaDB [nova]> select name from nova_api.resource_providers; +--------------------------+ | name | +--------------------------+ | compute1.fqdn | | compute2.fqdn | | compute3.fqdn | | compute4.fqdn | +--------------------------+ Zitat von Sean Mooney : > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >> Hi *, >> >> I have a quick question, last year we migrated our OpenStack to a >> highly available environment through a reinstall of all nodes. The >> migration went quite well, we're working happily in the new cloud but >> the databases still contain deprecated data. For example, the >> nova-scheduler logs lines like these on a regular basis: >> >> /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING >> nova.scheduler.host_manager [...] No compute service record found for >> host compute1 >> >> This is one of the old compute nodes that has been reinstalled and is >> now compute01. 
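As a concrete sketch of the cleanup Sean describes above (the service ID is a
placeholder; double-check the list output before deleting anything):

  # list the nova-compute service records nova currently knows about
  openstack compute service list --service nova-compute

  # delete only the records belonging to the old, reinstalled hostnames
  openstack compute service delete <SERVICE_ID>

Deleting the service record, rather than editing the database by hand, lets
nova clean up the related compute node record (and, as noted in the follow-up
later in this thread, it is meant to remove the matching placement resource
provider as well, as long as that provider has no stale allocations).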
I tried to find the right spot to delete some lines in >> the DB but there are a couple of places so I wanted to check and ask >> you for some insights. >> >> The scheduler messages seem to originate in >> >> /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >> >> ---snip--- >>          for cell_uuid, computes in compute_nodes.items(): >>              for compute in computes: >>                  service = services.get(compute.host) >> >>                  if not service: >>                      LOG.warning( >>                          "No compute service record found for host >> %(host)s", >>                          {'host': compute.host}) >>                      continue >> ---snip--- >> >> So I figured it could be this table in the nova DB: >> >> ---snip--- >> MariaDB [nova]> select host,deleted from compute_nodes; >> +-----------+---------+ >> > host | deleted | >> +-----------+---------+ >> > compute01 | 0 | >> > compute02 | 0 | >> > compute03 | 0 | >> > compute04 | 0 | >> > compute05 | 0 | >> > compute1 | 0 | >> > compute2 | 0 | >> > compute3 | 0 | >> > compute4 | 0 | >> +-----------+---------+ >> ---snip--- >> >> What would be the best approach here to clean up a little? I believe >> it would be safe to simply purge those lines containing the old >> compute node, but there might be a smoother way. Or maybe there are >> more places to purge old data from? > so the step you porably missed was deleting the old compute service records > > so you need to do > openstack compute service list to get teh compute service ids > then do > openstack compute service delete ... > > you need to make sure that you only remvoe the unused old serivces > but i think that would fix your issue. > >> >> I'd appreciate any ideas. >> >> Regards, >> Eugen >> >> From smooney at redhat.com Mon Mar 8 14:48:41 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 08 Mar 2021 14:48:41 +0000 Subject: Cleanup database(s) In-Reply-To: <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> Message-ID: On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: > Thank you, Sean. > > > so you need to do > > openstack compute service list to get teh compute service ids > > then do > > openstack compute service delete ... > > > > you need to make sure that you only remvoe the unused old serivces > > but i think that would fix your issue. > > That's the thing, they don't show up in the compute service list. But > I also found them in the resource_providers table, only the old > compute nodes appear here: > > MariaDB [nova]> select name from nova_api.resource_providers; > +--------------------------+ > > name | > +--------------------------+ > > compute1.fqdn | > > compute2.fqdn | > > compute3.fqdn | > > compute4.fqdn | > +--------------------------+ ah in that case the compute service delete is ment to remove the RPs too but if the RP had stale allcoation at teh time of the delete the RP delete will fail what you proably need to do in this case is check if the RPs still have allocations and if so verify that the allocation are owned by vms that nolonger exist. if that is the case you should be able to delete teh allcaotion and then the RP if the allocations are related to active vms that are now on the rebuild nodes then you will have to try and heal the allcoations. there is a openstack client extention called osc-placement that you can install to help. 
we also have a heal allcoation command in nova-manage that may help but the next step would be to validate if the old RPs are still in use or not. from there you can then work to align novas and placment view with the real toplogy. that could invovle removing the old compute nodes form the compute_nodes table or marking them as deleted but both nova db and plamcent need to be kept in sysnc to correct your current issue. > > > Zitat von Sean Mooney : > > > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: > > > Hi *, > > > > > > I have a quick question, last year we migrated our OpenStack to a > > > highly available environment through a reinstall of all nodes. The > > > migration went quite well, we're working happily in the new cloud but > > > the databases still contain deprecated data. For example, the > > > nova-scheduler logs lines like these on a regular basis: > > > > > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING > > > nova.scheduler.host_manager [...] No compute service record found for > > > host compute1 > > > > > > This is one of the old compute nodes that has been reinstalled and is > > > now compute01. I tried to find the right spot to delete some lines in > > > the DB but there are a couple of places so I wanted to check and ask > > > you for some insights. > > > > > > The scheduler messages seem to originate in > > > > > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py > > > > > > ---snip--- > > >          for cell_uuid, computes in compute_nodes.items(): > > >              for compute in computes: > > >                  service = services.get(compute.host) > > > > > >                  if not service: > > >                      LOG.warning( > > >                          "No compute service record found for host > > > %(host)s", > > >                          {'host': compute.host}) > > >                      continue > > > ---snip--- > > > > > > So I figured it could be this table in the nova DB: > > > > > > ---snip--- > > > MariaDB [nova]> select host,deleted from compute_nodes; > > > +-----------+---------+ > > > > host | deleted | > > > +-----------+---------+ > > > > compute01 | 0 | > > > > compute02 | 0 | > > > > compute03 | 0 | > > > > compute04 | 0 | > > > > compute05 | 0 | > > > > compute1 | 0 | > > > > compute2 | 0 | > > > > compute3 | 0 | > > > > compute4 | 0 | > > > +-----------+---------+ > > > ---snip--- > > > > > > What would be the best approach here to clean up a little? I believe > > > it would be safe to simply purge those lines containing the old > > > compute node, but there might be a smoother way. Or maybe there are > > > more places to purge old data from? > > so the step you porably missed was deleting the old compute service records > > > > so you need to do > > openstack compute service list to get teh compute service ids > > then do > > openstack compute service delete ... > > > > you need to make sure that you only remvoe the unused old serivces > > but i think that would fix your issue. > > > > > > > > I'd appreciate any ideas. 
> > > > > > Regards, > > > Eugen > > > > > > > > > From eblock at nde.ag Mon Mar 8 15:28:19 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 08 Mar 2021 15:28:19 +0000 Subject: Cleanup database(s) In-Reply-To: References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> Message-ID: <20210308152819.Horde.3O8lkYNu58IYePW7cpYmVqS@webmail.nde.ag> Hi, > there is a openstack client extention called osc-placement that you > can install to help. > we also have a heal allcoation command in nova-manage that may help > but the next step would be to validate > if the old RPs are still in use or not. from there you can then work > to align novas and placment view with > the real toplogy. I read about that in the docs, but there's no RPM for our distro (openSUSE), I guess we'll have to build it from source. > what you proably need to do in this case is check if the RPs still > have allocations and if so > verify that the allocation are owned by vms that nolonger exist. Is this the right place to look at? MariaDB [nova]> select count(*) from nova_api.allocations; +----------+ | count(*) | +----------+ | 263 | +----------+ MariaDB [nova]> select resource_provider_id,consumer_id from nova_api.allocations limit 10; +----------------------+--------------------------------------+ | resource_provider_id | consumer_id | +----------------------+--------------------------------------+ | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | | 1 | 339d0585-b671-4afa-918b-a772bfc36da8 | +----------------------+--------------------------------------+ MariaDB [nova]> select name,id from nova_api.resource_providers; +--------------------------+----+ | name | id | +--------------------------+----+ | compute1.fqdn | 3 | | compute2.fqdn | 1 | | compute3.fqdn | 2 | | compute4.fqdn | 4 | +--------------------------+----+ I only checked four of those consumer_id entries and all are existing VMs, I'll need to check all of them tomorrow. So I guess we should try to get the osc-placement tool running for us. Thanks, that already helped a lot! Eugen Zitat von Sean Mooney : > On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: >> Thank you, Sean. >> >> > so you need to do >> > openstack compute service list to get teh compute service ids >> > then do >> > openstack compute service delete ... >> > >> > you need to make sure that you only remvoe the unused old serivces >> > but i think that would fix your issue. >> >> That's the thing, they don't show up in the compute service list. 
But >> I also found them in the resource_providers table, only the old >> compute nodes appear here: >> >> MariaDB [nova]> select name from nova_api.resource_providers; >> +--------------------------+ >> > name | >> +--------------------------+ >> > compute1.fqdn | >> > compute2.fqdn | >> > compute3.fqdn | >> > compute4.fqdn | >> +--------------------------+ > ah in that case the compute service delete is ment to remove the RPs too > but if the RP had stale allcoation at teh time of the delete the RP > delete will fail > > what you proably need to do in this case is check if the RPs still > have allocations and if so > verify that the allocation are owned by vms that nolonger exist. > if that is the case you should be able to delete teh allcaotion and > then the RP > if the allocations are related to active vms that are now on the > rebuild nodes then you will have to try and > heal the allcoations. > > there is a openstack client extention called osc-placement that you > can install to help. > we also have a heal allcoation command in nova-manage that may help > but the next step would be to validate > if the old RPs are still in use or not. from there you can then work > to align novas and placment view with > the real toplogy. > > that could invovle removing the old compute nodes form the > compute_nodes table or marking them as deleted but > both nova db and plamcent need to be kept in sysnc to correct your > current issue. > >> >> >> Zitat von Sean Mooney : >> >> > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >> > > Hi *, >> > > >> > > I have a quick question, last year we migrated our OpenStack to a >> > > highly available environment through a reinstall of all nodes. The >> > > migration went quite well, we're working happily in the new cloud but >> > > the databases still contain deprecated data. For example, the >> > > nova-scheduler logs lines like these on a regular basis: >> > > >> > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING >> > > nova.scheduler.host_manager [...] No compute service record found for >> > > host compute1 >> > > >> > > This is one of the old compute nodes that has been reinstalled and is >> > > now compute01. I tried to find the right spot to delete some lines in >> > > the DB but there are a couple of places so I wanted to check and ask >> > > you for some insights. 
>> > > >> > > The scheduler messages seem to originate in >> > > >> > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >> > > >> > > ---snip--- >> > >          for cell_uuid, computes in compute_nodes.items(): >> > >              for compute in computes: >> > >                  service = services.get(compute.host) >> > > >> > >                  if not service: >> > >                      LOG.warning( >> > >                          "No compute service record found for host >> > > %(host)s", >> > >                          {'host': compute.host}) >> > >                      continue >> > > ---snip--- >> > > >> > > So I figured it could be this table in the nova DB: >> > > >> > > ---snip--- >> > > MariaDB [nova]> select host,deleted from compute_nodes; >> > > +-----------+---------+ >> > > > host | deleted | >> > > +-----------+---------+ >> > > > compute01 | 0 | >> > > > compute02 | 0 | >> > > > compute03 | 0 | >> > > > compute04 | 0 | >> > > > compute05 | 0 | >> > > > compute1 | 0 | >> > > > compute2 | 0 | >> > > > compute3 | 0 | >> > > > compute4 | 0 | >> > > +-----------+---------+ >> > > ---snip--- >> > > >> > > What would be the best approach here to clean up a little? I believe >> > > it would be safe to simply purge those lines containing the old >> > > compute node, but there might be a smoother way. Or maybe there are >> > > more places to purge old data from? >> > so the step you porably missed was deleting the old compute >> service records >> > >> > so you need to do >> > openstack compute service list to get teh compute service ids >> > then do >> > openstack compute service delete ... >> > >> > you need to make sure that you only remvoe the unused old serivces >> > but i think that would fix your issue. >> > >> > > >> > > I'd appreciate any ideas. >> > > >> > > Regards, >> > > Eugen >> > > >> > > >> >> >> From danilo_dellorto at it.ibm.com Mon Mar 8 16:19:26 2021 From: danilo_dellorto at it.ibm.com (DANILO PAOLO DELL'ORTO) Date: Mon, 8 Mar 2021 17:19:26 +0100 Subject: Openstack Cinder support matrix Message-ID: Hi, The Openstack Cinder driver support matrix (see link-1 below) lists the supported features for each storage and in the IBM Virtualize Family (SVC) section lacks support for 3 important features: Thin provisioning, Volume migration (storage assisted) and Active -Active high availability support. In the same site in section IBM Spectrum Virtualize volume driver (link-2 below), the functions described above seem to be supported and explained in more detail. Which of the two parts is true? Maybe the support matrix is out of date? best regards link-1 https://docs.openstack.org/cinder/latest/reference/support-matrix.html link-2 https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/ibm-storwize-svc-driver.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 9685 bytes Desc: not available URL: From gthiemonge at redhat.com Mon Mar 8 16:51:19 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Mon, 8 Mar 2021 17:51:19 +0100 Subject: [election][Octavia] PTL candidacy for Xena Message-ID: Hi everyone, I would like to propose my candidacy for Octavia PTL during the Xena cycle. I have been an Octavia contributor for 2 years, and a core reviewer for 2 releases. 
Since then, I have been working in many areas in Octavia: adding support for new protocols (SCTP), fixing CI jobs, improving the octavia-dashboard. My focus will be to continue the work that the team has accomplished these last years, particularly adding many new features in Octavia (active/active load balancers, leveraging features provided by HAProxy 2.0) and improving our CI coverage (restoring CentOS jobs). Thanks for your consideration, Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Mon Mar 8 17:00:53 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 8 Mar 2021 19:00:53 +0200 Subject: [TripleO] stable/rocky End Of Life - closed for new code Message-ID: hello TripleO FYI we have now merged https://review.opendev.org/c/openstack/releases/+/774244 which tags stable/rocky for tripleo repos as End Of Life. *** WE ARE NO LONGER ACCEPTING PATCHES *** for tripleo stable/rocky [1]. In due course the rocky branch will be removed from all tripleO repos, however this will not be possible if there are pending patches open against any of those [1]. So please avoid posting anything to tripleo stable/rocky and if you see a review posted please help by commenting and informing the author that rocky is closed for tripleo. thank you for reading and your help in this matter regards, marios [1] https://releases.openstack.org/teams/tripleo.html#rocky -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Mar 8 17:01:48 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 8 Mar 2021 18:01:48 +0100 Subject: Openstack Cinder support matrix In-Reply-To: References: Message-ID: Hello Danilo, yes, maybe. I contacted several storage vendors for a tender and seems the support matrix is not very accurate under the point of view of most of them . I suggest to contact the storage vendor. Ignazio Il giorno lun 8 mar 2021 alle ore 17:28 DANILO PAOLO DELL'ORTO < danilo_dellorto at it.ibm.com> ha scritto: > Hi, > > The Openstack Cinder driver support matrix (see link-1 below) lists the > supported features for each storage and in the IBM Virtualize Family (SVC) > section > lacks support for 3 important features: Thin provisioning, Volume > migration (storage assisted) and Active -Active high availability support. > In the same site in section IBM Spectrum Virtualize volume driver (link-2 > below), the functions described above seem to be supported > and explained in more detail. > Which of the two parts is true? > Maybe the support matrix is out of date? > > best regards > > link-1 > https://docs.openstack.org/cinder/latest/reference/support-matrix.html > link-2 > https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/ibm-storwize-svc-driver.html > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: noname Type: image/gif Size: 9685 bytes Desc: not available URL: From mdemaced at redhat.com Mon Mar 8 17:26:36 2021 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Mon, 8 Mar 2021 14:26:36 -0300 Subject: [election][kuryr] PTL Candidacy for Xena Message-ID: Greetings, I would like to continue serving as Kuryr PTL for the Xena cycle. I have been contributing to the OpenStack community since the Queens release and started serving as Kuryr PTL in the Wallaby cycle. 
It was a great opportunity to contribute back to the community as PTL and I would like to continue doing that. In wallaby we achieved the following goals: * Improved CI stability and fixed broken gates e.g. Network Policy e2e and OVN gates. * Added new testing scenarios for Services without selectors and Services with sctp. * Extended Kuryr functionalities - we started the dual-stack support and included the support for SCTP and Service without selectors. * Increased the contributor base with new contributors from the Outreachy program. For the next cycle, I propose the following goals for Kuryr: * Improve and extend CI: we already did great improvements, but we must continuously work on it to provide better and quicker feedback during the development process. As part of that, we plan to: update gates to start using Kubeadm and include a new cri-o gate. * Extend Kuryr functionalities: Support dual stack and Kubeadm for DevStack installations, and move Pools management to CRDs. * Continue growing the contributor base. Thanks, Maysa Macedo. IRC: maysams -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Mar 8 18:08:40 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 8 Mar 2021 10:08:40 -0800 Subject: [PTLs][All] vPTG April 2021 Team Signup Message-ID: Greetings! As you hopefully already know, our next PTG will be virtual again, and held from Monday, April 19 to Friday, April 23. We will have the same schedule set up available as last time with three windows of time spread across the day to cover all timezones with breaks in between. *To signup your team, you must complete **BOTH** the survey[1] AND reserve time in the ethercalc[2] by March 25 at 7:00 UTC.* We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions in with 4 rules/guidelines. 1. Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first. 2. No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. Again, you need to fill out BOTH the ethercalc AND the survey to complete your team's sign up. If you have any issues with signing up your team, due to conflict or otherwise, please let me know! While we are trying to empower you to make your own decisions as to when you meet and for how long (after all, you know your needs and teams timezones better than we do), we are here to help! Once your team is signed up, please register! And remind your team to register! Registration is free, but since it will be how we contact you with passwords, event details, etc. it is still important! Continue to check back for updates at openstack.org/ptg. -the Kendalls (diablo_rojo & wendallkaters) [1] Team Survey: https://openinfrafoundation.formstack.com/forms/april2021_vptg_survey [2] Ethercalc Signup: https://ethercalc.net/oz7q0gds9zfi [3] PTG Registration: https://april2021-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jimmy at openstack.org Mon Mar 8 18:47:02 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 8 Mar 2021 12:47:02 -0600 Subject: [community/MLs] Measuring ML Success Message-ID: Hi All - Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. Open to other ideas/thoughts :) Cheers! Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Mon Mar 8 19:21:58 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 9 Mar 2021 04:21:58 +0900 Subject: [election][tacker] PTL candidacy for Xena Message-ID: <39b6bce5-5b9e-9362-2136-a27ef05f46a5@gmail.com> Hi, I'd like to propose my candidacy for Tacker PTL in Xena cycle. In Wallaby release, we have released several features for the latest ETSI NFV standard while largely updating infras, such as moving to Ubuntu 20.04, supporting redhat distros again and dropping python2 completely for not only Tacker but also related projects such as tosca-parser or heat-translator or so [1]. In addition, we have fixed instability in unit and functional tests for which we have troubles several times. As Tacker PTL, I've led the team by proposing not only new features, but also things for driving the team such as documentation or bug tracking. In Xena cycle, I would like to continue to make Tacker be more useful product for users interested in NFV. I believe Tacker will be a good reference implementation for NFV standard. We have planed to make Tacker more feasible not only for VM environment, but also container to meet requirements from industries. - Continue to implement the latest container technology with ETSI NFV standard. - Introduce multi API versions to meet the requirements for operators enable to deploy mixed environment of multi-vendor products in which some products provide a stable version APIs while other products adopt move advanced ones. - Proceed to design and implement test framework under development in ETSI NFV TST to improve the quality of the product, not only unit tests and functional tests, but also introduce more sophisticated scheme such as robot framework. [1] https://docs.openstack.org/releasenotes/tacker/unreleased.html Regards, Yasufumi Ogawa From anost1986 at gmail.com Mon Mar 8 19:56:43 2021 From: anost1986 at gmail.com (Andrii Ostapenko) Date: Mon, 8 Mar 2021 13:56:43 -0600 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: Hi Jimmy, Stackalytics [0] currently tracks emails identifying and grouping them by author, company, module (with some success). 
To answer your challenge we'll need to add a grouping by thread, but still need some criteria to mark thread as possibly unanswered. E.g. threads having a single message or only messages from a single author, or if the last message in a thread contains a question mark. With no additional information marking thread as closed explicitly, all this will still be a guessing, producing candidates for unanswered threads. Thank you for bringing this up! [0] https://www.stackalytics.io/?metric=emails On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > Hi All - > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > Open to other ideas/thoughts :) > > Cheers! > Jimmy From eblock at nde.ag Mon Mar 8 20:57:35 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 08 Mar 2021 20:57:35 +0000 Subject: Cleanup database(s) In-Reply-To: <20210308152819.Horde.3O8lkYNu58IYePW7cpYmVqS@webmail.nde.ag> References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> <20210308152819.Horde.3O8lkYNu58IYePW7cpYmVqS@webmail.nde.ag> Message-ID: <20210308205735.Horde.RP7Fip19F6rkzcTUwX0p6ca@webmail.nde.ag> > I read about that in the docs, but there's no RPM for our distro > (openSUSE), I guess we'll have to build it from source. I should have read the docs more carefully, I installed the osc-placement plug-in on a test machine and will play around with the options. Thanks again! Zitat von Eugen Block : > Hi, > >> there is a openstack client extention called osc-placement that you >> can install to help. >> we also have a heal allcoation command in nova-manage that may help >> but the next step would be to validate >> if the old RPs are still in use or not. from there you can then >> work to align novas and placment view with >> the real toplogy. > > I read about that in the docs, but there's no RPM for our distro > (openSUSE), I guess we'll have to build it from source. > >> what you proably need to do in this case is check if the RPs still >> have allocations and if so >> verify that the allocation are owned by vms that nolonger exist. > > Is this the right place to look at? 
> > MariaDB [nova]> select count(*) from nova_api.allocations; > +----------+ > | count(*) | > +----------+ > | 263 | > +----------+ > > > MariaDB [nova]> select resource_provider_id,consumer_id from > nova_api.allocations limit 10; > +----------------------+--------------------------------------+ > | resource_provider_id | consumer_id | > +----------------------+--------------------------------------+ > | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | > | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | > | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | > | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | > | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | > | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | > | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | > | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | > | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | > | 1 | 339d0585-b671-4afa-918b-a772bfc36da8 | > +----------------------+--------------------------------------+ > > MariaDB [nova]> select name,id from nova_api.resource_providers; > +--------------------------+----+ > | name | id | > +--------------------------+----+ > | compute1.fqdn | 3 | > | compute2.fqdn | 1 | > | compute3.fqdn | 2 | > | compute4.fqdn | 4 | > +--------------------------+----+ > > I only checked four of those consumer_id entries and all are > existing VMs, I'll need to check all of them tomorrow. So I guess we > should try to get the osc-placement tool running for us. > > Thanks, that already helped a lot! > > Eugen > > > Zitat von Sean Mooney : > >> On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: >>> Thank you, Sean. >>> >>>> so you need to do >>>> openstack compute service list to get teh compute service ids >>>> then do >>>> openstack compute service delete ... >>>> >>>> you need to make sure that you only remvoe the unused old serivces >>>> but i think that would fix your issue. >>> >>> That's the thing, they don't show up in the compute service list. But >>> I also found them in the resource_providers table, only the old >>> compute nodes appear here: >>> >>> MariaDB [nova]> select name from nova_api.resource_providers; >>> +--------------------------+ >>>> name | >>> +--------------------------+ >>>> compute1.fqdn | >>>> compute2.fqdn | >>>> compute3.fqdn | >>>> compute4.fqdn | >>> +--------------------------+ >> ah in that case the compute service delete is ment to remove the RPs too >> but if the RP had stale allcoation at teh time of the delete the RP >> delete will fail >> >> what you proably need to do in this case is check if the RPs still >> have allocations and if so >> verify that the allocation are owned by vms that nolonger exist. >> if that is the case you should be able to delete teh allcaotion and >> then the RP >> if the allocations are related to active vms that are now on the >> rebuild nodes then you will have to try and >> heal the allcoations. >> >> there is a openstack client extention called osc-placement that you >> can install to help. >> we also have a heal allcoation command in nova-manage that may help >> but the next step would be to validate >> if the old RPs are still in use or not. from there you can then >> work to align novas and placment view with >> the real toplogy. >> >> that could invovle removing the old compute nodes form the >> compute_nodes table or marking them as deleted but >> both nova db and plamcent need to be kept in sysnc to correct your >> current issue. 
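If I read the docs correctly, the checks with the osc-placement plug-in and nova-manage would be something like the following; the <rp-uuid> and <instance-uuid> values are only placeholders, not values from our cloud:

# placeholders, not real UUIDs from our environment
openstack resource provider list
openstack resource provider show <rp-uuid> --allocations
openstack resource provider allocation show <instance-uuid>
nova-manage placement heal_allocations --instance <instance-uuid> --verbose

And only once an old resource provider really has no allocations left:

openstack resource provider delete <rp-uuid>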
>> >>> >>> >>> Zitat von Sean Mooney : >>> >>>> On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >>>> > Hi *, >>>> > >>>> > I have a quick question, last year we migrated our OpenStack to a >>>> > highly available environment through a reinstall of all nodes. The >>>> > migration went quite well, we're working happily in the new cloud but >>>> > the databases still contain deprecated data. For example, the >>>> > nova-scheduler logs lines like these on a regular basis: >>>> > >>>> > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING >>>> > nova.scheduler.host_manager [...] No compute service record found for >>>> > host compute1 >>>> > >>>> > This is one of the old compute nodes that has been reinstalled and is >>>> > now compute01. I tried to find the right spot to delete some lines in >>>> > the DB but there are a couple of places so I wanted to check and ask >>>> > you for some insights. >>>> > >>>> > The scheduler messages seem to originate in >>>> > >>>> > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >>>> > >>>> > ---snip--- >>>> >          for cell_uuid, computes in compute_nodes.items(): >>>> >              for compute in computes: >>>> >                  service = services.get(compute.host) >>>> > >>>> >                  if not service: >>>> >                      LOG.warning( >>>> >                          "No compute service record found for host >>>> > %(host)s", >>>> >                          {'host': compute.host}) >>>> >                      continue >>>> > ---snip--- >>>> > >>>> > So I figured it could be this table in the nova DB: >>>> > >>>> > ---snip--- >>>> > MariaDB [nova]> select host,deleted from compute_nodes; >>>> > +-----------+---------+ >>>> > > host | deleted | >>>> > +-----------+---------+ >>>> > > compute01 | 0 | >>>> > > compute02 | 0 | >>>> > > compute03 | 0 | >>>> > > compute04 | 0 | >>>> > > compute05 | 0 | >>>> > > compute1 | 0 | >>>> > > compute2 | 0 | >>>> > > compute3 | 0 | >>>> > > compute4 | 0 | >>>> > +-----------+---------+ >>>> > ---snip--- >>>> > >>>> > What would be the best approach here to clean up a little? I believe >>>> > it would be safe to simply purge those lines containing the old >>>> > compute node, but there might be a smoother way. Or maybe there are >>>> > more places to purge old data from? >>>> so the step you porably missed was deleting the old compute >>>> service records >>>> >>>> so you need to do >>>> openstack compute service list to get teh compute service ids >>>> then do >>>> openstack compute service delete ... >>>> >>>> you need to make sure that you only remvoe the unused old serivces >>>> but i think that would fix your issue. >>>> >>>> > >>>> > I'd appreciate any ideas. >>>> > >>>> > Regards, >>>> > Eugen >>>> > >>>> > >>> >>> >>> From vhariria at redhat.com Mon Mar 8 21:15:49 2021 From: vhariria at redhat.com (Vida Haririan) Date: Mon, 8 Mar 2021 16:15:49 -0500 Subject: [Manila ] Bug squash happening next week starting Monday March 15th Message-ID: Hi all, We are planning a new Bug Squash event for next week. The event will be held from 15th through 18th March, 2021, providing plenty of time for all interested to participate. There will be a synchronous bug triage/review call held simultaneously on our Freenode channel #openstack-manila on 18th March, 2021 at 15:00 UTC and on this Jitsi bridge [1]. A list of selected bugs will be provided in advance here [2]. 
As always, please feel free to update the list with any bugs that you would like us to focus on during this period. Looking forward to you joining us for this event and many thanks in advance for your participation. :) Regards, Vida [1] https://meetpad.opendev.org/ManilaW-ReleaseBugSquash-II [2] https://ethercalc.openstack.org/hrs8m6sqpmaz -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Mon Mar 8 21:45:25 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Mon, 8 Mar 2021 18:45:25 -0300 Subject: [cinder] Bug deputy report for week of 2021-03-01 Message-ID: This is a bug report from 2021-03-03 to 2021-03-08. Looks like we have a lot of things going on. Some of these bugs were discussed at the Cinder meeting last Wednesday 2021-02-24. Critical: - High: - https://bugs.launchpad.net/cinder/+bug/1916980: "Cinder sends old db object when delete an attachment ''. Assigned to Gorka Eguileor (gorka). - https://bugs.launchpad.net/cinder/+bug/1917450: "Automatic quota refresh counting twice migrating volumes". Assigned to Gorka Eguileor (gorka). - https://bugs.launchpad.net/cinder/+bug/1917287: "Ambiguous error logs for retype of inspur storage volume". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917353: "image_conversion_cpu_limit and image_conversion_address_space_limit settings ignored in /etc/cinder/cinder.conf". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917574: "Cannot show volumes with name for non-admins". Assigned to Rajat Dhasmana. Medium: - https://bugs.launchpad.net/cinder/+bug/1918099: " Nimble revert to snapshot bug". Assigned to Ajitha Robert (ajitharobert01). Low: - https://bugs.launchpad.net/cinder/+bug/1917293: "Scheduling is not even among multiple thin provisioning pools which have different sizes". Unassigned. Incomplete: - https://bugs.launchpad.net/cinder/+bug/1916843: " Backup create failed: RBD volume flatten too long causing mq to timed out." Unassigned. Undecided/Unconfirmed: - https://bugs.launchpad.net/cinder/+bug/1918119: "Volume backup timeout for large volumes". Assigned to Kiran Pawar (kiranpawar89). - https://bugs.launchpad.net/cinder/+bug/1918102: "Cinder-backup progress notification has incorrect percentage." Assigned to Jon Cui (czl389). - https://bugs.launchpad.net/cinder/+bug/1917797: "Cinder request to glance does not support TLS". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917795: "Cinder ignores reader role conventions in default policies". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917750: " Running parallel iSCSI/LVM c-vol backends is causing random failures in CI". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917605: "Bulk create Hyperswap volume is failing". Assigned to Girish Chilukur. Not a bug:- Feel free to reply/reach me if I missed something. Regards Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Mon Mar 8 21:59:21 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 8 Mar 2021 13:59:21 -0800 Subject: [election][designate] PTL candidacy for Xena Message-ID: Hello OpenStack community, I would like to announce my candidacy for PTL of Designate for the Xena cycle. The Wallaby release cycle seems like it went by very quickly and I would like to continue to support the Designate team during the Xena release. 
Thank you for your support and your consideration for Xena, Michael Johnson (johnsom) From gmann at ghanshyammann.com Mon Mar 8 23:24:55 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 08 Mar 2021 17:24:55 -0600 Subject: [all][qa] Gate failure for <= stable/train and Tempest master gate (Do not recheck) In-Reply-To: <1781217b005.10f2576cb155502.3684013520139333947@ghanshyammann.com> References: <1781217b005.10f2576cb155502.3684013520139333947@ghanshyammann.com> Message-ID: <178142866c5.e2a5adb3182208.8713510303708224396@ghanshyammann.com> ---- On Mon, 08 Mar 2021 07:47:25 -0600 Ghanshyam Mann wrote ---- > Hello Everyone, > > get-pip.py url for py2.7 has been changed which causing failure on stable/train or older > branches and Tempest master gate. > > Thanks, Dan, Elod for fixing and backporting those. Please wait for the below fixes to merge > and do not recheck. > > https://review.opendev.org/q/Id62e91b1609db4b1d2fa425010bac1ce77e9fc51 All fixes are merged, you can recheck now. -gmann > > -gmann > > From kennelson11 at gmail.com Tue Mar 9 00:55:17 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 8 Mar 2021 16:55:17 -0800 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations Last Days Message-ID: Hello All! A quick reminder that we are in the last hours for declaring PTL and TC candidacies. Nominations are open until Mar 09, 2021 23:45 UTC. If you want to stand for election, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Election statistics[2]: Nominations started @ 2021-03-02 23:45:00 UTC Nominations end @ 2021-03-09 23:45:00 UTC Nominations duration : 7 days, 0:00:00 Nominations remaining : 1 day, 0:48:52 Nominations progress : 85.23% --------------------------------------------------- Projects[1] : 49 Projects with candidates : 22 ( 44.90%) Projects with election : 0 ( 0.00%) --------------------------------------------------- Need election : 0 () Need appointment : 27 (Adjutant Barbican Blazar Cinder Cloudkitty Cyborg Designate Horizon Keystone Kolla Magnum Masakari Mistral Monasca OpenStackAnsible OpenStack_Charms Openstack_Chef Quality_Assurance Rally Requirements Senlin Storlets Swift Telemetry Vitrage Zaqar Zun) =================================================== Stats gathered @ 2021-03-08 22:56:08 UTC This means that with approximately 2 days left, 27 projects will be deemed leaderless. In this case the TC will oversee PTL selection as described by [3]. Thank you, -Kendall Nelson (diablo_rojo) & the Election Officials [1] https://governance.openstack.org/election/#how-to-submit-a-candidacy [2] Any open reviews at https://review.openstack.org/#/q/is:open+project:openstack/election have not been factored into these stats. [3] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html __________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Mar 9 00:55:30 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 8 Mar 2021 16:55:30 -0800 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: Just an FYI, stackalytics hasn't updated since January, so it's not going to be a good, current, source of information. 
Michael On Mon, Mar 8, 2021 at 12:00 PM Andrii Ostapenko wrote: > > Hi Jimmy, > > Stackalytics [0] currently tracks emails identifying and grouping them > by author, company, module (with some success). > To answer your challenge we'll need to add a grouping by thread, but > still need some criteria to mark thread as possibly unanswered. E.g. > threads having a single message or only messages from a single author, > or if the last message in a thread contains a question mark. > > With no additional information marking thread as closed explicitly, > all this will still be a guessing, producing candidates for unanswered > threads. > > Thank you for bringing this up! > > [0] https://www.stackalytics.io/?metric=emails > > > On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > > > Hi All - > > > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > > > Open to other ideas/thoughts :) > > > > Cheers! > > Jimmy > From anost1986 at gmail.com Tue Mar 9 01:08:23 2021 From: anost1986 at gmail.com (Andrii Ostapenko) Date: Mon, 8 Mar 2021 19:08:23 -0600 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: Michael, https://www.stackalytics.io is updated daily and maintained. On Mon, Mar 8, 2021 at 6:55 PM Michael Johnson wrote: > > Just an FYI, stackalytics hasn't updated since January, so it's not > going to be a good, current, source of information. > > Michael > > On Mon, Mar 8, 2021 at 12:00 PM Andrii Ostapenko wrote: > > > > Hi Jimmy, > > > > Stackalytics [0] currently tracks emails identifying and grouping them > > by author, company, module (with some success). > > To answer your challenge we'll need to add a grouping by thread, but > > still need some criteria to mark thread as possibly unanswered. E.g. > > threads having a single message or only messages from a single author, > > or if the last message in a thread contains a question mark. > > > > With no additional information marking thread as closed explicitly, > > all this will still be a guessing, producing candidates for unanswered > > threads. > > > > Thank you for bringing this up! > > > > [0] https://www.stackalytics.io/?metric=emails > > > > > > On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > > > > > Hi All - > > > > > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. 
However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > > > > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > > > > > Open to other ideas/thoughts :) > > > > > > Cheers! > > > Jimmy > > From johnsomor at gmail.com Tue Mar 9 01:17:05 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 8 Mar 2021 17:17:05 -0800 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: Oh, nice. I missed the memo on the URL change. Thanks, Michael On Mon, Mar 8, 2021 at 5:08 PM Andrii Ostapenko wrote: > > Michael, > > https://www.stackalytics.io is updated daily and maintained. > > On Mon, Mar 8, 2021 at 6:55 PM Michael Johnson wrote: > > > > Just an FYI, stackalytics hasn't updated since January, so it's not > > going to be a good, current, source of information. > > > > Michael > > > > On Mon, Mar 8, 2021 at 12:00 PM Andrii Ostapenko wrote: > > > > > > Hi Jimmy, > > > > > > Stackalytics [0] currently tracks emails identifying and grouping them > > > by author, company, module (with some success). > > > To answer your challenge we'll need to add a grouping by thread, but > > > still need some criteria to mark thread as possibly unanswered. E.g. > > > threads having a single message or only messages from a single author, > > > or if the last message in a thread contains a question mark. > > > > > > With no additional information marking thread as closed explicitly, > > > all this will still be a guessing, producing candidates for unanswered > > > threads. > > > > > > Thank you for bringing this up! > > > > > > [0] https://www.stackalytics.io/?metric=emails > > > > > > > > > On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > > > > > > > Hi All - > > > > > > > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > > > > > > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > > > > > > > Open to other ideas/thoughts :) > > > > > > > > Cheers! 
> > > > Jimmy > > > From whayutin at redhat.com Tue Mar 9 01:42:46 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 8 Mar 2021 18:42:46 -0700 Subject: [TripleO] stable/rocky End Of Life - closed for new code In-Reply-To: References: Message-ID: Nice work Marios!! Thank you On Mon, Mar 8, 2021 at 10:03 AM Marios Andreou wrote: > hello TripleO > > FYI we have now merged > https://review.opendev.org/c/openstack/releases/+/774244 which tags > stable/rocky for tripleo repos as End Of Life. > > *** WE ARE NO LONGER ACCEPTING PATCHES *** for tripleo stable/rocky [1]. > > In due course the rocky branch will be removed from all tripleO repos, > however this will not be possible if there are pending patches open against > any of those [1]. > > So please avoid posting anything to tripleo stable/rocky and if you see a > review posted please help by commenting and informing the author that rocky > is closed for tripleo. > > thank you for reading and your help in this matter > > regards, marios > > [1] https://releases.openstack.org/teams/tripleo.html#rocky > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Tue Mar 9 04:19:14 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Tue, 9 Mar 2021 17:19:14 +1300 Subject: [election][trove] PTL candidacy for Xena Message-ID: Hi community, I am still very glad to announce my PTL candidacy for Trove, the database-as-a-service in OpenStack. I've been served as Trove PTL since Train release and I spent more than 50% of my working time on this project. I'm so lucky to work for a company that built a public cloud based on OpenStack and has been investing heavily in Open Source since day one. We also deployed Trove in our production, we report bug and fix in the upstream, we don't maintain private changes, we keep growing together with the community. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) -------------- next part -------------- An HTML attachment was scrubbed... URL: From manchandavishal143 at gmail.com Tue Mar 9 04:41:18 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Tue, 9 Mar 2021 10:11:18 +0530 Subject: [election][horizon] PTL candidacy for Xena Message-ID: Hi everyone, I would like to announce my candidacy for PTL of Horizon for Xena release. I have been actively contributing to Horizon from stein release [1] and become horizon core reviewer in early train release. During these years, my focus was on filing horizon feature-gap [2], bug-fixing, and stabilizing horizon and its plugins. I am very grateful to all the people that have mentored me and helped me throughout these years. As a PTL I will focus on the following areas: * Migrate horizon & its plugins to the next LTS version of both Django and Nodejs. * Focus on specific set of features-gap instead of targeting all or many at a time. For example, 'System Scope Support' is one of the highest priority we should do in Xena cycle. * Reduce the number of New/Open bugs which is high currently. Something we can do weekly basis in rotation basis or together in meeting etc. * Help new contributors to work on the horizon. I am looking forward to working together with all of you on the Xena release. 
Thank you, Vishal Manchanda(irc: vishalmanchanda) [1] https://www.stackalytics.io/?metric=commits&release=all&user_id=vishalmanchanda [2] https://etherpad.opendev.org/p/horizon-feature-gap -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Mar 9 05:13:00 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 9 Mar 2021 00:13:00 -0500 Subject: [election][cinder] PTL candidacy for Xena Message-ID: Hello everyone, I'd like to announce my candidacy for Cinder PTL for the Xena cycle. The primary challenge we face in the Cinder community is that our reviewing bandwidth has declined at the same time third-party driver contributions--new drivers, new features for existing drivers, and driver bugfixes--have been increasing. Similarly, we've been adding better CI coverage (good) while at the same time our gate jobs have become increasingly unstable (not due to the new tests, there are some old failures which seem to be occurring more often). We need to add some new core reviewers in Xena. Luckily, some people have been increasing their review participation recently, so there are community members getting themselves into a position to help the project in this capacity. (And anyone else currently working on cinder project who's interested becoming a cinder core, please contact me (or any of the current cores) to discuss what the expectations are.) We'll also be making it a priority to improve the gate jobs (or else we'll never be able to get anything merged). As far as community activity goes, the multiple virtual mid-cycle meetings have continued to be productive, and the cinder meeting in videoconference that we've been having once a month seems popular and gives us a break from the strict IRC meeting format. There's support to make the Festival of XS Reviews a recurring event, and Sofia has proposed holding a separate (short) bug meeting which we'll start doing soon. Hopefully all these events will help keep current contributors engaged and make it easier for other people to participate more fully. With respect to the details of Cinder development in Xena, I expect those to emerge from our virtual PTG discussions in April. You can help set the agenda here: https://etherpad.opendev.org/p/xena-ptg-cinder-planning Thanks for reading this far, and thank you for your consideration. Brian Rosmaita (rosmaita) https://review.opendev.org/c/openstack/election/+/779425 From chris.macnaughton at canonical.com Tue Mar 9 07:32:09 2021 From: chris.macnaughton at canonical.com (Chris MacNaughton) Date: Tue, 9 Mar 2021 08:32:09 +0100 Subject: [election][charms] PTL candidacy for Xena Message-ID: <44ee559d-09bb-8376-975a-b58be563a11a@canonical.com> Hello all, I would like to announce my candidacy for PTL of the OpenStack Charms project[1] for the Xena cycle. Through my time contributing to the OpenStack Charms project as a core team member, I have experienced working on many of the charms in both a bug-fix and new feature capacity. Additionally, I have made upstream contributions as needed. The OpenStack Charms project has an increasing community both consuming the charms, as well as contributing to them, and I think that it is important to actively nurture this community. In addition to our user community, it is important for OpenStack Charms members to integrate more widely in the wider OpenStack community. 
[1]: https://review.openstack.org/#/c/641571/ -- Chris MacNaughton -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From themasch at gmx.net Tue Mar 9 07:57:19 2021 From: themasch at gmx.net (MaSch) Date: Tue, 9 Mar 2021 08:57:19 +0100 Subject: How to Handle RabbitMQ peak after restarting neutron-openvswitch-agent Message-ID: <63b3a5ef-c8e4-d03e-0b0b-0b0b3eb92e35@gmx.net> Hello at all. After adding an new direct attached VLAN to our computeNodes and restarting neutron-openvswitch-agent, we detected a huge peak in RabbitMQ Messages. Mostly "neutron-vo-Port-1.1_fanout" these messages seem every node produced  5000+ Messages. Due to a bridge_mapping mismatch, the agent repeatedly restarted and produced more and more Messages. At the peak there were about 2Million messages in the queue. As we ran into Network issues caused by Messaging timeouts i would like to know if there is a procedure to handle this messages. Is it save to delete the queue as those messages? It seems they disappeared after a timeout period of about 30 Minutes. are currently using OpenStack Queens release. Thanks a lot in advance. Best regards MaSch From eblock at nde.ag Tue Mar 9 09:20:54 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 09 Mar 2021 09:20:54 +0000 Subject: Cleanup database(s) In-Reply-To: References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> Message-ID: <20210309092054.Horde.DdCSVbCMqVUJJVkuZ0mBqOK@webmail.nde.ag> Hi again, I just wanted to get some clarification on how to proceed. > what you proably need to do in this case is check if the RPs still > have allocations and if so > verify that the allocation are owned by vms that nolonger exist. > if that is the case you should be able to delete teh allcaotion and > then the RP > if the allocations are related to active vms that are now on the > rebuild nodes then you will have to try and > heal the allcoations. I checked all allocations for the old compute nodes, those are all existing VMs. So simply deleting the allocations won't do any good, I guess. From [1] I understand that I should overwrite all allocations (we're on Train so there's no "unset" available yet) for those VMs to point to the new compute nodes (resource_providers). After that I should delete the resource providers, correct? I ran "heal_allocations" for one uncritical instance, but it didn't have any visible effect, the allocations still show one of the old compute nodes. What I haven't tried yet is to delete allocations for an instance and then try to heal it as the docs also mention. Do I understand that correctly or am I still missing something? Regards, Eugen [1] https://docs.openstack.org/nova/latest/admin/troubleshooting/orphaned-allocations.html Zitat von Sean Mooney : > On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: >> Thank you, Sean. >> >> > so you need to do >> > openstack compute service list to get teh compute service ids >> > then do >> > openstack compute service delete ... >> > >> > you need to make sure that you only remvoe the unused old serivces >> > but i think that would fix your issue. >> >> That's the thing, they don't show up in the compute service list. 
But >> I also found them in the resource_providers table, only the old >> compute nodes appear here: >> >> MariaDB [nova]> select name from nova_api.resource_providers; >> +--------------------------+ >> > name | >> +--------------------------+ >> > compute1.fqdn | >> > compute2.fqdn | >> > compute3.fqdn | >> > compute4.fqdn | >> +--------------------------+ > ah in that case the compute service delete is ment to remove the RPs too > but if the RP had stale allcoation at teh time of the delete the RP > delete will fail > > what you proably need to do in this case is check if the RPs still > have allocations and if so > verify that the allocation are owned by vms that nolonger exist. > if that is the case you should be able to delete teh allcaotion and > then the RP > if the allocations are related to active vms that are now on the > rebuild nodes then you will have to try and > heal the allcoations. > > there is a openstack client extention called osc-placement that you > can install to help. > we also have a heal allcoation command in nova-manage that may help > but the next step would be to validate > if the old RPs are still in use or not. from there you can then work > to align novas and placment view with > the real toplogy. > > that could invovle removing the old compute nodes form the > compute_nodes table or marking them as deleted but > both nova db and plamcent need to be kept in sysnc to correct your > current issue. > >> >> >> Zitat von Sean Mooney : >> >> > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >> > > Hi *, >> > > >> > > I have a quick question, last year we migrated our OpenStack to a >> > > highly available environment through a reinstall of all nodes. The >> > > migration went quite well, we're working happily in the new cloud but >> > > the databases still contain deprecated data. For example, the >> > > nova-scheduler logs lines like these on a regular basis: >> > > >> > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING >> > > nova.scheduler.host_manager [...] No compute service record found for >> > > host compute1 >> > > >> > > This is one of the old compute nodes that has been reinstalled and is >> > > now compute01. I tried to find the right spot to delete some lines in >> > > the DB but there are a couple of places so I wanted to check and ask >> > > you for some insights. 
>> > > >> > > The scheduler messages seem to originate in >> > > >> > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >> > > >> > > ---snip--- >> > >          for cell_uuid, computes in compute_nodes.items(): >> > >              for compute in computes: >> > >                  service = services.get(compute.host) >> > > >> > >                  if not service: >> > >                      LOG.warning( >> > >                          "No compute service record found for host >> > > %(host)s", >> > >                          {'host': compute.host}) >> > >                      continue >> > > ---snip--- >> > > >> > > So I figured it could be this table in the nova DB: >> > > >> > > ---snip--- >> > > MariaDB [nova]> select host,deleted from compute_nodes; >> > > +-----------+---------+ >> > > > host | deleted | >> > > +-----------+---------+ >> > > > compute01 | 0 | >> > > > compute02 | 0 | >> > > > compute03 | 0 | >> > > > compute04 | 0 | >> > > > compute05 | 0 | >> > > > compute1 | 0 | >> > > > compute2 | 0 | >> > > > compute3 | 0 | >> > > > compute4 | 0 | >> > > +-----------+---------+ >> > > ---snip--- >> > > >> > > What would be the best approach here to clean up a little? I believe >> > > it would be safe to simply purge those lines containing the old >> > > compute node, but there might be a smoother way. Or maybe there are >> > > more places to purge old data from? >> > so the step you porably missed was deleting the old compute >> service records >> > >> > so you need to do >> > openstack compute service list to get teh compute service ids >> > then do >> > openstack compute service delete ... >> > >> > you need to make sure that you only remvoe the unused old serivces >> > but i think that would fix your issue. >> > >> > > >> > > I'd appreciate any ideas. >> > > >> > > Regards, >> > > Eugen >> > > >> > > >> >> >> From mark at stackhpc.com Tue Mar 9 09:53:07 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 9 Mar 2021 09:53:07 +0000 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: On Tue, 9 Mar 2021 at 01:17, Michael Johnson wrote: > > Oh, nice. I missed the memo on the URL change. If stackalytics.com is less well maintained, should it be abandoned? Or redirect to stackalytics.io? Mark > > Thanks, > Michael > > On Mon, Mar 8, 2021 at 5:08 PM Andrii Ostapenko wrote: > > > > Michael, > > > > https://www.stackalytics.io is updated daily and maintained. > > > > On Mon, Mar 8, 2021 at 6:55 PM Michael Johnson wrote: > > > > > > Just an FYI, stackalytics hasn't updated since January, so it's not > > > going to be a good, current, source of information. > > > > > > Michael > > > > > > On Mon, Mar 8, 2021 at 12:00 PM Andrii Ostapenko wrote: > > > > > > > > Hi Jimmy, > > > > > > > > Stackalytics [0] currently tracks emails identifying and grouping them > > > > by author, company, module (with some success). > > > > To answer your challenge we'll need to add a grouping by thread, but > > > > still need some criteria to mark thread as possibly unanswered. E.g. > > > > threads having a single message or only messages from a single author, > > > > or if the last message in a thread contains a question mark. > > > > > > > > With no additional information marking thread as closed explicitly, > > > > all this will still be a guessing, producing candidates for unanswered > > > > threads. > > > > > > > > Thank you for bringing this up! 
> > > > > > > > [0] https://www.stackalytics.io/?metric=emails > > > > > > > > > > > > On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > > > > > > > > > Hi All - > > > > > > > > > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > > > > > > > > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > > > > > > > > > Open to other ideas/thoughts :) > > > > > > > > > > Cheers! > > > > > Jimmy > > > > > From mark at stackhpc.com Tue Mar 9 10:48:15 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 9 Mar 2021 10:48:15 +0000 Subject: [election][kolla] PTL candidacy for Xena Message-ID: Hi, I'd like to nominate myself to serve as the Kolla PTL for the Xena cycle. I have been PTL for the last 4 cycles, and would like the opportunity to continue to lead the team. Overall I think that the project is moving in the right direction. Some things that I think we should focus on in the next cycle are: * remove our dependency on Dockerhub in CI testing to avoid pull limits * improve the documentation * expand the core team Thanks for reading, Mark Goddard (mgoddard) From tkajinam at redhat.com Tue Mar 9 10:48:22 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 9 Mar 2021 19:48:22 +0900 Subject: [election][storlets] PTL Candidacy for Xena Message-ID: Hi All, I'd like to announce my candidacy to continue the PTL role for Storlets project in the Xena cycle. Because we currently don't have very active development in Storlets project, I'll propose we mainly focus on the following two items during the next cycle. - Improve stability of our code - Keep compatibility with the latest items - Latest Swift - Latest Ubuntu LTS In addition, I'll initiate some discussion about our future release and management model, considering decreasing resources for reviews and developments. Thank you for your consideration. Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Mar 9 11:17:08 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 9 Mar 2021 12:17:08 +0100 Subject: [election][blazar] PTL candidacy for Xena Message-ID: Hi, I would like to self-nominate for the role of PTL of Blazar for the Xena release cycle. I have been PTL since the Stein cycle and I am willing to continue in this role. I will keep working with existing contributors to maintain the Blazar software for their use cases and encourage the addition of new functionalities. 
Thank you for your support, Pierre Riteau (priteau) From balazs.gibizer at est.tech Tue Mar 9 11:25:21 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 09 Mar 2021 12:25:21 +0100 Subject: [nova][placement] Xena PTG Message-ID: <929PPQ.V3OOB6GJ47RA3@est.tech> Hi, As you probably already know that the next PTG will be held between Apr 19 - 23. To organize the gathering I need your help: 1) Please fill the doodle[1] with timeslots when you have time to join to our sessions. Please do this before end of 21st of March. 2) Please add your PTG topic in the etherpad[2]. If you feel your topic needs a cross project cooperation please note that in the etherpad which other teams are needed. Cheers, gibi [1] https://doodle.com/poll/ib2eu3c4346iqii3 [2] https://etherpad.opendev.org/p/nova-xena-ptg From thierry at openstack.org Tue Mar 9 11:39:57 2021 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 9 Mar 2021 12:39:57 +0100 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: Message-ID: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> We got two similar release processing failures: > - tag-releases https://zuul.opendev.org/t/openstack/build/707bb50b4bf645f8903d8ca59b43abbf : FAILURE in 2m 31s This one was an error running the tag-releases post-merge job after https://review.opendev.org/c/openstack/releases/+/779217 merged. It failed with: Running: git push --dry-run ssh://release at review.openstack.org:29418/openstack/tripleo-ipsec.git --all ssh://release at review.openstack.org:29418/openstack/tripleo-ipsec.git did not work. Description: Host key verification failed. > - tag-releases https://zuul.opendev.org/t/openstack/build/655e8980a53c4e829de25f0b8507b5f6 : FAILURE in 2m 57s Similar failure while running the tag-releases post-merge job after https://review.opendev.org/c/openstack/releases/+/779218 merged. Since this "host key verification failed" error hit tripleo-ipsec three times in a row, I suspect we have something stuck (or we always hit the same cache/worker). -- Thierry Carrez (ttx) From christian.rohmann at inovex.de Tue Mar 9 13:17:16 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Tue, 9 Mar 2021 14:17:16 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de> <5C651C9C-0D00-4CB8-9992-4AC23D92FE38@gmail.com> Message-ID: <2a893395-8af6-5fdf-cf5f-303b8bb1394b@inovex.de> Hello again, I just ran into the openstack-resource-manager (https://github.com/SovereignCloudStack/openstack-resource-manager) apparently part of the SovereignCloudStack - SCS effort out of the Gaia-X initiative. That tool seems to solve once again the need to clean up prior to a projects deletion. 
Regards Christian From artem.goncharov at gmail.com Tue Mar 9 13:23:47 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 9 Mar 2021 14:23:47 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: <2a893395-8af6-5fdf-cf5f-303b8bb1394b@inovex.de> References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de> <5C651C9C-0D00-4CB8-9992-4AC23D92FE38@gmail.com> <2a893395-8af6-5fdf-cf5f-303b8bb1394b@inovex.de> Message-ID: <095A8B69-9EDC-4AF7-90AC-D16B0C484361@gmail.com> Hi, This is just a tiny subset of OpenStack resources with no possible flexibility vs native implementation in SDK/CLI Artem > On 9. Mar 2021, at 14:17, Christian Rohmann wrote: > > Hello again, > > I just ran into the openstack-resource-manager (https://github.com/SovereignCloudStack/openstack-resource-manager) > apparently part of the SovereignCloudStack - SCS effort out of the Gaia-X initiative. > > That tool seems to solve once again the need to clean up prior to a projects deletion. > > > Regards > > > Christian > > > From smooney at redhat.com Tue Mar 9 13:43:41 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 09 Mar 2021 13:43:41 +0000 Subject: Cleanup database(s) In-Reply-To: <20210309092054.Horde.DdCSVbCMqVUJJVkuZ0mBqOK@webmail.nde.ag> References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> <20210309092054.Horde.DdCSVbCMqVUJJVkuZ0mBqOK@webmail.nde.ag> Message-ID: On Tue, 2021-03-09 at 09:20 +0000, Eugen Block wrote: > Hi again, > > I just wanted to get some clarification on how to proceed. > > > what you proably need to do in this case is check if the RPs still > > have allocations and if so > > verify that the allocation are owned by vms that nolonger exist. > > if that is the case you should be able to delete teh allcaotion and > > then the RP > > if the allocations are related to active vms that are now on the > > rebuild nodes then you will have to try and > > heal the allcoations. > > I checked all allocations for the old compute nodes, those are all > existing VMs. So simply deleting the allocations won't do any good, I > guess. From [1] I understand that I should overwrite all allocations > (we're on Train so there's no "unset" available yet) for those VMs to > point to the new compute nodes (resource_providers). After that I > should delete the resource providers, correct? > I ran "heal_allocations" for one uncritical instance, but it didn't > have any visible effect, the allocations still show one of the old > compute nodes. > What I haven't tried yet is to delete allocations for an instance and > then try to heal it as the docs also mention. > > Do I understand that correctly or am I still missing something? i think the problem is you reinstalled the cloud with exisitng instances and change the hostnames of the compute nodes which is not a supported operations.(specifically changing the hostname of a computenode with vms is not supported) so in doing so that would cause all the compute service to be recreated for the new compute nodes and create new RPs in placment. the existing instnace however would still have there allocation on the old RPs and the old hostnames woudl be set in the instnace.host can you confirm that? 
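One way to check that (the <instance-uuid> below is just a placeholder) is to compare what nova reports for an affected instance with the resource provider that currently holds its allocations:

# <instance-uuid> is a placeholder for one of the affected VMs
openstack server show <instance-uuid> -c OS-EXT-SRV-ATTR:host -c OS-EXT-SRV-ATTR:hypervisor_hostname
openstack resource provider allocation show <instance-uuid>

That should show whether nova and placement agree on which compute node the VM actually lives on.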
in this case you dont actully have orphaned allocation exactly you have allcoation against the incorrect RP but if the instnace.host does not match the hypervisor hostname that its on then heal allocations will not be able to fix that. just looking at your orginal message you said "last year we migrated our OpenStack to a highly available environment through a reinstall of all nodes" i had assumed you have no instnace form the orignial enviornment with the old names if you had exising instnaces with the old name then you would have had to ensure the host names did not change to do that correctly without breaking the resouce tracking in nova. can you clarify those point. e.g. were all the workload removed before the reinstall? if not did the host name change? that is harder probelm to fix unless you can restore the old host name but i suspect you likely have booted new vms if this even has been runing for a year. > > Regards, > Eugen > > > [1] > https://docs.openstack.org/nova/latest/admin/troubleshooting/orphaned-allocations.html > > Zitat von Sean Mooney : > > > On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: > > > Thank you, Sean. > > > > > > > so you need to do > > > > openstack compute service list to get teh compute service ids > > > > then do > > > > openstack compute service delete ... > > > > > > > > you need to make sure that you only remvoe the unused old serivces > > > > but i think that would fix your issue. > > > > > > That's the thing, they don't show up in the compute service list. But > > > I also found them in the resource_providers table, only the old > > > compute nodes appear here: > > > > > > MariaDB [nova]> select name from nova_api.resource_providers; > > > +--------------------------+ > > > > name | > > > +--------------------------+ > > > > compute1.fqdn | > > > > compute2.fqdn | > > > > compute3.fqdn | > > > > compute4.fqdn | > > > +--------------------------+ > > ah in that case the compute service delete is ment to remove the RPs too > > but if the RP had stale allcoation at teh time of the delete the RP > > delete will fail > > > > what you proably need to do in this case is check if the RPs still > > have allocations and if so > > verify that the allocation are owned by vms that nolonger exist. > > if that is the case you should be able to delete teh allcaotion and > > then the RP > > if the allocations are related to active vms that are now on the > > rebuild nodes then you will have to try and > > heal the allcoations. > > > > there is a openstack client extention called osc-placement that you > > can install to help. > > we also have a heal allcoation command in nova-manage that may help > > but the next step would be to validate > > if the old RPs are still in use or not. from there you can then work > > to align novas and placment view with > > the real toplogy. > > > > that could invovle removing the old compute nodes form the > > compute_nodes table or marking them as deleted but > > both nova db and plamcent need to be kept in sysnc to correct your > > current issue. > > > > > > > > > > > Zitat von Sean Mooney : > > > > > > > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: > > > > > Hi *, > > > > > > > > > > I have a quick question, last year we migrated our OpenStack to a > > > > > highly available environment through a reinstall of all nodes. The > > > > > migration went quite well, we're working happily in the new cloud but > > > > > the databases still contain deprecated data. 
For example, the > > > > > nova-scheduler logs lines like these on a regular basis: > > > > > > > > > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING > > > > > nova.scheduler.host_manager [...] No compute service record found for > > > > > host compute1 > > > > > > > > > > This is one of the old compute nodes that has been reinstalled and is > > > > > now compute01. I tried to find the right spot to delete some lines in > > > > > the DB but there are a couple of places so I wanted to check and ask > > > > > you for some insights. > > > > > > > > > > The scheduler messages seem to originate in > > > > > > > > > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py > > > > > > > > > > ---snip--- > > > > >          for cell_uuid, computes in compute_nodes.items(): > > > > >              for compute in computes: > > > > >                  service = services.get(compute.host) > > > > > > > > > >                  if not service: > > > > >                      LOG.warning( > > > > >                          "No compute service record found for host > > > > > %(host)s", > > > > >                          {'host': compute.host}) > > > > >                      continue > > > > > ---snip--- > > > > > > > > > > So I figured it could be this table in the nova DB: > > > > > > > > > > ---snip--- > > > > > MariaDB [nova]> select host,deleted from compute_nodes; > > > > > +-----------+---------+ > > > > > > host | deleted | > > > > > +-----------+---------+ > > > > > > compute01 | 0 | > > > > > > compute02 | 0 | > > > > > > compute03 | 0 | > > > > > > compute04 | 0 | > > > > > > compute05 | 0 | > > > > > > compute1 | 0 | > > > > > > compute2 | 0 | > > > > > > compute3 | 0 | > > > > > > compute4 | 0 | > > > > > +-----------+---------+ > > > > > ---snip--- > > > > > > > > > > What would be the best approach here to clean up a little? I believe > > > > > it would be safe to simply purge those lines containing the old > > > > > compute node, but there might be a smoother way. Or maybe there are > > > > > more places to purge old data from? > > > > so the step you porably missed was deleting the old compute > > > service records > > > > > > > > so you need to do > > > > openstack compute service list to get teh compute service ids > > > > then do > > > > openstack compute service delete ... > > > > > > > > you need to make sure that you only remvoe the unused old serivces > > > > but i think that would fix your issue. > > > > > > > > > > > > > > I'd appreciate any ideas. > > > > > > > > > > Regards, > > > > > Eugen > > > > > > > > > > > > > > > > > > > > > > From eblock at nde.ag Tue Mar 9 14:47:06 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 09 Mar 2021 14:47:06 +0000 Subject: Cleanup database(s) In-Reply-To: References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> <20210309092054.Horde.DdCSVbCMqVUJJVkuZ0mBqOK@webmail.nde.ag> Message-ID: <20210309144706.Horde.06Gm_1Yqz34Y3EPFacqnoVG@webmail.nde.ag> Hi, > i think the problem is you reinstalled the cloud with exisitng > instances and change the hostnames of the > compute nodes which is not a supported operations.(specifically > changing the hostname of a computenode with vms is not supported) > so in doing so that would cause all the compute service to be > recreated for the new compute nodes and create new RPs in placment. 
> the existing instnace however would still have there allocation on > the old RPs and the old hostnames woudl be set in the instnace.host > can you confirm that? this environment grew from being just an experiment to our production cloud, so there might be a couple of unsupported things, but it still works fine, so that's something. ;-) I'll try to explain and hopefully clarify some things. We upgraded the databases on a virtual machine prior to the actual cloud upgrade. Since the most important services successfully started we went ahead and installed two control nodes with pacemaker and imported the already upgraded databases. Then we started to evacuate the compute nodes one by one and added them to the new cloud environment while the old one was still up and running. To launch existing instances in the new cloud we had to experiment a little, but from previous troubleshooting sessions we knew which tables we had to change in order to bring the instances up on the new compute nodes. Basically, we changed instances.host and instances.node to reflect one of the new compute nodes. So the answer to your question would probably be "no", the instances.host don't have the old hostnames. > can you clarify those point. e.g. were all the workload removed > before the reinstall? if not did the host name change? > that is harder probelm to fix unless you can restore the old host > name but i suspect you likely have booted new vms if this even has > been runing > for a year. I understand, it seems as I'll have to go through the resource allocations one by one and update them in order to be able to remove the old RPs. One final question though, is there anything risky about updating the allocations to match the actual RP? I tested that for an uncritical instance, shut it down and booted it again, all without an issue, it seems. If I do that for the rest, ist there anything I should be aware of? From what I saw so far all new instances are allocated properly, so the placement itself seems to be working well, right? Thanks! Eugen Zitat von Sean Mooney : > On Tue, 2021-03-09 at 09:20 +0000, Eugen Block wrote: >> Hi again, >> >> I just wanted to get some clarification on how to proceed. >> >> > what you proably need to do in this case is check if the RPs still >> > have allocations and if so >> > verify that the allocation are owned by vms that nolonger exist. >> > if that is the case you should be able to delete teh allcaotion and >> > then the RP >> > if the allocations are related to active vms that are now on the >> > rebuild nodes then you will have to try and >> > heal the allcoations. >> >> I checked all allocations for the old compute nodes, those are all >> existing VMs. So simply deleting the allocations won't do any good, I >> guess. From [1] I understand that I should overwrite all allocations >> (we're on Train so there's no "unset" available yet) for those VMs to >> point to the new compute nodes (resource_providers). After that I >> should delete the resource providers, correct? >> I ran "heal_allocations" for one uncritical instance, but it didn't >> have any visible effect, the allocations still show one of the old >> compute nodes. >> What I haven't tried yet is to delete allocations for an instance and >> then try to heal it as the docs also mention. >> >> Do I understand that correctly or am I still missing something? 
> > i think the problem is you reinstalled the cloud with exisitng > instances and change the hostnames of the > compute nodes which is not a supported operations.(specifically > changing the hostname of a computenode with vms is not supported) > so in doing so that would cause all the compute service to be > recreated for the new compute nodes and create new RPs in placment. > the existing instnace however would still have there allocation on > the old RPs and the old hostnames woudl be set in the instnace.host > can you confirm that? > > in this case you dont actully have orphaned allocation exactly you > have allcoation against the incorrect RP but if the instnace.host > does not > match the hypervisor hostname that its on then heal allocations will > not be able to fix that. > > just looking at your orginal message you said "last year we migrated > our OpenStack to a highly available environment through a reinstall > of all nodes" > > i had assumed you have no instnace form the orignial enviornment > with the old names if you had exising instnaces with the old name > then you would > have had to ensure the host names did not change to do that > correctly without breaking the resouce tracking in nova. > > can you clarify those point. e.g. were all the workload removed > before the reinstall? if not did the host name change? > that is harder probelm to fix unless you can restore the old host > name but i suspect you likely have booted new vms if this even has > been runing > for a year. >> >> Regards, >> Eugen >> >> >> [1] >> https://docs.openstack.org/nova/latest/admin/troubleshooting/orphaned-allocations.html >> >> Zitat von Sean Mooney : >> >> > On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: >> > > Thank you, Sean. >> > > >> > > > so you need to do >> > > > openstack compute service list to get teh compute service ids >> > > > then do >> > > > openstack compute service delete ... >> > > > >> > > > you need to make sure that you only remvoe the unused old serivces >> > > > but i think that would fix your issue. >> > > >> > > That's the thing, they don't show up in the compute service list. But >> > > I also found them in the resource_providers table, only the old >> > > compute nodes appear here: >> > > >> > > MariaDB [nova]> select name from nova_api.resource_providers; >> > > +--------------------------+ >> > > > name | >> > > +--------------------------+ >> > > > compute1.fqdn | >> > > > compute2.fqdn | >> > > > compute3.fqdn | >> > > > compute4.fqdn | >> > > +--------------------------+ >> > ah in that case the compute service delete is ment to remove the RPs too >> > but if the RP had stale allcoation at teh time of the delete the RP >> > delete will fail >> > >> > what you proably need to do in this case is check if the RPs still >> > have allocations and if so >> > verify that the allocation are owned by vms that nolonger exist. >> > if that is the case you should be able to delete teh allcaotion and >> > then the RP >> > if the allocations are related to active vms that are now on the >> > rebuild nodes then you will have to try and >> > heal the allcoations. >> > >> > there is a openstack client extention called osc-placement that you >> > can install to help. >> > we also have a heal allcoation command in nova-manage that may help >> > but the next step would be to validate >> > if the old RPs are still in use or not. from there you can then work >> > to align novas and placment view with >> > the real toplogy. 
>> > >> > that could invovle removing the old compute nodes form the >> > compute_nodes table or marking them as deleted but >> > both nova db and plamcent need to be kept in sysnc to correct your >> > current issue. >> > >> > > >> > > >> > > Zitat von Sean Mooney : >> > > >> > > > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >> > > > > Hi *, >> > > > > >> > > > > I have a quick question, last year we migrated our OpenStack to a >> > > > > highly available environment through a reinstall of all nodes. The >> > > > > migration went quite well, we're working happily in the new >> cloud but >> > > > > the databases still contain deprecated data. For example, the >> > > > > nova-scheduler logs lines like these on a regular basis: >> > > > > >> > > > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 >> 23540 WARNING >> > > > > nova.scheduler.host_manager [...] No compute service record >> found for >> > > > > host compute1 >> > > > > >> > > > > This is one of the old compute nodes that has been >> reinstalled and is >> > > > > now compute01. I tried to find the right spot to delete >> some lines in >> > > > > the DB but there are a couple of places so I wanted to check and ask >> > > > > you for some insights. >> > > > > >> > > > > The scheduler messages seem to originate in >> > > > > >> > > > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >> > > > > >> > > > > ---snip--- >> > > > >          for cell_uuid, computes in compute_nodes.items(): >> > > > >              for compute in computes: >> > > > >                  service = services.get(compute.host) >> > > > > >> > > > >                  if not service: >> > > > >                      LOG.warning( >> > > > >                          "No compute service record found for host >> > > > > %(host)s", >> > > > >                          {'host': compute.host}) >> > > > >                      continue >> > > > > ---snip--- >> > > > > >> > > > > So I figured it could be this table in the nova DB: >> > > > > >> > > > > ---snip--- >> > > > > MariaDB [nova]> select host,deleted from compute_nodes; >> > > > > +-----------+---------+ >> > > > > > host | deleted | >> > > > > +-----------+---------+ >> > > > > > compute01 | 0 | >> > > > > > compute02 | 0 | >> > > > > > compute03 | 0 | >> > > > > > compute04 | 0 | >> > > > > > compute05 | 0 | >> > > > > > compute1 | 0 | >> > > > > > compute2 | 0 | >> > > > > > compute3 | 0 | >> > > > > > compute4 | 0 | >> > > > > +-----------+---------+ >> > > > > ---snip--- >> > > > > >> > > > > What would be the best approach here to clean up a little? I believe >> > > > > it would be safe to simply purge those lines containing the old >> > > > > compute node, but there might be a smoother way. Or maybe there are >> > > > > more places to purge old data from? >> > > > so the step you porably missed was deleting the old compute >> > > service records >> > > > >> > > > so you need to do >> > > > openstack compute service list to get teh compute service ids >> > > > then do >> > > > openstack compute service delete ... >> > > > >> > > > you need to make sure that you only remvoe the unused old serivces >> > > > but i think that would fix your issue. >> > > > >> > > > > >> > > > > I'd appreciate any ideas. 
>> > > > > >> > > > > Regards, >> > > > > Eugen >> > > > > >> > > > > >> > > >> > > >> > > >> >> >> From gchamoul at redhat.com Tue Mar 9 14:53:22 2021 From: gchamoul at redhat.com (=?utf-8?B?R2HDq2w=?= Chamoulaud) Date: Tue, 9 Mar 2021 15:53:22 +0100 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core Message-ID: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Hi TripleO Devs, David is already a key member of our team since a long time now, he provided all the needed ansible roles for the Validation Framework into tripleo-ansible-operator. He continuously provides excellent code reviews and he is a source of great ideas for the future of the Validation Framework. That's why we would highly benefit from his addition to the core reviewer team. Assuming that there are no objections, we will add David to the core team next week. Thanks, David, for your excellent work! -- Gaël Chamoulaud - (He/Him/His) .::. Red Hat .::. OpenStack .::. .::. DFG:DF Squad:VF .::. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jbuchta at redhat.com Tue Mar 9 14:56:51 2021 From: jbuchta at redhat.com (Jan Buchta) Date: Tue, 9 Mar 2021 15:56:51 +0100 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Message-ID: Not that it would technically matter, but +1. Jan Buchta Manager, Openstack Engineering Red Hat Czech, s.r.o. Purkyňova 3080/97b 61200 Brno, Czech Republic jbuchta at redhat.com T: +420-532294903 M: +420-603949107 IM: jbuchta On Tue, Mar 9, 2021 at 3:53 PM Gaël Chamoulaud wrote: > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews > and he > is a source of great ideas for the future of the Validation Framework. > That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team > next > week. > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Tue Mar 9 15:03:00 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 9 Mar 2021 20:33:00 +0530 Subject: [OSSN-0088] Some of the Glance metadef APIs likely to leak resources Message-ID: Some of the Glance metadef APIs likely to leak resources -------------------------------------------------------- ### Summary ### Metadef APIs are vulnerable and potentially leaking information to unauthorized users and also there is currently no limit on creation of metadef namespaces, objects, properties, resources and tags. This can be abused by malicious users to fill the Glance database resulting in a Denial of Service (DoS) condition. ### Affected Services / Software ### Glance ### Discussion ### There is no restriction on creation of metadef namespaces, objects, properties, resources and tags as well as it could also leak the information to unauthorized users or to the users outside of the project. 
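As an illustration of the behaviour described above, a hedged sketch against the plain Images API v2 (the token and endpoint variables are placeholders):

  # any authenticated user can list the metadef namespaces visible to it
  curl -s -H "X-Auth-Token: $TOKEN" $GLANCE_URL/v2/metadefs/namespaces

  # ...and can keep creating new ones, since no quota or limit applies
  curl -s -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"namespace": "example-namespace-1", "visibility": "public"}' \
    $GLANCE_URL/v2/metadefs/namespaces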
By taking advantage of this lack of restrictions around the metadef APIs, a single user could fill the Glance database by creating unlimited resources, resulting in a Denial of Service (DoS) style attack.

Glance does allow the metadef APIs to be controlled by policy. However, the default policy settings allow all users to create or read metadef information. Because metadef resources are not properly isolated to their owner, any use of them with potentially sensitive names (such as internal infrastructure details, customer names, etc.) could unintentionally expose that information to a malicious user.

### Recommended Actions ###

Since these fundamental issues have been present since the API was introduced, the Glance project recommends that operators disable all metadef APIs by default in their deployments. Here is an example of disabling the metadef APIs for current stable OpenStack releases, in either policy.json or policy.yaml:

---- begin example policy.json/policy.yaml snippet ----
"metadef_default": "!",
"get_metadef_namespace": "rule:metadef_default",
"get_metadef_namespaces": "rule:metadef_default",
"modify_metadef_namespace": "rule:metadef_default",
"add_metadef_namespace": "rule:metadef_default",
"get_metadef_object": "rule:metadef_default",
"get_metadef_objects": "rule:metadef_default",
"modify_metadef_object": "rule:metadef_default",
"add_metadef_object": "rule:metadef_default",
"list_metadef_resource_types": "rule:metadef_default",
"get_metadef_resource_type": "rule:metadef_default",
"add_metadef_resource_type_association": "rule:metadef_default",
"get_metadef_property": "rule:metadef_default",
"get_metadef_properties": "rule:metadef_default",
"modify_metadef_property": "rule:metadef_default",
"add_metadef_property": "rule:metadef_default",
"get_metadef_tag": "rule:metadef_default",
"get_metadef_tags": "rule:metadef_default",
"modify_metadef_tag": "rule:metadef_default",
"add_metadef_tag": "rule:metadef_default",
"add_metadef_tags": "rule:metadef_default"
---- end example policy.json/policy.yaml snippet ----

To re-enable the metadef policies for admin users only, operator(s) can make the following change in their policy.json or policy.yaml (assuming all metadef policies are configured to use rule:metadef_default as shown in the above example):

---- begin example policy.json/policy.yaml snippet ----
"metadef_default": "rule:admin",
---- end example policy.json/policy.yaml snippet ----

Operators with users that depend on the metadef APIs may choose to leave them accessible to all users. In that case, educating users about the potential for information leakage in resource names is advisable, so that vulnerable practices can be altered as mitigation.
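A quick way to support that kind of review, sketched here under the assumption that the glanceclient metadef commands are available in your release:

  # list existing namespaces to audit their names for anything sensitive
  glance md-namespace-list

  # inspect an individual namespace before deciding whether it should remain visible
  glance md-namespace-show <namespace-name>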
To re-enable metadef policies to all users, operator(s) can make a change in respective policy.json or policy.yaml as shown below; (assuming all metadef policies are configured to use rule:metadeta_default as shown in above example) ---- begin example policy.json/policy.yaml snippet ---- "metadef_default": "", ---- begin example policy.json/policy.yaml snippet ---- ### Contacts / References ### Author: Abhishek Kekane, Red Hat Author: Lance Bragstad, Red Hat This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0088 Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1545702 Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1916926 Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1916922 Mailing List : [Security] openstack-security at lists.openstack.org OpenStack Security Project : https://launchpad.net/~openstack-ossg Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Tue Mar 9 15:10:46 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 9 Mar 2021 16:10:46 +0100 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Message-ID: <7cbffe4c-17e1-37c2-c099-a0622ed218f5@redhat.com> of course +1! Even +42 On 3/9/21 3:53 PM, Gaël Chamoulaud wrote: > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews and he > is a source of great ideas for the future of the Validation Framework. That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team next > week. > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From marios at redhat.com Tue Mar 9 15:46:46 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 9 Mar 2021 17:46:46 +0200 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Message-ID: On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud wrote: > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews > and he > is a source of great ideas for the future of the Validation Framework. > That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team > next > week. > o/ Gael so it is clear and fair for everyone (e.g. 
I've been approached by others about candidates for tripleo-core) I'd like to be clear on your proposal here because I don't think we have a 'validation framework core' group in gerrit - do we? Is your proposal that David is added to the tripleo-core group [1] with the understanding that voting rights will be exercised only in the following repos: tripleo-validations, validations-common and validations-libs? thanks, marios [1] https://review.opendev.org/admin/groups/0319cee8020840a3016f46359b076fa6b6ea831a > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gchamoul at redhat.com Tue Mar 9 15:52:19 2021 From: gchamoul at redhat.com (=?utf-8?B?R2HDq2w=?= Chamoulaud) Date: Tue, 9 Mar 2021 16:52:19 +0100 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Message-ID: <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> On 09/Mar/2021 17:46, Marios Andreou wrote: > > > On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud wrote: > > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews > and he > is a source of great ideas for the future of the Validation Framework. > That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team > next > week. > > > o/ Gael > > so it is clear and fair for everyone (e.g. I've been approached by others about > candidates for tripleo-core) > > I'd like to be clear on your proposal here because I don't think we have a > 'validation framework core' group in gerrit - do we? > > Is your proposal that David is added to the tripleo-core group [1] with the > understanding that voting rights will be exercised only in the following repos: > tripleo-validations, validations-common and validations-libs? Yes exactly! Sorry for the confusion. > thanks, marios > > [1] https://review.opendev.org/admin/groups/ > 0319cee8020840a3016f46359b076fa6b6ea831a > > >   > > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud  -  (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::.     DFG:DF Squad:VF    .::. > -- Gaël Chamoulaud - (He/Him/His) .::. Red Hat .::. OpenStack .::. .::. DFG:DF Squad:VF .::. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Tue Mar 9 16:00:07 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 9 Mar 2021 18:00:07 +0200 Subject: [tripleo] Nominate David J. 
Peacock (dpeacock) for Validation Framework Core In-Reply-To: <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> Message-ID: On Tue, Mar 9, 2021 at 5:52 PM Gaël Chamoulaud wrote: > On 09/Mar/2021 17:46, Marios Andreou wrote: > > > > > > On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud > wrote: > > > > Hi TripleO Devs, > > > > David is already a key member of our team since a long time now, he > > provided all the needed ansible roles for the Validation Framework > into > > tripleo-ansible-operator. He continuously provides excellent code > reviews > > and he > > is a source of great ideas for the future of the Validation > Framework. > > That's > > why we would highly benefit from his addition to the core reviewer > team. > > > > Assuming that there are no objections, we will add David to the core > team > > next > > week. > > > > > > o/ Gael > > > > so it is clear and fair for everyone (e.g. I've been approached by > others about > > candidates for tripleo-core) > > > > I'd like to be clear on your proposal here because I don't think we have > a > > 'validation framework core' group in gerrit - do we? > > > > Is your proposal that David is added to the tripleo-core group [1] with > the > > understanding that voting rights will be exercised only in the following > repos: > > tripleo-validations, validations-common and validations-libs? > > Yes exactly! Sorry for the confusion. > ACK no problem ;) As I said we need to be transparent and fair towards everyone. +1 from me to your proposal. Being obligated to do so at PTL ;) I did a quick review of activities. I can see that David has been particularly active in Wallaby [1] but has made tripleo contributions going back to 2017 [2] - I cannot see some reason to object to the proposal! regards, marios [1] https://www.stackalytics.io/?module=tripleo-group&project_type=openstack&user_id=davidjpeacock&metric=marks&release=wallaby [2] https://review.opendev.org/q/owner:davidjpeacock > > > thanks, marios > > > > [1] https://review.opendev.org/admin/groups/ > > 0319cee8020840a3016f46359b076fa6b6ea831a > > > > > > > > > > > > Thanks, David, for your excellent work! > > > > -- > > Gaël Chamoulaud - (He/Him/His) > > .::. Red Hat .::. OpenStack .::. > > .::. DFG:DF Squad:VF .::. > > > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue Mar 9 16:10:01 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 9 Mar 2021 13:10:01 -0300 Subject: [election][cloudkitty] PTL candidacy for Xena Message-ID: Hello guys, I would like to self-nominate for the role of PTL of CloudKitty for the Xena release cycle. I am the current PTL for the Wallaby cycle and I am willing to continue in this role. I will keep working with existing contributors to maintain CloudKitty for their use cases and encourage the addition of new functionalities. Thank you for your support, -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Mar 9 16:35:07 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 9 Mar 2021 18:35:07 +0200 Subject: [TripleO] Xena PTG - registration and reserved time slots Message-ID: Hello TripleO gentle early reminder that PTG is coming soon-ish... 
12 April is not *that* far away ;) As requested by [1] I just booked some slots for us at https://ethercalc.net/oz7q0gds9zfi . I selected the same times we used for the Wallaby PTG - 1300-1700 UTC for Monday/Tuesday/Wednesday/Thursday. Obviously we are a massively distributed team so there is no good time that will suit everyone. However I think this serves well enough for the majority of our contributors? We may not use Thursday but it was pretty close last time with respect to the number of sessions requested so I booked it just in case. Please speak up if you disagree with the times and days I have booked. Finally *** REMEMBER TO REGISTER *** for the PTG at https://april2021-ptg.eventbrite.com regards, marios [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020915.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Tue Mar 9 16:49:11 2021 From: mthode at mthode.org (Matthew Thode) Date: Tue, 9 Mar 2021 10:49:11 -0600 Subject: [election][requirements] PTL candidacy for Xena Message-ID: <20210309164911.7ygs6t6uliweessf@mthode.org> Hi all, I would like to self-nominate for the role of PTL of Requirements for the Xena release cycle. I am the current PTL for the Wallaby cycle and I am willing to continue in this role. The following will be my goals for the cycle, in order of importance: 1. The primary goal is to keep a tight rein on global-requirements and upper-constraints updates. (Keep things working well) 2. Un-cap requirements where possible (stuff like prettytable). 3. Fix some automation of generate contstraints (pkg-resources being added and setuptools being dropped) 4. Audit global-requirements and upper-constraints for redundancies. One of the rules we have for new entrants to global-requirements and/or upper-constraints is that they be non-redundant. Keeping that rule in mind, audit the list of requirements for possible redundancies and if possible, reduce the number of requirements we manage. I look forward to continue working with you in this cycle, as your PTL or not. Thank you for your support, -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Tue Mar 9 16:54:07 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 9 Mar 2021 18:54:07 +0200 Subject: [TripleO] Xena PTG - registration and reserved time slots In-Reply-To: References: Message-ID: On Tue, Mar 9, 2021 at 6:35 PM Marios Andreou wrote: > Hello TripleO > > gentle early reminder that PTG is coming soon-ish... 12 April is not > *that* far away ;) > *19* April even ;) thanks > > As requested by [1] I just booked some slots for us at > https://ethercalc.net/oz7q0gds9zfi . > > I selected the same times we used for the Wallaby PTG - 1300-1700 UTC for > Monday/Tuesday/Wednesday/Thursday. Obviously we are a massively distributed > team so there is no good time that will suit everyone. However I think this > serves well enough for the majority of our contributors? We may not use > Thursday but it was pretty close last time with respect to the number of > sessions requested so I booked it just in case. > > Please speak up if you disagree with the times and days I have booked. 
> > Finally *** REMEMBER TO REGISTER *** for the PTG at > https://april2021-ptg.eventbrite.com > > regards, marios > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020915.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Mar 9 17:19:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Mar 2021 17:19:17 +0000 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> Message-ID: <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: > We got two similar release processing failures: [...] > Since this "host key verification failed" error hit tripleo-ipsec > three times in a row, I suspect we have something stuck (or we > always hit the same cache/worker). The errors happened when run on nodes in different providers. Looking at the build logs, I also notice that they fail for the stable/rocky branch but succeeded for other branches of the same repository. They're specifically erroring when trying to reach the Gerrit server over SSH, the connection details for which are encoded on the .gitreview file in each branch. This leads me to wonder whether there's something about the fact that the stable/rocky branch of tripleo-ipsec is still using the old review.openstack.org hostname to push, and maybe we're not pre-seeding an appropriate hostkey entry for that in the known_hosts file? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From whayutin at redhat.com Tue Mar 9 17:28:09 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 9 Mar 2021 10:28:09 -0700 Subject: [tripleo][ci] stein and quay.io Message-ID: Greetings, FYI... At this time the accounts used to push containers in quay.io has been locked out. We are working with quay support to resolve the issue. It's not 100% clear if there is an issue w/ our account or more generally w/ quay [2] At this time we know that the stable/stein branch in tripleo is specifically configured to pull containers from quay and has an open issue [1]. Merging patches on stable/stein is likely blocked unless we move all those jobs to non-voting at this time. Is there any interest in moving the jobs to non-voting? Please respond if this issue is blocking you. Thanks all! [1] https://bugs.launchpad.net/tripleo/+bug/1915921 [2] https://status.quay.io/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Tue Mar 9 17:28:56 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 9 Mar 2021 18:28:56 +0100 Subject: [election][qa] PTL candidacy for Xena Message-ID: Hi everyone, I would like to nominate myself for the PTL role of the QA in the Xena cycle. This would be my first PTL service, however, I have been contributing to Tempest for several years and have been a core contributor for over a year now. I'm also a core in refstack projects (python-tempestconf, refstack and refstack-client). A few things I would like to focus on in the Xena cycle: * Get Tempest scenario manager effort [2] finished - we started actively working on this ~2 cycles ago and we have made significant progress. 
After it's done we will declare the interface stable which will help to decrease the code duplicity among tempest plugins which is directly related to the maintenance difficulty. * Keep decreasing the bug counts * Decrease the number of open patches. Although many projects suffer from not enough personal resources, we can't let the contributors go out of their steam by making their patches to wait in the queue for too long. * Complete priority items from the previous cycles [1] https://www.stackalytics.com/?user_id=mkopec [2] https://etherpad.opendev.org/p/tempest-scenario-manager Thanks for your consideration! -- Martin Kopec irc: kopecmartin -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Tue Mar 9 18:34:26 2021 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 9 Mar 2021 19:34:26 +0100 Subject: [election][magnum] Xena PTL candidacy Message-ID: Hello OpenStack community, I would like to continue serving as Magnum PTL. In wallaby we mostly maintained and updated our templates for kubernetes and related addons along with the CI maintenance. In this release, our focus continues to be making the cloud operators easier with: * control-plane on operators tenant * performance improvements for API calls * cluster node replacement * finish transition to helm3 for addons See you in gerrit, Spyros Trigazis -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandy6768 at gmail.com Tue Mar 9 07:19:20 2021 From: sandy6768 at gmail.com (sandeep singh parihar) Date: Tue, 9 Mar 2021 12:49:20 +0530 Subject: openstack:How to create "admin read only " role in openstack Message-ID: Hi, can someone please help to "How to create admin read only role " in openstack? Thanks sandeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From tcr1br24 at gmail.com Tue Mar 9 14:37:00 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Tue, 9 Mar 2021 22:37:00 +0800 Subject: [ISSUE]Openstack create VNF instance ERROR Message-ID: Dear Sir, This our testbed: Openstack (stein) + Opendaylight (neon) + ovs (v2.11.1) We have trouble when creating vnf on compute node using OpenStack CLI: #openstack vnf create vnfd1 vnf1 Here are the log message and ovs information. COMPUTE NODE1: nova-compute.log [image: error1.png] [image: list1.png] [image: vsctl1.png] COMPUTE NODE 2: nova-compute.log [image: error2.png] [image: list2.png] [image: vsctl2.png] We don't use DPDK on our Openvswitch. Can anyone give us some advice on this issue? Thanks for helping us. [image: Mailtrack] Sender notified by Mailtrack 03/09/21, 10:20:09 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error1.png Type: image/png Size: 279362 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list1.png Type: image/png Size: 51268 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl1.png Type: image/png Size: 44054 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error2.png Type: image/png Size: 260121 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: list2.png Type: image/png Size: 52846 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl2.png Type: image/png Size: 35301 bytes Desc: not available URL: From lyarwood at redhat.com Tue Mar 9 22:08:18 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 9 Mar 2021 22:08:18 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI Message-ID: Hello all, I reported the following bug last week but I've yet to get any real feedback after asking a few times in irc. Running parallel iSCSI/LVM c-vol backends is causing random failures in CI https://bugs.launchpad.net/cinder/+bug/1917750 AFAICT tgtadm is causing this behaviour. As I've stated in the bug with Fedora 32 and lioadm I don't see the WWN conflict between the two backends. Does anyone know if using lioadm is an option on Focal? Thanks in advance, Lee From fungi at yuggoth.org Tue Mar 9 22:18:12 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Mar 2021 22:18:12 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: References: Message-ID: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > I reported the following bug last week but I've yet to get any real > feedback after asking a few times in irc. > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > https://bugs.launchpad.net/cinder/+bug/1917750 > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > with Fedora 32 and lioadm I don't see the WWN conflict between the two > backends. Does anyone know if using lioadm is an option on Focal? https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html seems to indicate that you just need to set it in configuration. The package that document mentions looks like a distro package recommendation (does not exist under that name on PyPI) and the equivalent lib on PyPI is included in cinder's requirements.txt file, but I don't see either mentioned in the devstack source tree so maybe that needs to be installed for DevStack-based jobs to take advantage of it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Tue Mar 9 22:19:48 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Mar 2021 22:19:48 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> References: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> Message-ID: <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > I reported the following bug last week but I've yet to get any real > > feedback after asking a few times in irc. > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > with Fedora 32 and lioadm I don't see the WWN conflict between the two > > backends. Does anyone know if using lioadm is an option on Focal? 
> > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html > seems to indicate that you just need to set it in configuration. The > package that document mentions looks like a distro package > recommendation (does not exist under that name on PyPI) and the > equivalent lib on PyPI is included in cinder's requirements.txt > file, but I don't see either mentioned in the devstack source tree > so maybe that needs to be installed for DevStack-based jobs to take > advantage of it. Oh, and this would be the package to add from Ubuntu Focal: https://packages.ubuntu.com/focal/python3-rtslib-fb -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Tue Mar 9 22:21:33 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Mar 2021 22:21:33 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> References: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> Message-ID: <20210309222133.ux5gtobgujtep3so@yuggoth.org> On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > I reported the following bug last week but I've yet to get any real > > > feedback after asking a few times in irc. > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > with Fedora 32 and lioadm I don't see the WWN conflict between the two > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html > > seems to indicate that you just need to set it in configuration. The > > package that document mentions looks like a distro package > > recommendation (does not exist under that name on PyPI) and the > > equivalent lib on PyPI is included in cinder's requirements.txt > > file, but I don't see either mentioned in the devstack source tree > > so maybe that needs to be installed for DevStack-based jobs to take > > advantage of it. > > Oh, and this would be the package to add from Ubuntu Focal: > > https://packages.ubuntu.com/focal/python3-rtslib-fb Nevermind, DevStack installs the projects with pip, so the one in cinder's requirements.txt should already be present. In that case, yeah, just set it in the config? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zigo at debian.org Tue Mar 9 22:26:15 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 9 Mar 2021 23:26:15 +0100 Subject: [election][requirements] PTL candidacy for Xena In-Reply-To: <20210309164911.7ygs6t6uliweessf@mthode.org> References: <20210309164911.7ygs6t6uliweessf@mthode.org> Message-ID: <5851aff9-56ac-f5e3-34fe-e8a1e1782380@debian.org> Matthew, My reply is not at all a comment on your candidacy, just a few remarks on the topics you've raised (and on which I agree with you). On 3/9/21 5:49 PM, Matthew Thode wrote: > 2. Un-cap requirements where possible (stuff like prettytable). 
Yeah, it's kind of amazing we're relying all our command line outputs on a release of prettytable that is from 2013! Lucky, nobody in Debian complained that it hasn't been updated... > 4. Audit global-requirements and upper-constraints for redundancies. > One of the rules we have for new entrants to global-requirements > and/or upper-constraints is that they be non-redundant. Keeping that > rule in mind, audit the list of requirements for possible redundancies > and if possible, reduce the number of requirements we manage. We're currently using: - anyjson - ujson - jsonpatch - jsonschema - jsonpath - jsonpath-rw - jsonpath-rw-ext - simplejson - jsonpointer I'm not sure what they all do, but it does feel like there's room for improvement... :) Cheers, Thomas Goirand (zigo) From lyarwood at redhat.com Tue Mar 9 22:40:08 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 9 Mar 2021 22:40:08 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <20210309222133.ux5gtobgujtep3so@yuggoth.org> References: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> <20210309222133.ux5gtobgujtep3so@yuggoth.org> Message-ID: On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > I reported the following bug last week but I've yet to get any real > > > > feedback after asking a few times in irc. > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > > with Fedora 32 and lioadm I don't see the WWN conflict between the two > > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html > > > seems to indicate that you just need to set it in configuration. The > > > package that document mentions looks like a distro package > > > recommendation (does not exist under that name on PyPI) and the > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > file, but I don't see either mentioned in the devstack source tree > > > so maybe that needs to be installed for DevStack-based jobs to take > > > advantage of it. > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > Nevermind, DevStack installs the projects with pip, so the one in > cinder's requirements.txt should already be present. In that case, > yeah, just set it in the config? Yes correct, my question was more to see if there were any known issues with using lioadm on Focal. 
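For anyone who wants to reproduce the setup locally, a minimal devstack local.conf sketch of the configuration in question (two LVM backends plus the lioadm helper; CINDER_ISCSI_HELPER is the devstack knob involved, and the cinder.conf option it maps to may differ between releases):

  [[local|localrc]]
  CINDER_ISCSI_HELPER=lioadm
  CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1,lvm:lvmdriver-2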
Anyway I've pushed the following WIP change for devstack to switch over to lioadm when using Focal: WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu https://review.opendev.org/c/openstack/devstack/+/779624 From cboylan at sapwetik.org Tue Mar 9 22:48:50 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 09 Mar 2021 14:48:50 -0800 Subject: =?UTF-8?Q?Re:_[cinder][nova]_Running_parallel_iSCSI/LVM_c-vol_backends_i?= =?UTF-8?Q?s_causing_random_failures_in_CI?= In-Reply-To: References: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> <20210309222133.ux5gtobgujtep3so@yuggoth.org> Message-ID: On Tue, Mar 9, 2021, at 2:40 PM, Lee Yarwood wrote: > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > I reported the following bug last week but I've yet to get any real > > > > > feedback after asking a few times in irc. > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between the two > > > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html > > > > seems to indicate that you just need to set it in configuration. The > > > > package that document mentions looks like a distro package > > > > recommendation (does not exist under that name on PyPI) and the > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > file, but I don't see either mentioned in the devstack source tree > > > > so maybe that needs to be installed for DevStack-based jobs to take > > > > advantage of it. > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > Nevermind, DevStack installs the projects with pip, so the one in > > cinder's requirements.txt should already be present. In that case, > > yeah, just set it in the config? > > Yes correct, my question was more to see if there were any known > issues with using lioadm on Focal. Anyway I've pushed the following > WIP change for devstack to switch over to lioadm when using Focal: > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > https://review.opendev.org/c/openstack/devstack/+/779624 https://blog.e0ne.info/post/using-openstack-cinder-with-lio-target/ says the major issue with switching in cinder has been figuring out upgrade testing of the change. I don't know what that entails or why it might be a problem though. 
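For completeness, the cinder.conf side of such a switch is only a one-line change per backend section, going by the LIO support doc linked earlier in the thread (option name assumed for recent releases; older ones used iscsi_helper):

  [lvmdriver-1]
  target_helper = lioadm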
From ltoscano at redhat.com Tue Mar 9 22:50:38 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 09 Mar 2021 23:50:38 +0100 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: References: <20210309222133.ux5gtobgujtep3so@yuggoth.org> Message-ID: <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> On Tuesday, 9 March 2021 23:40:08 CET Lee Yarwood wrote: > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > I reported the following bug last week but I've yet to get any real > > > > > feedback after asking a few times in irc. > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures > > > > > in CI > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between the > > > > > two > > > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-> > > > support.html seems to indicate that you just need to set it in > > > > configuration. The package that document mentions looks like a distro > > > > package > > > > recommendation (does not exist under that name on PyPI) and the > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > file, but I don't see either mentioned in the devstack source tree > > > > so maybe that needs to be installed for DevStack-based jobs to take > > > > advantage of it. > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > Nevermind, DevStack installs the projects with pip, so the one in > > cinder's requirements.txt should already be present. In that case, > > yeah, just set it in the config? > > Yes correct, my question was more to see if there were any known > issues with using lioadm on Focal. Anyway I've pushed the following > WIP change for devstack to switch over to lioadm when using Focal: > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > https://review.opendev.org/c/openstack/devstack/+/779624 For the record, we use the default (tgt) on the main cinder gates, as well as a lioadm job (defined in cinder-tempest-plugin). If I read the code correctly, that change would break tgt for everyone on ubuntu. Please raise this in the cinder meeting (Wednesday). 
-- Luigi From smooney at redhat.com Tue Mar 9 23:00:29 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 09 Mar 2021 23:00:29 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> References: <20210309222133.ux5gtobgujtep3so@yuggoth.org> <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> Message-ID: On Tue, 2021-03-09 at 23:50 +0100, Luigi Toscano wrote: > On Tuesday, 9 March 2021 23:40:08 CET Lee Yarwood wrote: > > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > > I reported the following bug last week but I've yet to get any real > > > > > > feedback after asking a few times in irc. > > > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures > > > > > > in CI > > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between the > > > > > > two > > > > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-> > > > support.html seems to indicate that you just need to set it in > > > > > configuration. The package that document mentions looks like a distro > > > > > package > > > > > recommendation (does not exist under that name on PyPI) and the > > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > > file, but I don't see either mentioned in the devstack source tree > > > > > so maybe that needs to be installed for DevStack-based jobs to take > > > > > advantage of it. > > > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > > > Nevermind, DevStack installs the projects with pip, so the one in > > > cinder's requirements.txt should already be present. In that case, > > > yeah, just set it in the config? > > > > Yes correct, my question was more to see if there were any known > > issues with using lioadm on Focal. Anyway I've pushed the following > > WIP change for devstack to switch over to lioadm when using Focal: > > > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > > https://review.opendev.org/c/openstack/devstack/+/779624 > > For the record, we use the default (tgt) on the main cinder gates, as well as > a lioadm job (defined in cinder-tempest-plugin). If I read the code correctly, > that change would break tgt for everyone on ubuntu. > > Please raise this in the cinder meeting (Wednesday). i dont think we can wait that long im pretty sure this is causing this error form talking to lee earlier to day http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22Unable%20to%20detach%20the%20device%20from%20the%20live%20config%5C%22%20AND%20loglevel:%20ERROR i guess it almost wednesday already but its startin to casue issues in multiple poject gates right before code freeze so we need to adress this before we end up rechecking things over and over on many patches. 
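For context, outside of devstack the same switch is made per backend in cinder.conf; a hedged sketch only (the section name is just the devstack-style example backend, and the option names are the post-Queens ones as far as I recall, when iscsi_helper/iscsi_protocol were renamed):

    [lvmdriver-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    # assumption: target_helper is the current option name; older releases used iscsi_helper
    target_helper = lioadm
    target_protocol = iscsi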
> From ltoscano at redhat.com Tue Mar 9 23:07:52 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 10 Mar 2021 00:07:52 +0100 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: References: <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> Message-ID: <7488096.MhkbZ0Pkbq@whitebase.usersys.redhat.com> On Wednesday, 10 March 2021 00:00:29 CET Sean Mooney wrote: > On Tue, 2021-03-09 at 23:50 +0100, Luigi Toscano wrote: > > On Tuesday, 9 March 2021 23:40:08 CET Lee Yarwood wrote: > > > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > > > I reported the following bug last week but I've yet to get any > > > > > > > real > > > > > > > feedback after asking a few times in irc. > > > > > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random > > > > > > > failures > > > > > > > in CI > > > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the > > > > > > > bug > > > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between > > > > > > > the > > > > > > > two > > > > > > > backends. Does anyone know if using lioadm is an option on > > > > > > > Focal? > > > > > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-is > > > > > > csi-> > > > support.html seems to indicate that you just need to > > > > > > set it in configuration. The package that document mentions looks > > > > > > like a distro package > > > > > > recommendation (does not exist under that name on PyPI) and the > > > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > > > file, but I don't see either mentioned in the devstack source tree > > > > > > so maybe that needs to be installed for DevStack-based jobs to > > > > > > take > > > > > > advantage of it. > > > > > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > > > > > Nevermind, DevStack installs the projects with pip, so the one in > > > > cinder's requirements.txt should already be present. In that case, > > > > yeah, just set it in the config? > > > > > > Yes correct, my question was more to see if there were any known > > > issues with using lioadm on Focal. Anyway I've pushed the following > > > WIP change for devstack to switch over to lioadm when using Focal: > > > > > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > > > https://review.opendev.org/c/openstack/devstack/+/779624 > > > > For the record, we use the default (tgt) on the main cinder gates, as well > > as a lioadm job (defined in cinder-tempest-plugin). If I read the code > > correctly, that change would break tgt for everyone on ubuntu. > > > > Please raise this in the cinder meeting (Wednesday). 
> > i dont think we can wait that long im pretty sure this is causing this error > form talking to lee earlier to day > http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message: > %5C%22Unable%20to%20detach%20the%20device%20from%20the%20live%20config%5C%22 > %20AND%20loglevel:%20ERROR > > i guess it almost wednesday already but its startin to casue issues in > multiple poject gates right before code freeze so we need to adress this > before we end up rechecking things over and over on many patches. But then don't error out if someone tries to use tgtadm, which is what would happen if that patch was merged (if I didn't misread the code). -- Luigi From raubvogel at gmail.com Tue Mar 9 23:19:27 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Tue, 9 Mar 2021 18:19:27 -0500 Subject: [nova?] kvm/libvirt options Message-ID: Which kvm options can I specify to an instance when I create it? I am looking at https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html and it has a handful of CPU ones (based on https://docs.openstack.org/nova/latest/configuration/config.html#libvirt), making me think the rest is some kind of openstack default which is a subset of what I can specify using, say, virsh define. From kennelson11 at gmail.com Tue Mar 9 23:52:01 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 9 Mar 2021 15:52:01 -0800 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations March 2021 End Message-ID: Hello! The PTL and TC Nomination period is now over. The official candidate lists for PTLs [0] and TC seats [1] are available on the election website. -- PTL Election Details -- There are 8 projects without candidates, so according to this resolution[2], the TC will have to decide how the following projects will proceed: Barbican, Cyborg, Keystone, Mistral, Monasca, Senlin, Zaqar, Zun -- TC Election Details -- There are 0 projects that will have elections. Now begins the campaigning period where candidates and electorate may debate their statements. Polling will start Mar 12, 2021 23:45 UTC. Thank you, -Kendall Nelson (diablo_rojo) & the Election Officials [0] https://governance.openstack.org/election/#xena-ptl-candidates [1] https://governance.openstack.org/election/#xena-tc-candidates [2] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Mar 10 00:13:44 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 9 Mar 2021 17:13:44 -0700 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Mon, Mar 8, 2021 at 12:46 AM Marios Andreou wrote: > > > On Mon, Mar 8, 2021 at 1:27 AM Wesley Hayutin wrote: > >> >> >> On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy wrote: >> >>> Hello All, >>> >>> Just a reminder that we will be starting to implement steps to migrate >>> from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. 
>>> >>> The plan is outlined in: >>> https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday >>> >>> In summary, on Sunday, we plan to: >>> - Move the master integration line for promotions to build containers >>> and images on centos-8 stream nodes >>> - Change the release files to bring down centos-8 stream repos for use >>> in test jobs (test jobs will still start on centos-8 nodes - changing this >>> nodeset will happen later) >>> - Image build and container build check jobs will be moved to >>> non-voting during this transition. >>> >> >>> We have already run all the test jobs in RDO with centos-8 stream >>> content running on centos-8 nodes to prequalify this transition. >>> >>> We will update this list with status as we go forward with next steps. >>> >>> Thanks! >>> >> >> OK... status update. >> >> Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure >> Monday wasn't a disaster upstream. TripleO master jobs have successfully >> been migrated to CentOS-8-Stream today. You should see "8-stream" now in >> /etc/yum.repos.d/tripleo-centos.* repos. >> >> > > \o/ this is fantastic! > > nice work all thanks to everyone involved for getting this done with > minimal disruption > > tripleo-ci++ > > > > > >> Your CentOS-8-Stream Master hash is: >> >> edd46672cb9b7a661ecf061942d71a72 >> >> Your master repos are: >> https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo >> >> Containers, and overcloud images should all be centos-8-stream. >> >> The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. >> >> Please open launchpad bugs w/ the "alert" tag if you are having any issues. >> >> Thanks and well done all! >> >> >> >>> >>> >>> >>> >> OK.... stable/victoria will start to migrate this evening to centos-8-stream. We are looking to promote the following [1]. Again if you hit any issues, please just file a launchpad bug w/ the "alert" tag. Thanks [1] https://trunk.rdoproject.org/api-centos8-victoria/api/civotes_agg_detail.html?ref_hash=457ea897ac3b7552b82c532adcea63f0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Mar 10 00:30:55 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 10 Mar 2021 00:30:55 +0000 Subject: [nova?] kvm/libvirt options In-Reply-To: References: Message-ID: On Tue, 2021-03-09 at 18:19 -0500, Mauricio Tavares wrote: > Which kvm options can I specify to an instance when I create it? I am > looking at https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html > and it has a handful of CPU ones (based on > https://docs.openstack.org/nova/latest/configuration/config.html#libvirt), > making me think the rest is some kind of openstack default which is a > subset of what I can specify using, say, virsh define. > You cannot set any of them directly. OpenStack is not a virtualization platform, it is an infrastructure-as-a-service cloud platform: https://docs.openstack.org/nova/latest/contributor/project-scope.html Nova provides an abstraction layer over multiple hypervisors, including libvirt, via flavors and flavor extra specs: https://docs.openstack.org/nova/latest/user/flavors.html As an operator of OpenStack you are expected to know the low-level details of your cloud, but as an end user you should not be able to tell directly which hypervisor is in use.
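As a rough illustration of the flavor extra spec mechanism described above (a hedged sketch; the flavor name is made up and the two properties are just common examples of libvirt-driver extra specs):

    # hypothetical flavor, used only for illustration
    openstack flavor create --ram 2048 --disk 20 --vcpus 2 m1.tuned
    # request dedicated CPUs and large pages; nova renders these into the libvirt domain XML
    openstack flavor set m1.tuned --property hw:cpu_policy=dedicated --property hw:mem_page_size=large

The end user only picks a flavor (and image properties); the virt driver decides what the resulting domain definition looks like.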
generally you can work it out by looking at the flavor and connecting to the instances but we do not provide a way to directly maniulate the xml. > From ignaziocassano at gmail.com Wed Mar 10 06:57:21 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 10 Mar 2021 07:57:21 +0100 Subject: [stein][neutron] gratuitous arp In-Reply-To: References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> Message-ID: Hello All, please, are there news about bug 1815989 ? On stein I modified code as suggested in the patches. I am worried when I will upgrade to train: wil this bug persist ? On which openstack version this bug is resolved ? Ignazio Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello, I tried to update to last stein packages on yum and seems this bug > still exists. > Before the yum update I patched some files as suggested and and ping to vm > worked fine. > After yum update the issue returns. > Please, let me know If I must patch files by hand or some new parameters > in configuration can solve and/or the issue is solved in newer openstack > versions. > Thanks > Ignazio > > > Il Mer 29 Apr 2020, 19:49 Sean Mooney ha scritto: > >> On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: >> > Many thanks. >> > Please keep in touch. >> here are the two patches. >> the first https://review.opendev.org/#/c/724386/ is the actual change to >> add the new config opition >> this needs a release note and some tests but it shoudl be functional >> hence the [WIP] >> i have not enable the workaround in any job in this patch so the ci run >> will assert this does not break >> anything in the default case >> >> the second patch is https://review.opendev.org/#/c/724387/ which enables >> the workaround in the multi node ci jobs >> and is testing that live migration exctra works when the workaround is >> enabled. >> >> this should work as it is what we expect to happen if you are using a >> moderne nova with an old neutron. >> its is marked [DNM] as i dont intend that patch to merge but if the >> workaround is useful we migth consider enableing >> it for one of the jobs to get ci coverage but not all of the jobs. >> >> i have not had time to deploy a 2 node env today but ill try and test >> this locally tomorow. >> >> >> >> > Ignazio >> > >> > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney < >> smooney at redhat.com> >> > ha scritto: >> > >> > > so bing pragmatic i think the simplest path forward given my other >> patches >> > > have not laned >> > > in almost 2 years is to quickly add a workaround config option to >> disable >> > > mulitple port bindign >> > > which we can backport and then we can try and work on the actual fix >> after. >> > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that >> shoudl >> > > serve as a workaround >> > > for thos that hav this issue but its a regression in functionality. >> > > >> > > i can create a patch that will do that in an hour or so and submit a >> > > followup DNM patch to enabel the >> > > workaound in one of the gate jobs that tests live migration. >> > > i have a meeting in 10 mins and need to finish the pacht im currently >> > > updating but ill submit a poc once that is done. >> > > >> > > im not sure if i will be able to spend time on the actul fix which i >> > > proposed last year but ill see what i can do. 
>> > > >> > > >> > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: >> > > > PS >> > > > I have testing environment on queens,rocky and stein and I can >> make test >> > > > as you need. >> > > > Ignazio >> > > > >> > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < >> > > > ignaziocassano at gmail.com> ha scritto: >> > > > >> > > > > Hello Sean, >> > > > > the following is the configuration on my compute nodes: >> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt >> > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 >> > > > > libvirt-libs-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 >> > > > > libvirt-client-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 >> > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 >> > > > > libvirt-python-4.5.0-1.el7.x86_64 >> > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 >> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu >> > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 >> > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 >> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 >> > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch >> > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch >> > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 >> > > > > >> > > > > >> > > > > As far as firewall driver >> > > >> > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: >> > > > > >> > > > > firewall_driver = iptables_hybrid >> > > > > >> > > > > I have same libvirt/qemu version on queens, on rocky and on stein >> > > >> > > testing >> > > > > environment and the >> > > > > same firewall driver. >> > > > > Live migration on provider network on queens works fine. >> > > > > It does not work fine on rocky and stein (vm lost connection >> after it >> > > >> > > is >> > > > > migrated and start to respond only when the vm send a network >> packet , >> > > >> > > for >> > > > > example when chrony pools the time server). >> > > > > >> > > > > Ignazio >> > > > > >> > > > > >> > > > > >> > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < >> > > >> > > smooney at redhat.com> >> > > > > ha scritto: >> > > > > >> > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: >> > > > > > > Hello, some updated about this issue. >> > > > > > > I read someone has got same issue as reported here: >> > > > > > > >> > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 >> > > > > > > >> > > > > > > If you read the discussion, someone tells that the garp must >> be >> > > >> > > sent by >> > > > > > > qemu during live miration. 
>> > > > > > > If this is true, this means on rocky/stein the qemu/libvirt >> are >> > > >> > > bugged. >> > > > > > >> > > > > > it is not correct. >> > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve >> as >> > > >> > > its mac >> > > > > > learning frames >> > > > > > instead >> > > >> > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol >> > > > > > >> https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html >> > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 >> > > > > > >> https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html >> > > > > > but was fixed by >> > > > > > >> > > >> > > >> https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b >> > > > > > can you confirm you are not using the broken 2.6.0 release and >> are >> > > >> > > using >> > > > > > 2.7 or newer or 2.4 and older. >> > > > > > >> > > > > > >> > > > > > > So I tried to use stein and rocky with the same version of >> > > >> > > libvirt/qemu >> > > > > > > packages I installed on queens (I updated compute and >> controllers >> > > >> > > node >> > > > > > >> > > > > > on >> > > > > > > queens for obtaining same libvirt/qemu version deployed on >> rocky >> > > >> > > and >> > > > > > >> > > > > > stein). >> > > > > > > >> > > > > > > On queens live migration on provider network continues to work >> > > >> > > fine. >> > > > > > > On rocky and stein not, so I think the issue is related to >> > > >> > > openstack >> > > > > > > components . >> > > > > > >> > > > > > on queens we have only a singel prot binding and nova blindly >> assumes >> > > > > > that the port binding details wont >> > > > > > change when it does a live migration and does not update the >> xml for >> > > >> > > the >> > > > > > netwrok interfaces. >> > > > > > >> > > > > > the port binding is updated after the migration is complete in >> > > > > > post_livemigration >> > > > > > in rocky+ neutron optionally uses the multiple port bindings >> flow to >> > > > > > prebind the port to the destiatnion >> > > > > > so it can update the xml if needed and if post copy live >> migration is >> > > > > > enable it will asyconsly activate teh dest port >> > > > > > binding before post_livemigration shortenting the downtime. >> > > > > > >> > > > > > if you are using the iptables firewall os-vif will have >> precreated >> > > >> > > the >> > > > > > ovs port and intermediate linux bridge before the >> > > > > > migration started which will allow neutron to wire it up (put >> it on >> > > >> > > the >> > > > > > correct vlan and install security groups) before >> > > > > > the vm completes the migraton. >> > > > > > >> > > > > > if you are using the ovs firewall os-vif still precreates teh >> ovs >> > > >> > > port >> > > > > > but libvirt deletes it and recreats it too. >> > > > > > as a result there is a race when using openvswitch firewall >> that can >> > > > > > result in the RARP packets being lost. >> > > > > > >> > > > > > > >> > > > > > > Best Regards >> > > > > > > Ignazio Cassano >> > > > > > > >> > > > > > > >> > > > > > > >> > > > > > > >> > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < >> > > > > > >> > > > > > smooney at redhat.com> >> > > > > > > ha scritto: >> > > > > > > >> > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: >> > > > > > > > > Hello, I have this problem with rocky or newer with >> > > >> > > iptables_hybrid >> > > > > > > > > firewall. 
>> > > > > > > > > So, can I solve using post copy live migration ??? >> > > > > > > > >> > > > > > > > so this behavior has always been how nova worked but rocky >> the >> > > > > > > > >> > > > > > > > >> > > > > > >> > > > > > >> > > >> > > >> https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html >> > > > > > > > spec intoduced teh ablity to shorten the outage by pre >> biding the >> > > > > > >> > > > > > port and >> > > > > > > > activating it when >> > > > > > > > the vm is resumed on the destiation host before we get to >> pos >> > > >> > > live >> > > > > > >> > > > > > migrate. >> > > > > > > > >> > > > > > > > this reduces the outage time although i cant be fully >> elimiated >> > > >> > > as >> > > > > > >> > > > > > some >> > > > > > > > level of packet loss is >> > > > > > > > always expected when you live migrate. >> > > > > > > > >> > > > > > > > so yes enabliy post copy live migration should help but be >> aware >> > > >> > > that >> > > > > > >> > > > > > if a >> > > > > > > > network partion happens >> > > > > > > > during a post copy live migration the vm will crash and >> need to >> > > >> > > be >> > > > > > > > restarted. >> > > > > > > > it is generally safe to use and will imporve the migration >> > > >> > > performace >> > > > > > >> > > > > > but >> > > > > > > > unlike pre copy migration if >> > > > > > > > the guess resumes on the dest and the mempry page has not >> been >> > > >> > > copied >> > > > > > >> > > > > > yet >> > > > > > > > then it must wait for it to be copied >> > > > > > > > and retrive it form the souce host. if the connection too >> the >> > > >> > > souce >> > > > > > >> > > > > > host >> > > > > > > > is intrupted then the vm cant >> > > > > > > > do that and the migration will fail and the instance will >> crash. >> > > >> > > if >> > > > > > >> > > > > > you >> > > > > > > > are using precopy migration >> > > > > > > > if there is a network partaion during the migration the >> > > >> > > migration will >> > > > > > > > fail but the instance will continue >> > > > > > > > to run on the source host. >> > > > > > > > >> > > > > > > > so while i would still recommend using it, i it just good >> to be >> > > >> > > aware >> > > > > > >> > > > > > of >> > > > > > > > that behavior change. >> > > > > > > > >> > > > > > > > > Thanks >> > > > > > > > > Ignazio >> > > > > > > > > >> > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney >> ha >> > > > > > >> > > > > > scritto: >> > > > > > > > > >> > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano >> wrote: >> > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm >> migrate >> > > > > > >> > > > > > from one >> > > > > > > > >> > > > > > > > node >> > > > > > > > > > > to another I cannot ping it for several minutes. If >> in the >> > > >> > > vm I >> > > > > > >> > > > > > put a >> > > > > > > > > > > script that ping the gateway continously, the live >> > > >> > > migration >> > > > > > >> > > > > > works >> > > > > > > > >> > > > > > > > fine >> > > > > > > > > > >> > > > > > > > > > and >> > > > > > > > > > > I can ping it. Why this happens ? I read something >> about >> > > > > > >> > > > > > gratuitous >> > > > > > > > >> > > > > > > > arp. >> > > > > > > > > > >> > > > > > > > > > qemu does not use gratuitous arp but instead uses an >> older >> > > > > > >> > > > > > protocal >> > > > > > > > >> > > > > > > > called >> > > > > > > > > > RARP >> > > > > > > > > > to do mac address learning. 
>> > > > > > > > > > >> > > > > > > > > > what release of openstack are you using. and are you >> using >> > > > > > >> > > > > > iptables >> > > > > > > > > > firewall of openvswitch firewall. >> > > > > > > > > > >> > > > > > > > > > if you are using openvswtich there is is nothing we can >> do >> > > >> > > until >> > > > > > >> > > > > > we >> > > > > > > > > > finally delegate vif pluging to os-vif. >> > > > > > > > > > currently libvirt handels interface plugging for kernel >> ovs >> > > >> > > when >> > > > > > >> > > > > > using >> > > > > > > > >> > > > > > > > the >> > > > > > > > > > openvswitch firewall driver >> > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress >> that >> > > >> > > but it >> > > > > > >> > > > > > and >> > > > > > > > >> > > > > > > > the >> > > > > > > > > > neutron patch are >> > > > > > > > > > https://review.opendev.org/#/c/640258 rather out dated. >> > > >> > > while >> > > > > > >> > > > > > libvirt >> > > > > > > > >> > > > > > > > is >> > > > > > > > > > pluging the vif there will always be >> > > > > > > > > > a race condition where the RARP packets sent by qemu and >> > > >> > > then mac >> > > > > > > > >> > > > > > > > learning >> > > > > > > > > > packets will be lost. >> > > > > > > > > > >> > > > > > > > > > if you are using the iptables firewall and you have >> opnestack >> > > > > > >> > > > > > rock or >> > > > > > > > > > later then if you enable post copy live migration >> > > > > > > > > > it should reduce the downtime. in this conficution we >> do not >> > > >> > > have >> > > > > > >> > > > > > the >> > > > > > > > >> > > > > > > > race >> > > > > > > > > > betwen neutron and libvirt so the rarp >> > > > > > > > > > packets should not be lost. >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > > > Please, help me ? >> > > > > > > > > > > Any workaround , please ? >> > > > > > > > > > > >> > > > > > > > > > > Best Regards >> > > > > > > > > > > Ignazio >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > >> > > > > > > > >> > > > > > >> > > > > > >> > > >> > > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From yoshito.itou.dr at hco.ntt.co.jp Wed Mar 10 07:00:17 2021 From: yoshito.itou.dr at hco.ntt.co.jp (Yoshito Ito) Date: Wed, 10 Mar 2021 16:00:17 +0900 Subject: [heat-translator] Ask new release 2.2.1 to fix Zuul jobs Message-ID: <3c44044b-487b-0e6f-1889-358e5eafb74d@hco.ntt.co.jp_1> Hi heat-translator core members, I'd like you to review the following patches [1][2], and to ask if we can release 2.2.1 with these commits to fix our Zuul jobs. In release 2.2.0, our Zuul jobs are broken [3] because of new release of tosca-parser 2.3.0, which provides strict validation of required attributes in [4]. The patch [1] fix this issue by updating our wrong test samples. The other [2] is now blocked by this issue and was better to be merged in 2.2.0. I missed the 2.2.0 release patch [5] because none of us was added as reviewers. So after merging [1] and [2], I will submit a patch to make new 2.2.1 release. 
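For reference, the post copy live migration setting discussed in the thread above is a nova compute option; a minimal sketch of enabling it on the compute nodes, assuming Rocky or later and that the kernel and QEMU on the hosts actually support post-copy:

    [libvirt]
    # allow nova/libvirt to switch a live migration to post-copy mode
    live_migration_permit_post_copy = True

As noted above, the trade-off is that if the source/destination connection drops while a post-copy migration is in flight the instance can crash, whereas with plain pre-copy it would keep running on the source host.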
[1] https://review.opendev.org/c/openstack/heat-translator/+/779642 [2] https://review.opendev.org/c/openstack/heat-translator/+/778612 [3] https://bugs.launchpad.net/heat-translator/+bug/1918360 [4] https://opendev.org/openstack/tosca-parser/commit/00d3a394d5a3bc13ed7d2f1d71affd9ab71e4318 [5] https://review.opendev.org/c/openstack/releases/+/777964 Best regards, Yoshito Ito From stig.openstack at telfer.org Wed Mar 10 08:15:18 2021 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 10 Mar 2021 08:15:18 +0000 Subject: [scientific-sig] IRC meeting today - Jupyter notebook platforms Message-ID: <6E5BE5BC-3F4C-4BC5-A52F-9D65F4E4A99B@telfer.org> Hi All - We have a Scientific SIG meeting today at 1100 UTC in channel #openstack-meeting. Everyone is welcome. Today's agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_March_10th_2021 We'll be discussing experiences and best practices for providing JupyterHub and Jupyter notebook platforms on OpenStack. Cheers, Stig From lyarwood at redhat.com Wed Mar 10 09:46:26 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 10 Mar 2021 09:46:26 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <7488096.MhkbZ0Pkbq@whitebase.usersys.redhat.com> References: <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> <7488096.MhkbZ0Pkbq@whitebase.usersys.redhat.com> Message-ID: On Tue, 9 Mar 2021 at 23:11, Luigi Toscano wrote: > On Wednesday, 10 March 2021 00:00:29 CET Sean Mooney wrote: > > On Tue, 2021-03-09 at 23:50 +0100, Luigi Toscano wrote: > > > On Tuesday, 9 March 2021 23:40:08 CET Lee Yarwood wrote: > > > > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > > > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > > > > I reported the following bug last week but I've yet to get any > > > > > > > > real > > > > > > > > feedback after asking a few times in irc. > > > > > > > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random > > > > > > > > failures > > > > > > > > in CI > > > > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the > > > > > > > > bug > > > > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between > > > > > > > > the > > > > > > > > two > > > > > > > > backends. Does anyone know if using lioadm is an option on > > > > > > > > Focal? > > > > > > > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-is > > > > > > > csi-> > > > support.html seems to indicate that you just need to > > > > > > > set it in configuration. The package that document mentions looks > > > > > > > like a distro package > > > > > > > recommendation (does not exist under that name on PyPI) and the > > > > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > > > > file, but I don't see either mentioned in the devstack source tree > > > > > > > so maybe that needs to be installed for DevStack-based jobs to > > > > > > > take > > > > > > > advantage of it. 
> > > > > > > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > > > > > > > Nevermind, DevStack installs the projects with pip, so the one in > > > > > cinder's requirements.txt should already be present. In that case, > > > > > yeah, just set it in the config? > > > > > > > > Yes correct, my question was more to see if there were any known > > > > issues with using lioadm on Focal. Anyway I've pushed the following > > > > WIP change for devstack to switch over to lioadm when using Focal: > > > > > > > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > > > > https://review.opendev.org/c/openstack/devstack/+/779624 > > > > > > For the record, we use the default (tgt) on the main cinder gates, as well > > > as a lioadm job (defined in cinder-tempest-plugin). If I read the code > > > correctly, that change would break tgt for everyone on ubuntu. > > > > > > Please raise this in the cinder meeting (Wednesday). > > > > i dont think we can wait that long im pretty sure this is causing this error > > form talking to lee earlier to day > > http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message: > > %5C%22Unable%20to%20detach%20the%20device%20from%20the%20live%20config%5C%22 > > %20AND%20loglevel:%20ERROR > > > > i guess it almost wednesday already but its startin to casue issues in > > multiple poject gates right before code freeze so we need to adress this > > before we end up rechecking things over and over on many patches. If M3 wasn't *tomorrow* I'd agree but I don't think anyone wants to change the default iSCSI target this close to feature freeze. I've also been unable to reproduce the above failure with a multi LVM/iSCSI + tgtadm setup so I'm not entirely confident about the switch to lioadm actually resolving that issue at the moment. > But then don't error out if someone tries to use tgtadm, which is what would > happen if that patch was merged (if I didn't misread the code). There's nothing stopping someone from declaring CINDER_ISCSI_HELPER=tgtadm to override the default so I'm not sure what you're suggesting. To that end I've posted the following to ensure the single host tgtadm jobs in the cinder-tempest-plugin use the correct target: Set CINDER_ISCSI_HELPER explicitly for tgtadm job https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/779697 If you know of anymore please let me know! Cheers, Lee From hberaud at redhat.com Wed Mar 10 09:52:41 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 10 Mar 2021 10:52:41 +0100 Subject: [release] Xena PTG Message-ID: Oï releasers, The PTG is fast approaching (Apr 19 - 23). To help to organize the gathering: 1) please fill the doodle[1] with the time slots that fit well for you; 2) please add your PTG topics in our etherpad[2]. Voting will be closed on the 21st of March. 
Thanks for your reading, [1] https://doodle.com/poll/8d8n2picqnhchhsv [2] https://etherpad.opendev.org/p/xena-ptg-os-relmgt -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Mar 10 10:23:48 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 10 Mar 2021 11:23:48 +0100 Subject: [neutron] Xena PTG Message-ID: <20210310102348.pcltupec4waqt6hp@p1.localdomain> Hi, If You plan to attend Neutron sessions during the Xena PTG, please fill in doodle [1] with the time slots which are good for You. Please do that before 19.03.2021 (next Friday) so I will have time to book slots which are the best for most of us. Please also add any topics You want to be discussed to the etherpad [2] [1] https://doodle.com/poll/cc2ste3emzw7ekrh?utm_source=poll&utm_medium=link [2] https://etherpad.opendev.org/p/neutron-xena-ptg -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tcr1br24 at gmail.com Wed Mar 10 08:11:06 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Wed, 10 Mar 2021 16:11:06 +0800 Subject: [ISSUE]Openstack create VNF instance error Message-ID: Dear Sir, This our testbed: Openstack (stein) + Opendaylight (neon) + ovs (v2.11.1) We have trouble when creating vnf on compute node using OpenStack CLI: #openstack vnf create vnfd1 vnf1 Here are the log message and ovs information.*COMPUTE NODE1:* nova-compute.log ============================= Attempting claim on node cmp001: memory 2048 MB, disk 15 GB, vcpus 1 CPU Total memory: 7976 MB, used: 6656.00 MB memory limit not specified, defaulting to unlimited Total disk: 96 GB, used: 45.00 GB disk limit not specified, defaulting to unlimited Total vcpu: 4 VCPU, used: 3.00 VCPU vcpu limit not specified, defaulting to unlimited Claim successful on node cmp001 Creating image ERROR vif_plug_ovs.ovsdb.impl_vsctl [req-a514ba2a-7ada-4a35-944e-0c861055112e eacae20dfeb9442eb8643bb8784b03d1 caee3c14634f42438ec8479eb49ba388 - default default] Unable to execute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp:127.0.0.1:6640', '--', '--may-exist', 'add-br', 'br-int', '--', 'set', 'Bridge', 'br-int', 'datapath_type=netdev']. Exception: Unexpected error while running command. 
Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --may-exist add-br br-int -- set Bridge br-int datapath_type=netdev Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. ERROR vif_plug_ovs.ovsdb.impl_vsctl Unable to execute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp:127.0.0.1:6640', '--', '--may-exist', 'add-port', 'br-int', 'vhu5e8b5246-cd', '--', 'set', 'Interface', 'vhu5e8b5246-cd', 'external_ids:iface-id=5e8b5246-cd05-4735-9692-29d51db6e72d', 'external_ids:iface-status=active', 'external_ids:attached-mac=fa:16:3e:8f:b1:a1', 'external_ids:vm-uuid=30675d24-bd74-409c-ba36-4c59a72283ec', 'type=dpdkvhostuser']. Exception: Unexpected error while running command. Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --may-exist add-port br-int vhu5e8b5246-cd -- set Interface vhu5e8b5246-cd external_ids:iface-id=5e8b5246-cd05-4735-9692-29d51db6e72d external_ids:iface-status=active external_ids:attached-mac=fa:16:3e:8f:b1:a1 external_ids:vm-uuid=30675d24-bd74-409c-ba36-4c59a72283ec type=dpdkvhostuser Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2021-03-09 06:30:36.594 3282 ERROR vif_plug_ovs.ovsdb.impl_vsctl [req-a514ba2a-7ada-4a35-944e-0c861055112e eacae20dfeb9442eb8643bb8784b03d1 caee3c14634f42438ec8479eb49ba388 - default default] Unable to execute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp:127.0.0.1:6640', '--', '--columns=mtu_request', 'list', 'Interface']. Exception: Unexpected error while running command. Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --columns=mtu_request list Interface Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2021-03-09 06:30:36.594 3282 ERROR os_vif [req-a514ba2a-7ada-4a35-944e-0c861055112e eacae20dfeb9442eb8643bb8784b03d1 caee3c14634f42438ec8479eb49ba388 - default default] Failed to plug vif VIFVHostUser(active=False,address=fa:16:3e:8f:b1:a1,has_traffic_filtering=False,id=5e8b5246-cd05-4735-9692-29d51db6e72d,mode='client',network=Network(6be19cbf-c6a7-4735-a0ee-42e1c5a664e2),path='/var/run/openvswitch/vhu5e8b5246-cd',plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='vhu5e8b5246-cd'): oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 
Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --columns=mtu_request list Interface Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n' ============================= ovs information: =========================== _uuid : 4a56f11f-f839-4032-ba85-2eec47260144 bridges : [1a600d85-9849-459a-8453-9a28eee63dc4, f21a1111-3c01-4988-99b7-ca5ab4a73051] cur_cfg : 653 datapath_types : [netdev, system] db_version : "7.16.1" dpdk_initialized : false dpdk_version : none external_ids : {hostname="cmp001", "odl_os_hostconfig_config_odl_l2"="{\"allowed_network_types\": [\"local\", \"flat\", \"vlan\", \"vxlan\", \"gre\"], \"bridge_mappings\": {}, \"datapath_type\": \"netdev\", \"supported_vnic_types\": [{\"vif_details\": {\"uuid\": \"4a56f11f-f839-4032-ba85-2eec47260144\", \"host_addresses\": [\"cmp001\"], \"has_datapath_type_netdev\": true, \"support_vhost_user\": true, \"port_prefix\": \"vhu\", \"vhostuser_socket_dir\": \"/var/run/openvswitch\", \"vhostuser_ovs_plug\": true, \"vhostuser_mode\": \"client\", \"vhostuser_socket\": \"/var/run/openvswitch/vhu$PORT_ID\"}, \"vif_type\": \"vhostuser\", \"vnic_type\": \"normal\"}]}", odl_os_hostconfig_hostid="cmp001", rundir="/var/run/openvswitch", system-id="ec007ae5-4b61-4f8e-af2b-b1b97413d20d"} iface_types : [erspan, geneve, gre, internal, "ip6erspan", "ip6gre", lisp, patch, stt, system, tap, vxlan] manager_options : [11b3ada9-d9b3-42bb-97fc-d254364acecd] next_cfg : 653 other_config : {local_ip="10.1.0.5", provider_mappings="physnet1:br-floating"} ovs_version : "2.11.1" ssl : [] statistics : {} system_type : ubuntu system_version : "18.04" =========================== # ovs-vsctl get bridge br-int datapath_type system =========================== *COMPUTE NODE 2:* nova-compute.log =========================== ERROR nova.compute.manager Failed to build and run instance: libvirt.libvirtError: internal error: process exited while connecting to monitor: qemu-system-x86_64: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhub701eeae-c0: Failed to connect socket /var/run/openvswitch/vhub701eeae-c0: No such file or directory Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2353, in _build_and_run_instance block_device_info=block_device_info) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3204, in spawn destroy_disks_on_failure=True) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 5724, in _create_domain_and_network destroy_disks_on_failure) File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ self.force_reraise() File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise six.reraise(self.type_, self.value, self.tb) File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise raise value File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 5693, in _create_domain_and_network post_xml_callback=post_xml_callback) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 5627, in _create_domain guest.launch(pause=pause) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 144, in launch self._encoded_xml, errors='ignore') File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ self.force_reraise() File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise six.reraise(self.type_, self.value, self.tb) 
File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise raise value File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 139, in launch return self._domain.createWithFlags(flags) File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 190, in doit result = proxy_call(self._autowrap, f, *args, **kwargs) File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 148, in proxy_call rv = execute(f, *args, **kwargs) File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 129, in execute six.reraise(c, e, tb) File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise raise value File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 83, in tworker rv = meth(*args, **kwargs) File "/usr/lib/python3/dist-packages/libvirt.py", line 1110, in createWithFlags if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) libvirt.libvirtError: internal error: process exited while connecting to monitor: 2021-03-10T07:59:04.064555Z qemu-system-x86_64: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhub701eeae-c0: Failed to connect socket /var/run/openvswitch/vhub701eeae-c0: No such file or directory INFO nova.compute.manager Took 0.68 seconds to deallocate network for instance. INFO nova.scheduler.client.report Deleted allocation for instance 308de941-1192-4dbd-8c4d-d7860139d6d4 =========================== ovs information: =========================== _uuid : b8f5b869-e794-4052-8896-3ee5d323a0b7 bridges : [5b6ab0c7-d8f1-43b0-9ab7-1e77280cd3ff, 70810190-6796-4500-bda4-dae856bcede1] cur_cfg : 1267 datapath_types : [netdev, system] db_version : "7.16.1" dpdk_initialized : false dpdk_version : none external_ids : {hostname="cmp002", "odl_os_hostconfig_config_odl_l2"="{\"allowed_network_types\": [\"local\", \"flat\", \"vlan\", \"vxlan\", \"gre\"], \"bridge_mappings\": {}, \"datapath_type\": \"netdev\", \"supported_vnic_types\": [{\"vif_details\": {\"uuid\": \"b8f5b869-e794-4052-8896-3ee5d323a0b7\", \"host_addresses\": [\"cmp002\"], \"has_datapath_type_netdev\": true, \"support_vhost_user\": true, \"port_prefix\": \"vhu\", \"vhostuser_socket_dir\": \"/var/run/openvswitch\", \"vhostuser_ovs_plug\": true, \"vhostuser_mode\": \"client\", \"vhostuser_socket\": \"/var/run/openvswitch/vhu$PORT_ID\"}, \"vif_type\": \"vhostuser\", \"vnic_type\": \"normal\"}]}", odl_os_hostconfig_hostid="cmp002", rundir="/var/run/openvswitch", system-id="dddc6d6c-0e7c-4813-97c9-7c72eaf9f46b"} iface_types : [erspan, geneve, gre, internal, "ip6erspan", "ip6gre", lisp, patch, stt, system, tap, vxlan] manager_options : [29b4ef4b-e2ba-43f3-aa42-b1bacaf9dd19, 75ae95f6-d251-4152-9e0d-5b65e5f7bb77] next_cfg : 1267 other_config : {local_ip="10.1.0.6", provider_mappings="physnet1:br-floating"} ovs_version : "2.11.1" ssl : [] statistics : {} system_type : ubuntu system_version : "18.04" =========================== # ovs-vsctl get bridge br-int datapath_type netdev =========================== We don't use DPDK on our Openvswitch. Can anyone give us some advice on this issue? Thanks for helping us. [image: Mailtrack] Sender notified by Mailtrack 03/10/21, 04:09:46 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Wed Mar 10 08:33:39 2021 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 10 Mar 2021 09:33:39 +0100 Subject: [ISSUE]Openstack create VNF instance ERROR In-Reply-To: References: Message-ID: Hi, Based on the screenshots ovsdb and libvirt seems to be unable to connect to ovsdb. 
I would check the ovsdb connection configuration (like ovsdb_connection for neutron), and whether it is really possible to use those addresses to connect to ovsdb manually. Not sure if OpenStack Stein is supported/tested together with Neon ODL; I can't find the page where these combinations are listed. Regards Lajos (lajoskatona) Jhen-Hao Yu wrote (on Tue, 9 March 2021 at 22:56): > Dear Sir, > > This our testbed: > Openstack (stein) + Opendaylight (neon) + ovs (v2.11.1) > > We have trouble when creating vnf on compute node using OpenStack CLI: > #openstack vnf create vnfd1 vnf1 > > Here are the log message and ovs information. > COMPUTE NODE1: > nova-compute.log > [image: error1.png] > [image: list1.png] > [image: vsctl1.png] > > COMPUTE NODE 2: > nova-compute.log > [image: error2.png] > [image: list2.png] > [image: vsctl2.png] > > We don't use DPDK on our Openvswitch. > Can anyone give us some advice on this issue? > > Thanks for helping us. > > [image: Mailtrack] > Sender > notified by > Mailtrack > 03/09/21, > 10:20:09 PM > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error1.png Type: image/png Size: 279362 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list1.png Type: image/png Size: 51268 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl1.png Type: image/png Size: 44054 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error2.png Type: image/png Size: 260121 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list2.png Type: image/png Size: 52846 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl2.png Type: image/png Size: 35301 bytes Desc: not available URL:
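One way to do the manual check suggested above (a hedged sketch; adjust the address to whatever ovsdb_connection and the os-vif error actually point at) is to aim ovs-vsctl at the TCP endpoint directly from the compute node:

    # lists the bridges if ovsdb-server is reachable on that socket
    ovs-vsctl --db=tcp:127.0.0.1:6640 list-br
    # shows which managers/remotes ovsdb-server is currently configured with
    ovs-vsctl get-manager

If the first command fails with the same "connection refused" seen in the nova-compute log, ovsdb-server is simply not listening on that port and the manager/remote configuration (or the ovsdb_connection setting) needs to be lined up first.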
> I would check if configuration for ovsdb connection (like ovsdb_connection > for neutron), and it is > really possible to use those addresses to connect manually to ovsdb. > Not sure if Openstack stein supported/tested together with neon ODL, I > can't find the page where > these are listed. > > Regards > Lajos (lajoskatona) > > Jhen-Hao Yu ezt írta (időpont: 2021. márc. 9., K, > 22:56): > >> Dear Sir, >> >> This our testbed: >> Openstack (stein) + Opendaylight (neon) + ovs (v2.11.1) >> >> We have trouble when creating vnf on compute node using OpenStack CLI: >> #openstack vnf create vnfd1 vnf1 >> >> Here are the log message and ovs information. >> COMPUTE NODE1: >> nova-compute.log >> [image: error1.png] >> [image: list1.png] >> [image: vsctl1.png] >> >> COMPUTE NODE 2: >> nova-compute.log >> [image: error2.png] >> [image: list2.png] >> [image: vsctl2.png] >> >> We don't use DPDK on our Openvswitch. >> Can anyone give us some advice on this issue? >> >> Thanks for helping us. >> >> [image: Mailtrack] >> Sender >> notified by >> Mailtrack >> 03/09/21, >> 10:20:09 PM >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error1.png Type: image/png Size: 279362 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list1.png Type: image/png Size: 51268 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl1.png Type: image/png Size: 44054 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error2.png Type: image/png Size: 260121 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list2.png Type: image/png Size: 52846 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl2.png Type: image/png Size: 35301 bytes Desc: not available URL: From christian.rohmann at inovex.de Wed Mar 10 14:21:27 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Wed, 10 Mar 2021 15:21:27 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: <095A8B69-9EDC-4AF7-90AC-D16B0C484361@gmail.com> References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de> <5C651C9C-0D00-4CB8-9992-4AC23D92FE38@gmail.com> <2a893395-8af6-5fdf-cf5f-303b8bb1394b@inovex.de> <095A8B69-9EDC-4AF7-90AC-D16B0C484361@gmail.com> Message-ID: <2ca7b032-dc3a-2648-a08c-1df8cc7c0542@inovex.de> Hey Artem, On 09/03/2021 14:23, Artem Goncharov wrote: > This is just a tiny subset of OpenStack resources with no possible flexibility vs native implementation in SDK/CLI I totally agree. I was just saying that by not having a cleanup approach provided by OpenStack there will just be more and more tools popping up, each solving the same issue ... and never being tested and updated along with OpenStack releases and new project / resources being added. So thanks again for your work on https://review.opendev.org/c/openstack/python-openstackclient/+/734485 ! 
Regards Christian From smooney at redhat.com Wed Mar 10 14:42:38 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 10 Mar 2021 14:42:38 +0000 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? In-Reply-To: <4299490.gzMO86gykG@p1> References: <2609049.HbdjPCY3gI@p1> <4299490.gzMO86gykG@p1> Message-ID: <41fa3457970dcd126798f90e163aebbcde3f14c0.camel@redhat.com> ok since there appear to be no stong objection form neutron we proably should deprecate the policy in nova then so we can either remove it next cycle or change its default value melanie are your plannign to propose a patch to do that? On Sat, 2021-03-06 at 17:00 +0100, Slawek Kaplonski wrote: > Hi, > > Dnia sobota, 6 marca 2021 13:53:02 CET Sean Mooney pisze: > > On Sat, 2021-03-06 at 08:37 +0100, Slawek Kaplonski wrote: > > > Hi, > > > > > > Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: > > > > Hello all, > > > > > > > > I'm seeking input from the neutron and nova teams regarding policy > > > > enforcement for allowing attachment to external networks. Details below. > > > > > > > > Recently we've been looking at an issue that was reported quite a long > > > > time ago (2017) [1] where we have a policy check in nova-compute that > > > > controls whether to allow users to attach an external network to their > > > > instances. > > > > > > > > This has historically been a pain point for operators as (1) it goes > > > > against convention of having policy checks in nova-api only and (2) > > > > setting the policy to anything other than the default requires deploying > > > > a policy file change to all of the compute hosts in the deployment. > > > > > > > > The launchpad bug report mentions neutron refactoring work that was > > > > happening at the time, which was thought might make the > > > > 'network:attach_external_network' policy check on the nova side > > > > redundant. > > > > > > > > Years have passed since then and customers are still running into this > > > > problem, so we are thinking, can this policy check be removed on the > > > > nova-compute side now? > > > > > > > > I did a local test with devstack to verify what the behavior is if we > > > > were to remove the 'network:attach_external_network' policy check > > > > entirely [2] and found that neutron appears to properly enforce > > > > permission to attach to external networks itself. It appears that the > > > > enforcement on the neutron side makes the nova policy check redundant. > > > > > > > > When I tried to boot an instance to attach to an external network, > > > > neutron API returned the following: > > > > > > > > INFO neutron.pecan_wsgi.hooks.translation > > > > [req-58fdb103-cd20-48c9-b73b-c9074061998c > > > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client > > > > error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create > > > > port on this network > > > > [...] > > > > INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c > > > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST > > > > /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 > > > > > > I just checked in Neutron code and we don't have any policy rule related > > > directly to the creation of ports on the external network. > > > Probably what You had there is the fact that Your router:external network > > > was owned by other tenant and due to that You wasn't able to create port > > > directly on it. 
If as an admin You would create external network which > > > would belong to Your tenant, You would be allowed to create port there. > > > > > > > Can anyone from the neutron team confirm whether it would be OK for us > > > > to remove our nova-compute policy check for external network attach > > > > permission and let neutron take care of the check? > > > > > > I don't know exactly the reasons why it is forbiden on Nova's side but TBH > > > I don't see any reason why we should forbid pluging instances directly to > > > the network marked as router:external=True. > > > > i have listed the majority of my consers in > > https://bugzilla.redhat.com/show_bug.cgi?id=1933047#c6 which is one of the > > downstream bug related to this. > > there are a number of issue sthat i was concerd about but tl;dr > > - booting ip form external network consumes ip from the floating ip subnet > > withtout using quota - by default neutron upstream and downstream is > > configured to provide nova metadata api access via the neutron router not > > the dhcp server so by default the metadata api will not work with external > > network. that would require neueton to be configre to use the dhcp server > > for metadta or config driver or else insance wont get ssh keys ingject by > > cloud init. > > - there might be security considertaions. typeically external networks are > > vlan or flat networks and in some cases operators may not want tenats to be > > able to boot on such networks expsially with vnic-type=driect-physical > > since that might allow them to violate tenant isolation if the top of rack > > switch was not configured by a heracical port binding driver to provide > > adiquite isolation in that case. this is not so much because this is an > > external network and more a concern anytime you do PF passtough but there > > may be other implication to allowing this by default. that said if neutron > > has a way to express policy in this regard nova does not have too. > > Those are all valid points, true. But TBH, if administrator created such > network as pool of FIPs for the user, then users will not be able to plug vms > directly to that network as they aren't owners of the network so neutron will > forbid that. > > > > > router:external=True is really used to mark a network as providing > > connectivity such that it can be used for the gateway port of neutron > > routers. the workaroud that i have come up with currently is to mark the > > network as shared and then use neturon rbac to only share it with the teant > > that owns it. > > > > i assigning external network to speficic tenat being useful when you want to > > provde a specific ip allocation pool to them or just a set of ips. i > > understand that the current motivation for this request is commign form > > some edge deployments. in general i dont thinkthis would be widely used but > > for those that need its better ux then marking it as shared. > > > > > > And on the nova side, I assume we would need a deprecation cycle before > > > > removing the 'network:attach_external_network' policy. If we can get > > > > confirmation from the neutron team, is anyone opposed to the idea of > > > > deprecating the 'network:attach_external_network' policy in the Wallaby > > > > cycle, to be removed in the Xena release? > > > > > > > > I would appreciate your thoughts. 
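For reference, the RBAC-based workaround mentioned above (sharing an external network with a single tenant rather than marking it globally shared) would look roughly like this; the project and network IDs are placeholders:

  # grant one project access to a network it does not own
  openstack network rbac create --target-project <project-id> \
      --action access_as_shared --type network <network-id>

Neutron also offers an access_as_external action for the case where the project should only consume the network as an external network (router gateway, floating IPs) rather than plug ports into it directly.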
> > > > > > > > Cheers, > > > > -melanie > > > > > > > > [1] https://bugs.launchpad.net/nova/+bug/1675486 > > > > [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 > > From akekane at redhat.com Wed Mar 10 14:47:39 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 10 Mar 2021 20:17:39 +0530 Subject: [glance] Xena PTG (Apr 19 - 23) Message-ID: Hello All, Greetings!!! Xena PTG is announced and if you haven't already registered, please do so as soon as possible [1]. I have created a Virtual PTG planning etherpad [2]. Please add your PTG topic in the etherpad [2]. If you feel your topic needs cross project cooperation please note that in the etherpad which other teams are needed. I am planning to reserve time slots for the PTG between 1400 UTC to 1700 UTC. Please let me know if you have any concerns or suggestions with the given time slots. [1] https://www.openstack.org/ptg/ [2] https://etherpad.opendev.org/p/xena-ptg-glance-planning Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From foundjem at ieee.org Wed Mar 10 15:51:57 2021 From: foundjem at ieee.org (Armstrong Foundjem) Date: Wed, 10 Mar 2021 10:51:57 -0500 Subject: [release][heat][ironic][requirements][swift][OpenStackSDK] Cycle With Intermediary Unreleased Deliverables References: <8FD9C7CB-96FE-4970-B290-432ABCCC8FCF.ref@ieee.org> Message-ID: <8FD9C7CB-96FE-4970-B290-432ABCCC8FCF@ieee.org> Hello! Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Wallaby cycle: heat-agents ironic-prometheus-exporter ironic-ui ovn-octavia-provider python-openstackclient requirements swift Those should be released ASAP, and in all cases before 22 March, 2021, so that we have a release to include in the final Wallaby release. - Armstrong Foundjem (armstrong) -------------- next part -------------- An HTML attachment was scrubbed... URL: From moreira.belmiro.email.lists at gmail.com Wed Mar 10 17:06:41 2021 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Wed, 10 Mar 2021 18:06:41 +0100 Subject: [largescale-sig] Next meeting: March 10, 15utc In-Reply-To: References: Message-ID: Hi, we had the Large Scale SIG meeting today. Meeting logs are available at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-03-10-15.00.log.html We discussed topics for a new video meeting in 2 weeks. Details will be sent later. regards, Belmiro On Mon, Mar 8, 2021 at 12:43 PM Thierry Carrez wrote: > Hi everyone, > > Our next Large Scale SIG meeting will be this Wednesday in > #openstack-meeting-3 on IRC, at 15UTC. You can doublecheck how it > translates locally at: > > https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210310T15 > > Belmiro Moreira will chair this meeting. A number of topics have already > been added to the agenda, including discussing CentOS Stream, reflecting > on last video meeting and pick a topic for the next one. > > Feel free to add other topics to our agenda at: > https://etherpad.openstack.org/p/large-scale-sig-meeting > > Regards, > > -- > Thierry Carrez > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Wed Mar 10 17:36:44 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 10 Mar 2021 10:36:44 -0700 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Tue, Mar 9, 2021 at 5:13 PM Wesley Hayutin wrote: > > > On Mon, Mar 8, 2021 at 12:46 AM Marios Andreou wrote: > >> >> >> On Mon, Mar 8, 2021 at 1:27 AM Wesley Hayutin >> wrote: >> >>> >>> >>> On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy wrote: >>> >>>> Hello All, >>>> >>>> Just a reminder that we will be starting to implement steps to migrate >>>> from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. >>>> >>>> The plan is outlined in: >>>> https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday >>>> >>>> In summary, on Sunday, we plan to: >>>> - Move the master integration line for promotions to build containers >>>> and images on centos-8 stream nodes >>>> - Change the release files to bring down centos-8 stream repos for use >>>> in test jobs (test jobs will still start on centos-8 nodes - changing this >>>> nodeset will happen later) >>>> - Image build and container build check jobs will be moved to >>>> non-voting during this transition. >>>> >>> >>>> We have already run all the test jobs in RDO with centos-8 stream >>>> content running on centos-8 nodes to prequalify this transition. >>>> >>>> We will update this list with status as we go forward with next steps. >>>> >>>> Thanks! >>>> >>> >>> OK... status update. >>> >>> Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure >>> Monday wasn't a disaster upstream. TripleO master jobs have successfully >>> been migrated to CentOS-8-Stream today. You should see "8-stream" now in >>> /etc/yum.repos.d/tripleo-centos.* repos. >>> >>> >> >> \o/ this is fantastic! >> >> nice work all thanks to everyone involved for getting this done with >> minimal disruption >> >> tripleo-ci++ >> >> >> >> >> >>> Your CentOS-8-Stream Master hash is: >>> >>> edd46672cb9b7a661ecf061942d71a72 >>> >>> Your master repos are: >>> https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo >>> >>> Containers, and overcloud images should all be centos-8-stream. >>> >>> The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. >>> >>> Please open launchpad bugs w/ the "alert" tag if you are having any issues. >>> >>> Thanks and well done all! >>> >>> >>> >>>> >>>> >>>> >>>> >>> > OK.... stable/victoria will start to migrate this evening to > centos-8-stream > > We are looking to promote the following [1]. Again if you hit any issues, > please just file a launchpad bug w/ the "alert" tag. > > Thanks > > > [1] > https://trunk.rdoproject.org/api-centos8-victoria/api/civotes_agg_detail.html?ref_hash=457ea897ac3b7552b82c532adcea63f0 > > OK... stable/victoria is now on centos-8-stream. Holler via launchpad if you hit something... now we're working on stable/ussuri :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Mar 10 17:41:28 2021 From: melwittt at gmail.com (melanie witt) Date: Wed, 10 Mar 2021 09:41:28 -0800 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? 
In-Reply-To: <41fa3457970dcd126798f90e163aebbcde3f14c0.camel@redhat.com> References: <2609049.HbdjPCY3gI@p1> <4299490.gzMO86gykG@p1> <41fa3457970dcd126798f90e163aebbcde3f14c0.camel@redhat.com> Message-ID: <5ebd4340-16b4-5e0c-cd35-f70bbf000cbc@gmail.com> On 3/10/21 06:42, Sean Mooney wrote: > ok since there appear to be no stong objection form neutron > we proably should deprecate the policy in nova then so we can either remove it next cycle > or change its default value melanie are your plannign to propose a patch to do that? Sorry I hadn't replied again yet but I wanted to check on the history of network:attach_external_network policy before going ahead. I found the commit that introduced restrictions on the nova side [3], the commit message reads: "Require admin context for interfaces on ext network Currently any user can attach an interface to a neutron external network, if the neutron plugin supports the port binding extension. In this case, nova will create neutron ports using the admin client, thus bypassing neutron authZ checks for creating ports on external networks. This patch adds a check in nova to verify the API request has an admin context when a request for an interface is made on a neutron external network." and that was converted into a policy check not long afterward [4]. The restriction was added to fix a bug [5] where users could get into a situation where they've been allowed to attach to an external network and then were unable to delete their instance later because the port create was done as admin and the port delete was done as the user. I'm not quite sure if years later, we're in the clear now with regard to the idea of removing this policy check. I will be looking through the code to see if I can figure out whether [5] could become a problem again or if things have changed in a way that it cannot be a problem. I'll go ahead and send this now while I look so that everyone can lend their thoughts on ^ in the meantime. Cheers, -melanie [3] https://github.com/openstack/nova/commit/7d1b4117fda7709307a35e56625cfa7709a6b795 [4] https://github.com/openstack/nova/commit/674954f731bf4b66356fadaa5baaeb58279c5832 [5] https://bugs.launchpad.net/nova/+bug/1284718 > On Sat, 2021-03-06 at 17:00 +0100, Slawek Kaplonski wrote: >> Hi, >> >> Dnia sobota, 6 marca 2021 13:53:02 CET Sean Mooney pisze: >>> On Sat, 2021-03-06 at 08:37 +0100, Slawek Kaplonski wrote: >>>> Hi, >>>> >>>> Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: >>>>> Hello all, >>>>> >>>>> I'm seeking input from the neutron and nova teams regarding policy >>>>> enforcement for allowing attachment to external networks. Details below. >>>>> >>>>> Recently we've been looking at an issue that was reported quite a long >>>>> time ago (2017) [1] where we have a policy check in nova-compute that >>>>> controls whether to allow users to attach an external network to their >>>>> instances. >>>>> >>>>> This has historically been a pain point for operators as (1) it goes >>>>> against convention of having policy checks in nova-api only and (2) >>>>> setting the policy to anything other than the default requires deploying >>>>> a policy file change to all of the compute hosts in the deployment. >>>>> >>>>> The launchpad bug report mentions neutron refactoring work that was >>>>> happening at the time, which was thought might make the >>>>> 'network:attach_external_network' policy check on the nova side >>>>> redundant. 
>>>>> >>>>> Years have passed since then and customers are still running into this >>>>> problem, so we are thinking, can this policy check be removed on the >>>>> nova-compute side now? >>>>> >>>>> I did a local test with devstack to verify what the behavior is if we >>>>> were to remove the 'network:attach_external_network' policy check >>>>> entirely [2] and found that neutron appears to properly enforce >>>>> permission to attach to external networks itself. It appears that the >>>>> enforcement on the neutron side makes the nova policy check redundant. >>>>> >>>>> When I tried to boot an instance to attach to an external network, >>>>> neutron API returned the following: >>>>> >>>>> INFO neutron.pecan_wsgi.hooks.translation >>>>> [req-58fdb103-cd20-48c9-b73b-c9074061998c >>>>> req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client >>>>> error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create >>>>> port on this network >>>>> [...] >>>>> INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c >>>>> req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST >>>>> /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 >>>> >>>> I just checked in Neutron code and we don't have any policy rule related >>>> directly to the creation of ports on the external network. >>>> Probably what You had there is the fact that Your router:external network >>>> was owned by other tenant and due to that You wasn't able to create port >>>> directly on it. If as an admin You would create external network which >>>> would belong to Your tenant, You would be allowed to create port there. >>>> >>>>> Can anyone from the neutron team confirm whether it would be OK for us >>>>> to remove our nova-compute policy check for external network attach >>>>> permission and let neutron take care of the check? >>>> >>>> I don't know exactly the reasons why it is forbiden on Nova's side but TBH >>>> I don't see any reason why we should forbid pluging instances directly to >>>> the network marked as router:external=True. >>> >>> i have listed the majority of my consers in >>> https://bugzilla.redhat.com/show_bug.cgi?id=1933047#c6 which is one of the >>> downstream bug related to this. >>> there are a number of issue sthat i was concerd about but tl;dr >>> - booting ip form external network consumes ip from the floating ip subnet >>> withtout using quota - by default neutron upstream and downstream is >>> configured to provide nova metadata api access via the neutron router not >>> the dhcp server so by default the metadata api will not work with external >>> network. that would require neueton to be configre to use the dhcp server >>> for metadta or config driver or else insance wont get ssh keys ingject by >>> cloud init. >>> - there might be security considertaions. typeically external networks are >>> vlan or flat networks and in some cases operators may not want tenats to be >>> able to boot on such networks expsially with vnic-type=driect-physical >>> since that might allow them to violate tenant isolation if the top of rack >>> switch was not configured by a heracical port binding driver to provide >>> adiquite isolation in that case. this is not so much because this is an >>> external network and more a concern anytime you do PF passtough but there >>> may be other implication to allowing this by default. that said if neutron >>> has a way to express policy in this regard nova does not have too. >> >> Those are all valid points, true. 
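As a side note on the metadata concern quoted above, the usual knobs for serving metadata without going through a neutron router are roughly these (option and flag names are from the standard neutron and nova tooling; treat this as a sketch, not a recipe):

  # dhcp_agent.ini - serve metadata from the DHCP agent namespace
  [DEFAULT]
  enable_isolated_metadata = True
  force_metadata = True

  # or fall back to a config drive per instance
  openstack server create --config-drive True --image <image> --flavor <flavor> <name>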
But TBH, if administrator created such >> network as pool of FIPs for the user, then users will not be able to plug vms >> directly to that network as they aren't owners of the network so neutron will >> forbid that. >> >>> >>> router:external=True is really used to mark a network as providing >>> connectivity such that it can be used for the gateway port of neutron >>> routers. the workaroud that i have come up with currently is to mark the >>> network as shared and then use neturon rbac to only share it with the teant >>> that owns it. >>> >>> i assigning external network to speficic tenat being useful when you want to >>> provde a specific ip allocation pool to them or just a set of ips. i >>> understand that the current motivation for this request is commign form >>> some edge deployments. in general i dont thinkthis would be widely used but >>> for those that need its better ux then marking it as shared. >>> >>>>> And on the nova side, I assume we would need a deprecation cycle before >>>>> removing the 'network:attach_external_network' policy. If we can get >>>>> confirmation from the neutron team, is anyone opposed to the idea of >>>>> deprecating the 'network:attach_external_network' policy in the Wallaby >>>>> cycle, to be removed in the Xena release? >>>>> >>>>> I would appreciate your thoughts. >>>>> >>>>> Cheers, >>>>> -melanie >>>>> >>>>> [1] https://bugs.launchpad.net/nova/+bug/1675486 >>>>> [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 >> >> > > > From anlin.kong at gmail.com Wed Mar 10 19:51:53 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 11 Mar 2021 08:51:53 +1300 Subject: Question regarding the production status of DB2 in Trove In-Reply-To: References: Message-ID: Hi Bekir, DB2 datastore driver has not been maintained for a long time, to make it work: 1. We need some volunteer. 2. The drive needs to be refactored. 3. CI job needs to be added for the datastore. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Thu, Mar 11, 2021 at 3:22 AM Bekir Fajkovic < bekir.fajkovic at citynetwork.eu> wrote: > Hello! > > One question regarding the development status of DB2 inside Trove. I see > that DB2 is still in experimental > phase, when could it be expected for that datastore type to get production > ready status? > > Best Regards. > > *Bekir Fajkovic* > Senior DBA > Mobile: +46 70 019 48 47 > > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Mar 10 20:37:13 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 10 Mar 2021 15:37:13 -0500 Subject: [cinder] review priorities for the next few days Message-ID: <16bfaaa8-5761-eb83-efda-cf5030b10a49@gmail.com> Here's the list of cinder and driver features that haven't yet merged: https://etherpad.opendev.org/p/cinder-wallaby-features Please make reviewing these your top priorities. We'll get back to reviewing bug fixes next week. cheers, brian From dmendiza at redhat.com Wed Mar 10 21:47:55 2021 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Wed, 10 Mar 2021 15:47:55 -0600 Subject: [PTLs][All] vPTG April 2021 Team Signup In-Reply-To: References: Message-ID: <50af2372-3f3f-e549-f0e9-295b03b8b1e3@redhat.com> On 3/8/21 12:08 PM, Kendall Nelson wrote: > Greetings! 
> > As you hopefully already know, our next PTG will be virtual again, and > held from Monday, April 19 to Friday, April 23. We will have the same > schedule set up available as last time with three windows of time spread > across the day to cover all timezones with breaks in between. > > *To signup your team, you must complete **_BOTH_** the survey[1] AND > reserve time in the ethercalc[2] by March 25 at 7:00 UTC.* > > We ask that the PTL/SIG Chair/Team lead sign up for time to have their > discussions in with 4 rules/guidelines. > > 1. Cross project discussions (like SIGs or support project teams) should > be scheduled towards the start of the week so that any discussions that > might shape those of other teams happen first. > 2. No team should sign up for more than 4 hours per UTC day to help keep > participants actively engaged. > 3. No team should sign up for more than 16 hours across all time slots > to avoid burning out our contributors and to enable participation in > multiple teams discussions. > > Again, you need to fill out BOTH the ethercalc AND the survey to > complete your team's sign up. > > If you have any issues with signing up your team, due to conflict or > otherwise, please let me know! While we are trying to empower you to > make your own decisions as to when you meet and for how long (after all, > you know your needs and teams timezones better than we do), we are here > to help! > > Once your team is signed up, please register! And remind your team to > register! Registration is free, but since it will be how we contact you > with passwords, event details, etc. it is still important! > > Continue to check back for updates at openstack.org/ptg > . > > -the Kendalls (diablo_rojo & wendallkaters) > > > [1] Team Survey: > https://openinfrafoundation.formstack.com/forms/april2021_vptg_survey > > [2] Ethercalc Signup: https://ethercalc.net/oz7q0gds9zfi > > [3] PTG Registration: https://april2021-ptg.eventbrite.com > > Is the Ethercalc link correct? I'm getting 502 Bad Gateway errors trying to load it in my browser. - Douglas Mendizábal (redrobot) From fungi at yuggoth.org Wed Mar 10 22:02:08 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Mar 2021 22:02:08 +0000 Subject: [PTLs][All] vPTG April 2021 Team Signup In-Reply-To: <50af2372-3f3f-e549-f0e9-295b03b8b1e3@redhat.com> References: <50af2372-3f3f-e549-f0e9-295b03b8b1e3@redhat.com> Message-ID: <20210310220208.566ffhqtowb3ena6@yuggoth.org> On 2021-03-10 15:47:55 -0600 (-0600), Douglas Mendizabal wrote: [...] > Is the Ethercalc link correct? I'm getting 502 Bad Gateway errors > trying to load it in my browser. I got the same error briefly too when I tried it, but then reloaded a few minutes later and it came up fine. May have been a transient problem now corrected, or could be something iffy with a load balancer or... -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gagehugo at gmail.com Wed Mar 10 22:07:52 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Wed, 10 Mar 2021 16:07:52 -0600 Subject: [security][security-sig] No meeting tomorrow March 11th Message-ID: The security sig meeting tomorrow has been cancelled. We will meet again at the regular time next week. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shengqin922 at 163.com Thu Mar 11 02:40:59 2021 From: shengqin922 at 163.com (ZTE) Date: Thu, 11 Mar 2021 10:40:59 +0800 (CST) Subject: [election][zun] PTL candidacy for Xena Message-ID: <7b159b78.ff6.1781f289f98.Coremail.shengqin922@163.com> Hi! I'd like to propose my candidacy and continue serving as Zun PTL in the Xena cycle. Over the Wallaby release, the Zun team continues to keep the Zun stable and reliable, and also some great works have been done. My goals for the Xena cycle are to continue the progress made in the following areas: * zun-conductor: Add a service called zun-conductor * CRI: Use the CRI interface to get inventory when cri driver configed for Capsule. * Affinity&anti-affinity: Add the affinity and anti-affinity policies to zun. Thank you for taking the time to consider me for Xena PTL. Best regards, Shengqin Feng -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasemin.demiral at tubitak.gov.tr Thu Mar 11 08:14:57 2021 From: yasemin.demiral at tubitak.gov.tr (Yasemin =?utf-8?Q?DEM=C4=B0RAL_=28B=C4=B0LGEM_BTE=29?=) Date: Thu, 11 Mar 2021 11:14:57 +0300 (EET) Subject: [trove] trove-agent can't connect postgresql container In-Reply-To: References: Message-ID: <565885711.55511185.1615450497419.JavaMail.zimbra@tubitak.gov.tr> Hi, I work on postgresql 12.4 datastore at OpenStack Victoria. Postgresql can't create user and database automatically with trove, but when i created database and user manually, i can connect to psql. I think postgresql container can't communicate the trove agent. How can I fix this? Thank you Yasemin DEMİRAL Araştırmacı Bulut Bilişim ve Büyük Veri Araştırma Lab. B ilişim Teknolojileri Enstitüsü TÜBİTAK BİLGEM 41470 Gebze, KOCAELİ T +90 262 675 2417 F +90 262 646 3187 [ http://bilgem.tubitak.gov.tr/ | www.bilgem.tubitak.gov.tr ] [ mailto:yasemin.demiral at tubitak.gov.tr | yasemin.demiral at tubitak.gov.tr ] [ mailto:ozgur.gun at tubitak.gov.tr | ................................................................ ] [ http://www.tubitak.gov.tr/tr/icerik-sorumluluk-reddi | Sorumluluk Reddi ] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bilgem.jpg Type: image/jpeg Size: 3031 bytes Desc: not available URL: From marios at redhat.com Thu Mar 11 10:12:52 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 11 Mar 2021 12:12:52 +0200 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> Message-ID: On Tue, Mar 9, 2021 at 7:20 PM Jeremy Stanley wrote: > On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: > > We got two similar release processing failures: > [...] > > Since this "host key verification failed" error hit tripleo-ipsec > > three times in a row, I suspect we have something stuck (or we > > always hit the same cache/worker). > > The errors happened when run on nodes in different providers. > Looking at the build logs, I also notice that they fail for the > stable/rocky branch but succeeded for other branches of the same > repository. They're specifically erroring when trying to reach the > Gerrit server over SSH, the connection details for which are encoded > on the .gitreview file in each branch. 
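For context, the .gitreview in question is only a few lines; on the stable/rocky branch of tripleo-ipsec it still points at the old hostname, roughly:

  [gerrit]
  host=review.openstack.org
  port=29418
  project=openstack/tripleo-ipsec.git

whereas newer branches carry host=review.opendev.org, which lines up with the hypothesis that follows.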
This leads me to wonder > whether there's something about the fact that the stable/rocky > branch of tripleo-ipsec is still using the old review.openstack.org > hostname to push, and maybe we're not pre-seeding an appropriate > hostkey entry for that in the known_hosts file? > Hi Thierry, Jeremy is there something tripleo can do to help here? Should I update that https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 to use opendev.org? Though I know Elod would have words with me if I did ;) as we declared it EOL @ https://review.opendev.org/c/openstack/releases/+/779218 regards, marios > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Mar 11 10:32:16 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 11:32:16 +0100 Subject: [release][oslo][monasca][keystone] Moving projects to independent Message-ID: Hello teams, We are about to move a couple of projects under the independent release models [1]. Those did not have any change merged over the cycle, we think that it is the right time to transition them to independent. Oslo: - futurist - debtcollector - osprofiler Keystone: - pycadf Monasca: - monasca-statsd Before we push the button we want to give ourselves a last chance to raise the hand if you think that it's an issue. Concerning Oslo's projects we already had a related discussion a few months ago and all participants agreed with that [2]. Please let us know ASAP if you disagree with that choice. Thanks for your attention. [1] https://review.opendev.org/q/topic:%22move-to-independent%22+(status:open%20OR%20status:merged) [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018527.html -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Mar 11 11:39:26 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 11 Mar 2021 12:39:26 +0100 Subject: [Release-job-failures] Release of openstack/monasca-grafana-datasource for ref refs/tags/1.3.0 failed In-Reply-To: References: Message-ID: We had a release job failure during the processing of the tag event when 1.3.0 was (successfully) pushed to openstack/monasca-grafana-datasource. 
Tags on this repository trigger the release-openstack-javascript job, which failed during pre playbook when trying to run yarn --version with the following error: /usr/share/yarn/lib/cli.js:46100 let { ^ SyntaxError: Unexpected token { at exports.runInThisContext (vm.js:53:16) at Module._compile (module.js:373:25) at Object.Module._extensions..js (module.js:416:10) at Module.load (module.js:343:32) at Function.Module._load (module.js:300:12) at Module.require (module.js:353:17) at require (internal/module.js:12:17) at Object. (/usr/share/yarn/bin/yarn.js:24:13) at Module._compile (module.js:409:26) at Object.Module._extensions..js (module.js:416:10) See https://zuul.opendev.org/t/openstack/build /cdffd2a26a0d4a5b8137edb392fa5971 This prevented the job from running (likely resulting in nothing being uploaded to NPM? Not a JS job specialist), which in turn prevented announce-release job from announcing it. -- Thierry Carrez (ttx) From thierry at openstack.org Thu Mar 11 11:42:37 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 11 Mar 2021 12:42:37 +0100 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> Message-ID: Marios Andreou wrote: > > > On Tue, Mar 9, 2021 at 7:20 PM Jeremy Stanley > wrote: > > On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: > > We got two similar release processing failures: > [...] > > Since this "host key verification failed" error hit tripleo-ipsec > > three times in a row, I suspect we have something stuck (or we > > always hit the same cache/worker). > > The errors happened when run on nodes in different providers. > Looking at the build logs, I also notice that they fail for the > stable/rocky branch but succeeded for other branches of the same > repository. They're specifically erroring when trying to reach the > Gerrit server over SSH, the connection details for which are encoded > on the .gitreview file in each branch. This leads me to wonder > whether there's something about the fact that the stable/rocky > branch of tripleo-ipsec is still using the old review.openstack.org > > hostname to push, and maybe we're not pre-seeding an appropriate > hostkey entry for that in the known_hosts file? > > > Hi Thierry, Jeremy > > is there something tripleo can do to help here? > > Should I update that > https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 > > to use opendev.org ? Though I know Elod would have > words with me if I did ;) as we declared it EOL @ > https://review.opendev.org/c/openstack/releases/+/779218 > I'm not sure... We'll discuss it during the release meeting today. -- Thierry From hberaud at redhat.com Thu Mar 11 12:17:15 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 13:17:15 +0100 Subject: [release][OpenstackSDK][masakari] request FFE on OpenstackSDK for python-masakariclient Message-ID: Hello folks, As discussed this morning on #openstack-release the masakari team needs OpenstackSDK changes that haven't been released. They have been merged after the final release [1]. Those aren't landed in OpenstackSDK version 0.54.0 [2]. 
Here is the delta of changes merged between 0.54.0 and 736f3aa1 ( https://review.opendev.org/c/openstack/openstacksdk/+/777299): $ git log --no-merges --online 0.54.0..736f3aa16c7ee95eb5a42f28062305c83cfd05e1 736f3aa1 add masakari enabled to segment Only these changes are in the delta. Do you mind if we start releasing a new version 0.55.0 to land these changes and unlock the path of the masakari team? Client library deadline is today. Also this topic opens the door for another discussion. Indeed it's questionable that openstacksdk should follow the early library deadline, for that precise reason. We release python-*client the same time as parent projects so that late features can be taken into account. It seems appropriate that openstacksdk is in the same bucket All those safeguards are a lot less needed now that we move slower and break less things. That could be translated by moving OpenstackSDK from `type: library` [3] to `type: client-library` [4]. Let us know what you think about the FFE. Let's open the discussions about the type shifting. [1] https://review.opendev.org/c/openstack/openstacksdk/+/777299 [2] https://opendev.org/openstack/releases/commit/b87ad4371321d5b0dde7ed0b236238585d5b74b3 [3] https://releases.openstack.org/reference/deliverable_types.html#library [4] https://releases.openstack.org/reference/deliverable_types.html#client-library -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Thu Mar 11 13:21:52 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 11 Mar 2021 14:21:52 +0100 Subject: [release][OpenstackSDK][masakari] request FFE on OpenstackSDK for python-masakariclient In-Reply-To: References: Message-ID: <327D3543-7F8B-4EEB-9D72-FF2EC66C72E1@gmail.com> Hi all, I have no problems with releasing another SDK version now. Currently 2 another mandatory changes landed in OSC, so it can be released if necessary as well. As a food for the discussion - SDK is not only client lib, but also a lib used in the core components (afaik Nova). So just changing label is not necessarily helping (correct me if I misinterpret the meanings of lib/client-lib) Regards, Artem > On 11. Mar 2021, at 13:17, Herve Beraud wrote: > > Hello folks, > > As discussed this morning on #openstack-release the masakari team needs OpenstackSDK changes that haven't been released. They have been merged after the final release [1]. Those aren't landed in OpenstackSDK version 0.54.0 [2]. 
> > Here is the delta of changes merged between 0.54.0 and 736f3aa1 (https://review.opendev.org/c/openstack/openstacksdk/+/777299 ): > > $ git log --no-merges --online 0.54.0..736f3aa16c7ee95eb5a42f28062305c83cfd05e1 > 736f3aa1 add masakari enabled to segment > > Only these changes are in the delta. > > Do you mind if we start releasing a new version 0.55.0 to land these changes and unlock the path of the masakari team? > > Client library deadline is today. > > Also this topic opens the door for another discussion. Indeed it's questionable that openstacksdk should follow the early library deadline, for that precise reason. We release python-*client the same time as parent projects so that late features can be taken into account. It seems appropriate that openstacksdk is in the same bucket > > All those safeguards are a lot less needed now that we move slower and break less things. > > That could be translated by moving OpenstackSDK from `type: library` [3] to `type: client-library` [4]. > > Let us know what you think about the FFE. > Let's open the discussions about the type shifting. > > [1] https://review.opendev.org/c/openstack/openstacksdk/+/777299 > [2] https://opendev.org/openstack/releases/commit/b87ad4371321d5b0dde7ed0b236238585d5b74b3 > [3] https://releases.openstack.org/reference/deliverable_types.html#library > [4] https://releases.openstack.org/reference/deliverable_types.html#client-library > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Thu Mar 11 13:21:59 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Thu, 11 Mar 2021 14:21:59 +0100 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: References: <721b4405-b19f-5433-feff-d595442ce6e4@gmail.com> <1924f447-2199-dfa4-4a70-575cda438107@inovex.de> <63f1d37c-f7fd-15b1-5f71-74e26c44ea94@inovex.de> Message-ID: <921d1e19-02ff-d26d-1b00-902da6b07e95@inovex.de> Hey Brian, On 25/02/2021 19:05, Brian Rosmaita wrote: >> Please kindly let me know if there is anything required to get this >> merged. > > We have to release wallaby os-brick next week, so highest priority > right now are os-brick reviews, but we'll get you some feedback on > your patch as soon as we can. There is one +1 from Sofia on the change now. Let me know if there is anything else missing or needs changing. Is this something that still could go into Wallaby BTW? Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elod.illes at est.tech Thu Mar 11 13:22:37 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 11 Mar 2021 14:22:37 +0100 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> Message-ID: <4fb951fa-0b94-dac2-97db-6f799c3986ce@est.tech> For the meeting, in advance, my opinion: What Jeremy wrote looks promising. I mean it might be that the root cause is the wrong hostname/hostkey. I think we have two options: 1. fix the hostkey in the known_hosts file (though I don't know how we could do this o:)) 2. technically the tagging has not happened yet (as far as I see), so we ask Marios (:)) to update the .gitreview and then we give another round for the 'remove' and 'readd' rocky-eol tag in tripleo-ipsec.yaml. We can discuss this further @ the meeting today. Cheers, Előd On 2021. 03. 11. 12:42, Thierry Carrez wrote: > Marios Andreou wrote: >> >> >> On Tue, Mar 9, 2021 at 7:20 PM Jeremy Stanley > > wrote: >> >>     On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: >>      > We got two similar release processing failures: >>     [...] >>      > Since this "host key verification failed" error hit tripleo-ipsec >>      > three times in a row, I suspect we have something stuck (or we >>      > always hit the same cache/worker). >> >>     The errors happened when run on nodes in different providers. >>     Looking at the build logs, I also notice that they fail for the >>     stable/rocky branch but succeeded for other branches of the same >>     repository. They're specifically erroring when trying to reach the >>     Gerrit server over SSH, the connection details for which are encoded >>     on the .gitreview file in each branch. This leads me to wonder >>     whether there's something about the fact that the stable/rocky >>     branch of tripleo-ipsec is still using the old review.openstack.org >>     >>     hostname to push, and maybe we're not pre-seeding an appropriate >>     hostkey entry for that in the known_hosts file? >> >> >> Hi Thierry, Jeremy >> >> is there something tripleo can do to help here? >> >> Should I update that >> https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 >> >> to use opendev.org ? Though I know Elod would >> have words with me if I did ;) as we declared it EOL @ >> https://review.opendev.org/c/openstack/releases/+/779218 >> > > I'm not sure... We'll discuss it during the release meeting today. > From midhunlaln66 at gmail.com Thu Mar 11 13:24:31 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Thu, 11 Mar 2021 18:54:31 +0530 Subject: Windows vm is not launching from horizon Message-ID: Hi all, I successfully installed the openstack rocky version and everything is working properly. ---->Uploaded all images to openstack and I can see all images listed through openstack CLI. 
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 1912a80a-1139-4e00-b0d9-e98dc801e54b | CentOS-7-x86_64     | active |
| 595b928f-8a32-4107-b6e8-fea294d3f7f1 | CentOS-8-x86_64     | active |
| f531869a-e0ac-4690-8f79-4ff64d284576 | Ubuntu-18.04-x86_64 | active |
| 7860193b-81bd-4a22-846c-ae3a89117f9d | Ubuntu-20.04-x86_64 | active |
| 1c3a9878-6db3-441e-9a85-73d3bc4a87a7 | cirros              | active |
| d321f3a3-08a3-47bf-8f53-6aa46a6a4b52 | windows10           | active |
+--------------------------------------+---------------------+--------+

--->The same images are visible in the Horizon dashboard, but I am not able to
launch a Windows VM from Horizon. All of the other images launch fine from the
dashboard.

Getting the below error:
Error: Failed to perform requested operation on instance "windows", the
instance has an error status: Please try again later [Error: Build of instance
15a3047d-dfca-4615-9d34-c46764b703ff aborted: Volume
5ca83635-a2fe-458c-9053-cb7a9804156b did not finish being created even after
we waited 190 seconds or 61 attempts. And its status is downloading.].

--->I am able to launch the Windows VM from the OpenStack CLI.

What is the issue? Why am I not able to launch it from the dashboard? Please
help me.

Thanks & Regards
Midhunlal N B
+918921245637
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com  Thu Mar 11 13:51:46 2021
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 11 Mar 2021 13:51:46 +0000
Subject: Windows vm is not launching from horizon
In-Reply-To: 
References: 
Message-ID: <5c4b80a2e9c6daeb3e9f5c49b7533dac04bc2106.camel@redhat.com>

On Thu, 2021-03-11 at 18:54 +0530, Midhunlal Nb wrote:
> Hi all,
> I successfully installed the openstack rocky version and everything is
> working properly.
>
> ---->Uploaded all images to openstack and I can see all images listed
> through openstack CLI.
try adding  [DEFAULT] block_device_allocate_retries_interval=10 to your nova.conf or nova-cpu.conf if this is a devstack install if you look in the cinder driver log you will likely see its still creating the volume form the image and if you use htop/ps on the host with the cinder volume driver you will see the qemu-img command running. other ways to work around this is to confgiure glance to use cider for image storage that will allow the cinder backend to create a volumn snapshot for the new volume instaed of making a full copy. over all that is much more effeicnt and works similar to when you use ceph for both glance and cinder. > me. > > Thanks & Regards > Midhunlal N B > +918921245637 From kira034 at 163.com Thu Mar 11 14:19:21 2021 From: kira034 at 163.com (Hongbin Lu) Date: Thu, 11 Mar 2021 22:19:21 +0800 (CST) Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations March 2021 End In-Reply-To: References: Message-ID: <164d5ea5.6b38.17821a7ffd3.Coremail.kira034@163.com> Hi, FYI. Zun has a candidancy for Xena: http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020998.html . Shengqin will continue to serve as Zun PTL. Best regards, Hongbin At 2021-03-10 07:52:01, "Kendall Nelson" wrote: Hello! The PTL and TC Nomination period is now over. The official candidate lists for PTLs [0] and TC seats [1] are available on the election website. -- PTL Election Details -- There are 8 projects without candidates, so according to this resolution[2], the TC will have to decide how the following projects will proceed: Barbican, Cyborg, Keystone, Mistral, Monasca, Senlin, Zaqar, Zun -- TC Election Details -- There are 0 projects that will have elections. Now begins the campaigning period where candidates and electorate may debate their statements. Polling will start Mar 12, 2021 23:45 UTC. Thank you, -Kendall Nelson (diablo_rojo) & the Election Officials [0] https://governance.openstack.org/election/#xena-ptl-candidates [1] https://governance.openstack.org/election/#xena-tc-candidates [2] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Mar 11 14:25:17 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 11 Mar 2021 15:25:17 +0100 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> <68d4b804-5729-e313-7f29-6c7b14166c5c@nemebean.com> <176d8d1747f.e25ad718873974.5391430581306589135@ghanshyammann.com> Message-ID: On Mon, 11 Jan 2021 at 18:09, Ben Nemec wrote: > On 1/6/21 3:23 PM, Pierre Riteau wrote: > > On Wed, 6 Jan 2021 at 18:58, Ghanshyam Mann wrote: > >> > >> ---- On Wed, 06 Jan 2021 10:34:35 -0600 Ben Nemec wrote ---- > >> > > >> > > >> > On 1/5/21 3:51 PM, Jeremy Stanley wrote: > >> > > On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: > >> > >> There have been many patches submitted to drop the Python 3.7 > >> > >> classifier from setup.cfg: > >> > >> https://review.opendev.org/q/%2522remove+py37%2522 > >> > >> The justification is that Wallaby tested runtimes only include 3.6 and 3.8. 
> >> > >> > >> > >> Most projects are merging these patches, but I've seen a couple of > >> > >> objections from ironic and horizon: > >> > >> > >> > >> - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 > >> > >> - https://review.opendev.org/c/openstack/horizon/+/769237 > >> > >> > >> > >> What are the thoughts of the TC and of the overall community on this? > >> > >> Should we really drop these classifiers when there are no > >> > >> corresponding CI jobs, even though more Python versions may well be > >> > >> supported? > >> > > > >> > > My recollection of the many discussions we held was that the runtime > >> > > document would recommend the default python3 available in our > >> > > targeted platforms, but that we would also make a best effort to > >> > > test with the latest python3 available to us at the start of the > >> > > cycle as well. It was suggested more than once that we should test > >> > > all minor versions in between, but this was ruled out based on the > >> > > additional CI resources it would consume for minimal gain. Instead > >> > > we deemed that testing our target version and the latest available > >> > > would give us sufficient confidence that, if those worked, the > >> > > versions in between them were likely fine as well. Based on that, I > >> > > think the versions projects claim to work with should be contiguous > >> > > ranges, not contiguous lists of the exact versions tested (noting > >> > > that those aren't particularly *exact* versions to begin with). > >> > > > >> > > Apologies for the lack of references to old discussions, I can > >> > > probably dig some up from the ML and TC meetings several years back > >> > > of folks think it will help inform this further. > >> > > > >> > > >> > For what little it's worth, that jives with my hazy memories of the > >> > discussion too. The assumption was that if we tested the upper and lower > >> > bounds of our Python versions then the ones in the middle would be > >> > unlikely to break. It was a compromise to support multiple versions of > >> > Python without spending a ton of testing resources on it. > >> > >> > >> Exactly, py3.7 is not broken for OpenStack so declaring it not supported is not the right thing. > >> I remember the discussion when we declared the wallaby (probably from Victoria) testing runtime, > >> we decided if we test py3.6 and py3.8 it means we are not going to break py3.7 support so indirectly > >> it is tested and supported. > >> > >> And testing runtime does not mean we have to drop everything else testing means projects are all > >> welcome to keep running the py3.7 testing job on the gate there is no harm in that. > >> > >> In both cases, either project has an explicit py3.7 job or not we should not remove it from classifiers. > >> > >> > >> -gmann > > > > Thanks everyone for your input. Then should we request that those > > patches dropping the 3.7 classifier are abandoned, or reverted if > > already merged? > > > > That would be my takeaway from this discussion, yes. I saw that many projects which had merged the "remove py37" patches (e.g. Masakari, Vitrage) have now reverted them, thanks! Skimming through Gerrit, I noticed that Cyborg hasn't merged the revert commits: - https://review.opendev.org/c/openstack/cyborg/+/770719 - https://review.opendev.org/c/openstack/python-cyborgclient/+/770911 From mbultel at redhat.com Thu Mar 11 14:38:40 2021 From: mbultel at redhat.com (Mathieu Bultel) Date: Thu, 11 Mar 2021 15:38:40 +0100 Subject: [tripleo] Nominate David J. 
Peacock (dpeacock) for Validation Framework Core In-Reply-To: References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> Message-ID: Hey, Thank you Gael, +2 obviously :) Mathieu On Tue, Mar 9, 2021 at 5:06 PM Marios Andreou wrote: > > > On Tue, Mar 9, 2021 at 5:52 PM Gaël Chamoulaud > wrote: > >> On 09/Mar/2021 17:46, Marios Andreou wrote: >> > >> > >> > On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud >> wrote: >> > >> > Hi TripleO Devs, >> > >> > David is already a key member of our team since a long time now, he >> > provided all the needed ansible roles for the Validation Framework >> into >> > tripleo-ansible-operator. He continuously provides excellent code >> reviews >> > and he >> > is a source of great ideas for the future of the Validation >> Framework. >> > That's >> > why we would highly benefit from his addition to the core reviewer >> team. >> > >> > Assuming that there are no objections, we will add David to the >> core team >> > next >> > week. >> > >> > >> > o/ Gael >> > >> > so it is clear and fair for everyone (e.g. I've been approached by >> others about >> > candidates for tripleo-core) >> > >> > I'd like to be clear on your proposal here because I don't think we >> have a >> > 'validation framework core' group in gerrit - do we? >> > >> > Is your proposal that David is added to the tripleo-core group [1] with >> the >> > understanding that voting rights will be exercised only in the >> following repos: >> > tripleo-validations, validations-common and validations-libs? >> >> Yes exactly! Sorry for the confusion. >> > > > ACK no problem ;) As I said we need to be transparent and fair towards > everyone. > > +1 from me to your proposal. > > Being obligated to do so at PTL ;) I did a quick review of activities. I > can see that David has been particularly active in Wallaby [1] but has made > tripleo contributions going back to 2017 [2] - I cannot see some reason to > object to the proposal! > > regards, marios > > [1] > https://www.stackalytics.io/?module=tripleo-group&project_type=openstack&user_id=davidjpeacock&metric=marks&release=wallaby > [2] https://review.opendev.org/q/owner:davidjpeacock > > > >> >> > thanks, marios >> > >> > [1] https://review.opendev.org/admin/groups/ >> > 0319cee8020840a3016f46359b076fa6b6ea831a >> > >> > >> > >> > >> > >> > Thanks, David, for your excellent work! >> > >> > -- >> > Gaël Chamoulaud - (He/Him/His) >> > .::. Red Hat .::. OpenStack .::. >> > .::. DFG:DF Squad:VF .::. >> > >> >> -- >> Gaël Chamoulaud - (He/Him/His) >> .::. Red Hat .::. OpenStack .::. >> .::. DFG:DF Squad:VF .::. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Thu Mar 11 14:39:45 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 11 Mar 2021 14:39:45 +0000 Subject: [cinder] review priorities for the next few days In-Reply-To: <16bfaaa8-5761-eb83-efda-cf5030b10a49@gmail.com> References: <16bfaaa8-5761-eb83-efda-cf5030b10a49@gmail.com> Message-ID: Brian, Can you also add https://review.opendev.org/c/openstack/cinder/+/773854 To drivers for Wallaby. It has been waiting for reviews for quite some time. 
Thanks, Arkayd -----Original Message----- From: Brian Rosmaita Sent: Wednesday, March 10, 2021 2:37 PM To: openstack-discuss at lists.openstack.org Subject: [cinder] review priorities for the next few days [EXTERNAL EMAIL] Here's the list of cinder and driver features that haven't yet merged: https://etherpad.opendev.org/p/cinder-wallaby-features Please make reviewing these your top priorities. We'll get back to reviewing bug fixes next week. cheers, brian From skaplons at redhat.com Thu Mar 11 14:40:59 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 11 Mar 2021 15:40:59 +0100 Subject: [neutron] Drivers meeting - agenda for 12.03.2021 Message-ID: <20210311144059.aelx36hhnndswo2c@p1.localdomain> Hi, For tomorrow's drivers meeting we have 2 RFEs related to the integration of Neutron with Designate: * https://bugs.launchpad.net/neutron/+bug/1904559 * https://bugs.launchpad.net/neutron/+bug/1918424 I didn't marked them as triaged yet as I'm not really sure what more data we could need in both cases. I hope that we will get some more questions during the meeting so we can at least together triage them :) Of course, please check those RFEs and ask for anything You need there, even before the meeting :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From yasemin.demiral at tubitak.gov.tr Thu Mar 11 14:46:48 2021 From: yasemin.demiral at tubitak.gov.tr (Yasemin =?utf-8?Q?DEM=C4=B0RAL_=28B=C4=B0LGEM_BTE=29?=) Date: Thu, 11 Mar 2021 17:46:48 +0300 (EET) Subject: Windows vm is not launching from horizon In-Reply-To: References: Message-ID: <1915077082.56120094.1615474008539.JavaMail.zimbra@tubitak.gov.tr> Hi Do you have virtio driver your windows images ? You should create windows images with [ https://github.com/cloudbase/windows-openstack-imaging-tools | https://github.com/cloudbase/windows-openstack-imaging-tools ] for openstack. You can try this: [ https://cloudbase.it/windows-cloud-images/ | https://cloudbase.it/windows-cloud-images/ ] Thanks Yasemin DEMİRAL Araştırmacı Bulut Bilişim ve Büyük Veri Araştırma Lab. B ilişim Teknolojileri Enstitüsü TÜBİTAK BİLGEM 41470 Gebze, KOCAELİ T +90 262 675 2417 F +90 262 646 3187 [ http://bilgem.tubitak.gov.tr/ | www.bilgem.tubitak.gov.tr ] [ mailto:yasemin.demiral at tubitak.gov.tr | yasemin.demiral at tubitak.gov.tr ] [ mailto:ozgur.gun at tubitak.gov.tr | ................................................................ ] [ http://www.tubitak.gov.tr/tr/icerik-sorumluluk-reddi | Sorumluluk Reddi ] Kimden: "Midhunlal Nb" Kime: "openstack-discuss" , "Satish Patel" Gönderilenler: 11 Mart Perşembe 2021 16:24:31 Konu: Windows vm is not launching from horizon Hi all, I successfully installed the openstack rocky version and everything is working properly. ---->Uploaded all images to openstack and I can see all images listed through openstack CLI. 
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 1912a80a-1139-4e00-b0d9-e98dc801e54b | CentOS-7-x86_64     | active |
| 595b928f-8a32-4107-b6e8-fea294d3f7f1 | CentOS-8-x86_64     | active |
| f531869a-e0ac-4690-8f79-4ff64d284576 | Ubuntu-18.04-x86_64 | active |
| 7860193b-81bd-4a22-846c-ae3a89117f9d | Ubuntu-20.04-x86_64 | active |
| 1c3a9878-6db3-441e-9a85-73d3bc4a87a7 | cirros              | active |
| d321f3a3-08a3-47bf-8f53-6aa46a6a4b52 | windows10           | active |
+--------------------------------------+---------------------+--------+
--->The same images are visible through the Horizon dashboard, but I am not able to launch the Windows VM from Horizon; the remaining images all launch fine from the dashboard. I get the error below:
Error: Failed to perform requested operation on instance "windows", the instance has an error status: Please try again later [Error: Build of instance 15a3047d-dfca-4615-9d34-c46764b703ff aborted: Volume 5ca83635-a2fe-458c-9053-cb7a9804156b did not finish being created even after we waited 190 seconds or 61 attempts. And its status is downloading.].
---> I am able to launch the Windows VM from the OpenStack CLI. What is the issue? Why am I not able to launch it from the dashboard? Please help me.
Thanks & Regards Midhunlal N B +918921245637
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From gmann at ghanshyammann.com Thu Mar 11 14:47:09 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 11 Mar 2021 14:47:09 -0600 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations March 2021 End In-Reply-To: <164d5ea5.6b38.17821a7ffd3.Coremail.kira034@163.com> References: <164d5ea5.6b38.17821a7ffd3.Coremail.kira034@163.com> Message-ID: <17821c1743d.f17233d0360467.502278629083295333@ghanshyammann.com> ---- On Thu, 11 Mar 2021 08:19:21 -0600 Hongbin Lu wrote ---- > Hi, > FYI. Zun has a candidacy for Xena: http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020998.html . Shengqin will continue to serve as Zun PTL. Thanks, We noted that in the etherpad[1] and as the next step the TC will discuss appointing Shengqin as Zun PTL. [1] https://etherpad.opendev.org/p/xena-leaderless -gmann > > Best regards,Hongbin > > > > > > At 2021-03-10 07:52:01, "Kendall Nelson" wrote: > Hello! > The PTL and TC Nomination period is now over. The official candidate lists > for PTLs [0] and TC seats [1] are available on the election website. > > -- PTL Election Details -- > There are 8 projects without candidates, so according to this > resolution[2], the TC will have to decide how the following > projects will proceed: Barbican, Cyborg, Keystone, Mistral, Monasca, Senlin, Zaqar, Zun > > -- TC Election Details -- > There are 0 projects that will have elections. > > Now begins the campaigning period where candidates and electorate may debate their statements. > > Polling will start Mar 12, 2021 23:45 UTC.
> > Thank you, > -Kendall Nelson (diablo_rojo) & the Election Officials > > [0] https://governance.openstack.org/election/#xena-ptl-candidates > [1] https://governance.openstack.org/election/#xena-tc-candidates > [2] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html > > > > From hberaud at redhat.com Thu Mar 11 14:57:05 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 15:57:05 +0100 Subject: [release][OpenstackSDK][masakari] request FFE on OpenstackSDK for python-masakariclient In-Reply-To: <327D3543-7F8B-4EEB-9D72-FF2EC66C72E1@gmail.com> References: <327D3543-7F8B-4EEB-9D72-FF2EC66C72E1@gmail.com> Message-ID: Le jeu. 11 mars 2021 à 14:22, Artem Goncharov a écrit : > Hi all, > > I have no problems with releasing another SDK version now. Currently 2 > another mandatory changes landed in OSC, so it can be released if necessary > as well. > Thanks, then I'll proceed. > > As a food for the discussion - SDK is not only client lib, but also a lib > used in the core components (afaik Nova). So just changing label is not > necessarily helping (correct me if I misinterpret the meanings of > lib/client-lib) > Good point, I added this topic to our meeting agenda to discuss more deeply about that. > > Regards, > Artem > > On 11. Mar 2021, at 13:17, Herve Beraud wrote: > > Hello folks, > > As discussed this morning on #openstack-release the masakari team needs > OpenstackSDK changes that haven't been released. They have been merged > after the final release [1]. Those aren't landed in OpenstackSDK version > 0.54.0 [2]. > > Here is the delta of changes merged between 0.54.0 and 736f3aa1 ( > https://review.opendev.org/c/openstack/openstacksdk/+/777299): > > $ git log --no-merges --online > 0.54.0..736f3aa16c7ee95eb5a42f28062305c83cfd05e1 > 736f3aa1 add masakari enabled to segment > > Only these changes are in the delta. > > Do you mind if we start releasing a new version 0.55.0 to land these > changes and unlock the path of the masakari team? > > Client library deadline is today. > > Also this topic opens the door for another discussion. Indeed it's > questionable that openstacksdk should follow the early library deadline, > for that precise reason. We release python-*client the same time as parent > projects so that late features can be taken into account. It seems > appropriate that openstacksdk is in the same bucket > > All those safeguards are a lot less needed now that we move slower and > break less things. > > That could be translated by moving OpenstackSDK from `type: library` [3] > to `type: client-library` [4]. > > Let us know what you think about the FFE. > Let's open the discussions about the type shifting. 
> > [1] https://review.opendev.org/c/openstack/openstacksdk/+/777299 > [2] > https://opendev.org/openstack/releases/commit/b87ad4371321d5b0dde7ed0b236238585d5b74b3 > [3] > https://releases.openstack.org/reference/deliverable_types.html#library > [4] > https://releases.openstack.org/reference/deliverable_types.html#client-library > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Mar 11 15:00:24 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 16:00:24 +0100 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: <4fb951fa-0b94-dac2-97db-6f799c3986ce@est.tech> References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> <4fb951fa-0b94-dac2-97db-6f799c3986ce@est.tech> Message-ID: 2) can be done easily. My 2 cents on it for now Le jeu. 11 mars 2021 à 14:24, Előd Illés a écrit : > For the meeting, in advance, my opinion: > > What Jeremy wrote looks promising. I mean it might be that the root > cause is the wrong hostname/hostkey. > > I think we have two options: > > 1. fix the hostkey in the known_hosts file (though I don't know how we > could do this o:)) > 2. technically the tagging has not happened yet (as far as I see), so we > ask Marios (:)) to update the .gitreview and then we give another round > for the 'remove' and 'readd' rocky-eol tag in tripleo-ipsec.yaml. > > We can discuss this further @ the meeting today. > > Cheers, > > Előd > > > On 2021. 03. 11. 
12:42, Thierry Carrez wrote: > > Marios Andreou wrote: > >> > >> > >> On Tue, Mar 9, 2021 at 7:20 PM Jeremy Stanley >> > wrote: > >> > >> On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: > >> > We got two similar release processing failures: > >> [...] > >> > Since this "host key verification failed" error hit tripleo-ipsec > >> > three times in a row, I suspect we have something stuck (or we > >> > always hit the same cache/worker). > >> > >> The errors happened when run on nodes in different providers. > >> Looking at the build logs, I also notice that they fail for the > >> stable/rocky branch but succeeded for other branches of the same > >> repository. They're specifically erroring when trying to reach the > >> Gerrit server over SSH, the connection details for which are encoded > >> on the .gitreview file in each branch. This leads me to wonder > >> whether there's something about the fact that the stable/rocky > >> branch of tripleo-ipsec is still using the old review.openstack.org > >> > >> hostname to push, and maybe we're not pre-seeding an appropriate > >> hostkey entry for that in the known_hosts file? > >> > >> > >> Hi Thierry, Jeremy > >> > >> is there something tripleo can do to help here? > >> > >> Should I update that > >> > https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 > >> < > https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2> > > >> to use opendev.org ? Though I know Elod would > >> have words with me if I did ;) as we declared it EOL @ > >> https://review.opendev.org/c/openstack/releases/+/779218 > >> > > > > I'm not sure... We'll discuss it during the release meeting today. > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Mar 11 15:11:17 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 16:11:17 +0100 Subject: [release][OpenstackSDK][masakari] request FFE on OpenstackSDK for python-masakariclient In-Reply-To: References: <327D3543-7F8B-4EEB-9D72-FF2EC66C72E1@gmail.com> Message-ID: Please have a look at the patch and let us know if that works for you. https://review.opendev.org/c/openstack/releases/+/780013 Thanks & Regards Le jeu. 11 mars 2021 à 15:57, Herve Beraud a écrit : > > > Le jeu. 11 mars 2021 à 14:22, Artem Goncharov > a écrit : > >> Hi all, >> >> I have no problems with releasing another SDK version now. Currently 2 >> another mandatory changes landed in OSC, so it can be released if necessary >> as well. 
>> > > Thanks, then I'll proceed. > > >> >> As a food for the discussion - SDK is not only client lib, but also a lib >> used in the core components (afaik Nova). So just changing label is not >> necessarily helping (correct me if I misinterpret the meanings of >> lib/client-lib) >> > > Good point, I added this topic to our meeting agenda to discuss more > deeply about that. > > >> >> Regards, >> Artem >> >> On 11. Mar 2021, at 13:17, Herve Beraud wrote: >> >> Hello folks, >> >> As discussed this morning on #openstack-release the masakari team needs >> OpenstackSDK changes that haven't been released. They have been merged >> after the final release [1]. Those aren't landed in OpenstackSDK version >> 0.54.0 [2]. >> >> Here is the delta of changes merged between 0.54.0 and 736f3aa1 ( >> https://review.opendev.org/c/openstack/openstacksdk/+/777299): >> >> $ git log --no-merges --online >> 0.54.0..736f3aa16c7ee95eb5a42f28062305c83cfd05e1 >> 736f3aa1 add masakari enabled to segment >> >> Only these changes are in the delta. >> >> Do you mind if we start releasing a new version 0.55.0 to land these >> changes and unlock the path of the masakari team? >> >> Client library deadline is today. >> >> Also this topic opens the door for another discussion. Indeed it's >> questionable that openstacksdk should follow the early library deadline, >> for that precise reason. We release python-*client the same time as parent >> projects so that late features can be taken into account. It seems >> appropriate that openstacksdk is in the same bucket >> >> All those safeguards are a lot less needed now that we move slower and >> break less things. >> >> That could be translated by moving OpenstackSDK from `type: library` [3] >> to `type: client-library` [4]. >> >> Let us know what you think about the FFE. >> Let's open the discussions about the type shifting. 
>> >> [1] https://review.opendev.org/c/openstack/openstacksdk/+/777299 >> [2] >> https://opendev.org/openstack/releases/commit/b87ad4371321d5b0dde7ed0b236238585d5b74b3 >> [3] >> https://releases.openstack.org/reference/deliverable_types.html#library >> [4] >> https://releases.openstack.org/reference/deliverable_types.html#client-library >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpodivin at redhat.com Thu Mar 11 15:12:08 2021 From: jpodivin at redhat.com (Jiri Podivin) Date: Thu, 11 Mar 2021 16:12:08 +0100 Subject: [tripleo] Nominate David J. 
Peacock (dpeacock) for Validation Framework Core In-Reply-To: References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> Message-ID: Hi, I'm not core, just a member of the VF squad. I want to say that I would certainly like David to be VF Core. On Thu, Mar 11, 2021 at 3:44 PM Mathieu Bultel wrote: > Hey, > > Thank you Gael, > > +2 obviously :) > > Mathieu > > On Tue, Mar 9, 2021 at 5:06 PM Marios Andreou wrote: > >> >> >> On Tue, Mar 9, 2021 at 5:52 PM Gaël Chamoulaud >> wrote: >> >>> On 09/Mar/2021 17:46, Marios Andreou wrote: >>> > >>> > >>> > On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud >>> wrote: >>> > >>> > Hi TripleO Devs, >>> > >>> > David is already a key member of our team since a long time now, he >>> > provided all the needed ansible roles for the Validation Framework >>> into >>> > tripleo-ansible-operator. He continuously provides excellent code >>> reviews >>> > and he >>> > is a source of great ideas for the future of the Validation >>> Framework. >>> > That's >>> > why we would highly benefit from his addition to the core reviewer >>> team. >>> > >>> > Assuming that there are no objections, we will add David to the >>> core team >>> > next >>> > week. >>> > >>> > >>> > o/ Gael >>> > >>> > so it is clear and fair for everyone (e.g. I've been approached by >>> others about >>> > candidates for tripleo-core) >>> > >>> > I'd like to be clear on your proposal here because I don't think we >>> have a >>> > 'validation framework core' group in gerrit - do we? >>> > >>> > Is your proposal that David is added to the tripleo-core group [1] >>> with the >>> > understanding that voting rights will be exercised only in the >>> following repos: >>> > tripleo-validations, validations-common and validations-libs? >>> >>> Yes exactly! Sorry for the confusion. >>> >> >> >> ACK no problem ;) As I said we need to be transparent and fair towards >> everyone. >> >> +1 from me to your proposal. >> >> Being obligated to do so at PTL ;) I did a quick review of activities. I >> can see that David has been particularly active in Wallaby [1] but has made >> tripleo contributions going back to 2017 [2] - I cannot see some reason to >> object to the proposal! >> >> regards, marios >> >> [1] >> https://www.stackalytics.io/?module=tripleo-group&project_type=openstack&user_id=davidjpeacock&metric=marks&release=wallaby >> [2] https://review.opendev.org/q/owner:davidjpeacock >> >> >> >>> >>> > thanks, marios >>> > >>> > [1] https://review.opendev.org/admin/groups/ >>> > 0319cee8020840a3016f46359b076fa6b6ea831a >>> > >>> > >>> > >>> > >>> > >>> > Thanks, David, for your excellent work! >>> > >>> > -- >>> > Gaël Chamoulaud - (He/Him/His) >>> > .::. Red Hat .::. OpenStack .::. >>> > .::. DFG:DF Squad:VF .::. >>> > >>> >>> -- >>> Gaël Chamoulaud - (He/Him/His) >>> .::. Red Hat .::. OpenStack .::. >>> .::. DFG:DF Squad:VF .::. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Mar 11 16:04:59 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 11 Mar 2021 21:34:59 +0530 Subject: [ops][glance][security] looking for metadefs users Message-ID: Hello operators and other people interested in metadefs, The Glance team will be giving the metadefs some love in the Xena development cycle in order to address OSSN-0088 [0]. 
The people who designed and implemented metadefs are long gone, and in determining how to fix OSSN-0088, we would like to understand how people are actually using them in the wild so we don't restrict them so much as to make them useless. We are looking for an operator who uses metadefs to give us a walkthrough on how you are using them at the Xena (virtual) PTG. We are planning to have this session on 23rd April around 1400 UTC. You can find more details about the same in PTG planning etherpad [1]. We are also willing to meet outside the PTG schedule in case the current scheduled time might be blocking the people. I will also reply to this as a reminder mail once our PTG schedule is final. If you do not use the metadef API for some reason related to its inability to solve a problem, lack of flexibility, or other reasons (but wish you could), we would also like to hear about that. We need to know if the feature is worth fixing and maintaining going forward. And when we say "an operator", we don't mean just one ... ideally, we'd like to have a few real-life use cases to consider. If this is affecting you (as an operator) then you can reach us either by mail or #openstack-glance IRC channel or glance weekly meeting [2] which will be held every Thursday around 1400 UTC. [0] https://wiki.openstack.org/wiki/OSSN/OSSN-0088 [1] https://etherpad.opendev.org/p/xena-ptg-glance-planning [2] https://etherpad.opendev.org/p/glance-team-meeting-agenda Thank you and Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Mar 11 16:23:34 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 16:23:34 +0000 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> Message-ID: <20210311162334.iyy7lzjdkumveuia@yuggoth.org> On 2021-03-11 12:12:52 +0200 (+0200), Marios Andreou wrote: [...] > is there something tripleo can do to help here? > > Should I update that > https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 > to use opendev.org? Though I know Elod would have words with me if I did ;) > as we declared it EOL @ > https://review.opendev.org/c/openstack/releases/+/779218 Yeah, it's interesting that the job wants to push anything for that branch to start with. But when we finish the work to delete those EOL branches I suppose the problem will disappear anyway. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Thu Mar 11 16:26:55 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 16:26:55 +0000 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: <20210311162334.iyy7lzjdkumveuia@yuggoth.org> References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> <20210311162334.iyy7lzjdkumveuia@yuggoth.org> Message-ID: <20210311162655.wyao6mywy3ejllew@yuggoth.org> On 2021-03-11 16:23:34 +0000 (+0000), Jeremy Stanley wrote: [...] > Yeah, it's interesting that the job wants to push anything for that > branch to start with. 
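For reference, the stanza in question is the repository's .gitreview file; a rough before/after sketch, assuming the stable/rocky copy still carries the pre-OpenDev hostname (check the linked file rather than trusting this):

    # stable/rocky today
    [gerrit]
    host=review.openstack.org
    port=29418
    project=openstack/tripleo-ipsec.git

    # after an update to the current Gerrit
    [gerrit]
    host=review.opendev.org
    port=29418
    project=openstack/tripleo-ipsec.git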
But when we finish the work to delete those > EOL branches I suppose the problem will disappear anyway. Oh, right, the branch is not *yet* EOL, and it's breaking when trying to push the rocky-eol tag I guess? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Thu Mar 11 16:33:18 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 11 Mar 2021 18:33:18 +0200 Subject: [TripleO] next irc meeting Tuesday Mar 16 @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 16 March at 1400 UTC in #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to highlight at https://etherpad.opendev.org/p/tripleo-meeting-items This can be recently completed things, ongoing review requests, blocking issues, or anything else tripleo you want to share. Our last meeting was on Mar 02nd - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-03-02-14.00.html Hope you can make it on Tuesday, thanks, marios From smooney at redhat.com Thu Mar 11 17:24:26 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 11 Mar 2021 17:24:26 +0000 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: References: Message-ID: On Thu, 2021-03-11 at 21:34 +0530, Abhishek Kekane wrote: > Hello operators and other people interested in metadefs, > > The Glance team will be giving the metadefs some love in the Xena > development cycle in order to address OSSN-0088 [0]. > > The people who designed and implemented metadefs are long gone, and in > determining how to fix OSSN-0088, we would like to understand how people > are actually using them in the wild so we don't restrict them so much as to > make them useless. the metadef api was orginally created as a centralised catalog for defineing all teh tuneable that can be defiend via metadata,extra specs or as attibutes on vairous resouces across multipel porjects. https://docs.openstack.org/glance/latest/user/metadefs-concepts.html#background has a table covering most of them it was intended to provide a programitc way for clients to discover what option are valid and is use by horizon and heat to generate uis and validate input. https://pasteboard.co/JS99sgU.png the list of available extra specs for the flavor metadta api is generated dirctly form the metadefs api including the desciption we se for "hw:mem_page_size". wehere a validator is specifd such as for hw:cpu_policy a drop down list in the case of enums or other validation can be applied by horizon to the parmaters. https://github.com/openstack/glance/blob/45749c30c1c02375a85eb17be0ccd983c695953f/etc/metadefs/compute-cpu-pinning.json#L23-L31 "cpu_policy": { "title": "CPU Pinning policy", "description": "Type of CPU pinning policy.", "type": "string", "enum": [ "shared", "dedicated" ] }, this same information used to be used in heat whan you a heat template to generate and validat the allowed values for some parmater although i dont knwo if that is still used. heat certely uses info form nova to get the list of flavor exctra but i belive it also used this informate when generation the ui for templates that defiled new flavors. 
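to make that concrete, the same catalogue horizon renders can be read straight from the Image API v2 metadefs endpoints; a minimal sketch, where the token and endpoint variables are just placeholders and the namespace name is the one defined in the compute-cpu-pinning.json file linked above:

    # list every metadata definition namespace visible to the caller
    curl -s -H "X-Auth-Token: $OS_TOKEN" \
        "$GLANCE_ENDPOINT/v2/metadefs/namespaces" | python3 -m json.tool

    # show a single namespace with its properties, e.g. the CPU pinning options
    curl -s -H "X-Auth-Token: $OS_TOKEN" \
        "$GLANCE_ENDPOINT/v2/metadefs/namespaces/OS::Compute::CPUPinning" | python3 -m json.tool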
while this was never integratein into the unified openstack client to enabel validation of flavors,images ectra that was part of the eventual design goal. at present this is the only openstack api i am currently aware of that allows you to programaticaly diccover this informateion. > > We are looking for an operator who uses metadefs to give us a walkthrough > on how you are using them at the Xena (virtual) PTG. We are planning to > have this session on 23rd April around 1400 UTC. You can find more details > about the same in PTG planning etherpad [1]. We are also willing to meet > outside the PTG schedule in case the current scheduled time might be > blocking the people. I will also reply to this as a reminder mail once our > PTG schedule is final. > > If you do not use the metadef API for some reason related to its inability > to solve a problem, lack of flexibility, or other reasons (but wish you > could), we would also like to hear about that. We need to know if the > feature is worth fixing and maintaining going forward. i still think this is a valueable feature that i which was used more often it may seam odd now that galce was choosen as the central registry for storing this information but if this api was removed i think it would be important for all project that have this type of metadta to have an alternitve metond to advertise this info. > > And when we say "an operator", we don't mean just one ... ideally, we'd > like to have a few real-life use cases to consider. looking at the OSSN https://wiki.openstack.org/wiki/OSSN/OSSN-0088 i am rather suprised that writing to this api was not admin only. i had alway tought that it was in the past and that readign form it was the only thing that was globally accessable as a normal user. i would suggest that one possible fix would be to alter the policy so that writing to this api is admin only. at the ptg we coudl discuss shoudl that be extended to user too but i dont personally see a good usecase for normally users to be able to create new metadefs. disableing it would break the current functionaliyt in horizon so i do not think that would be a good ensuer experince. > > If this is affecting you (as an operator) then you can reach us either by > mail or #openstack-glance IRC channel or glance weekly meeting [2] which > will be held every Thursday around 1400 UTC. > > [0] https://wiki.openstack.org/wiki/OSSN/OSSN-0088 > [1] https://etherpad.opendev.org/p/xena-ptg-glance-planning > [2] https://etherpad.opendev.org/p/glance-team-meeting-agenda > > > Thank you and Best Regards, > > Abhishek Kekane From yasufum.o at gmail.com Thu Mar 11 17:25:30 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Fri, 12 Mar 2021 02:25:30 +0900 Subject: [tacker] Xena PTG Message-ID: <025d0b80-5571-882a-a93f-331e4a25813f@gmail.com> Hi, Next vPTG is going to be held 19-23 April. I've prepared etherpad for the PTG[1] and reserved same timeslots as previous, 6-8 am UTC from 20th Apr[2]. Please register from [3] and fill "Attendees" and "Topics" in the etherpad for your proposals. 
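to make the admin-only suggestion concrete, here is a sketch of the kind of override an operator could put in glance's policy file today; the rule names below are an illustrative subset of the metadef write policies, so cross-check them against the sample generated for your own release (oslopolicy-sample-generator --namespace glance) before applying anything:

    # keep the read side open so horizon can still build its forms,
    # but require admin for anything that creates or changes definitions
    "metadef_admin": "role:admin"
    "add_metadef_namespace": "rule:metadef_admin"
    "modify_metadef_namespace": "rule:metadef_admin"
    "delete_metadef_namespace": "rule:metadef_admin"
    "add_metadef_object": "rule:metadef_admin"
    "modify_metadef_object": "rule:metadef_admin"
    "delete_metadef_object": "rule:metadef_admin"
    "add_metadef_property": "rule:metadef_admin"
    "modify_metadef_property": "rule:metadef_admin"
    "remove_metadef_property": "rule:metadef_admin"
    "add_metadef_tags": "rule:metadef_admin"
    "modify_metadef_tag": "rule:metadef_admin"
    "delete_metadef_tags": "rule:metadef_admin"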
[1] https://etherpad.opendev.org/p/tacker-xena-ptg [2] https://ethercalc.net/oz7q0gds9zfi [3] https://april2021-ptg.eventbrite.com/ Thanks, Yasufumi From dms at danplanet.com Thu Mar 11 17:46:47 2021 From: dms at danplanet.com (Dan Smith) Date: Thu, 11 Mar 2021 09:46:47 -0800 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: (Sean Mooney's message of "Thu, 11 Mar 2021 17:24:26 +0000") References: Message-ID: > it was intended to provide a programitc way for clients to discover > what option are valid and is use by horizon and heat to generate uis > and validate input. https://pasteboard.co/JS99sgU.png Ah, okay, good to know that Horizon uses this, thanks. That closes the loop quite a bit, and I assume explains why some people submit updates to the static definitions now and then. > this same information used to be used in heat whan you a heat template > to generate and validat the allowed values for some parmater although > i dont knwo if that is still used. heat certely uses info form nova > to get the list of flavor exctra but i belive it also used this > informate when generation the ui for templates that defiled new > flavors. Okay, it would be good to know if Heat uses this as well. > looking at the OSSN https://wiki.openstack.org/wiki/OSSN/OSSN-0088 i > am rather suprised that writing to this api was not admin only. i had > alway tought that it was in the past and that readign form it was the > only thing that was globally accessable as a normal user. Yeah, the API seems to clearly allow for public and private namespaces (at least) and loosely ties the structures in the database to a namespace for ownership. I think that means it was expected to be usable by regular users and providing some amount of isolation between them. It does not, however, seem to be very good at keeping private things private :) A lot of things in glance from the same time period did admin-only policy enforcement in code instead of policy, which is probably another indication that it was _expected_ to be usable by regular users. > i would suggest that one possible fix would be to alter the policy so > that writing to this api is admin only. at the ptg we coudl discuss > shoudl that be extended to user too but i dont personally see a good > usecase for normally users to be able to create new metadefs. > > disableing it would break the current functionaliyt in horizon so i do > not think that would be a good ensuer experince. Yeah, if Horizon uses this to make things pick-and-choose'able to the user, it would be nice to not just fully disable it. Right now, regular users can create these resources, and may not be aware that the names they choose are leaked to other users. Further, the creation of those things are not constrained by any limit, which is also a problem. If the main usage pattern of this is for admins to define these things (or just take all the defaults) and users just need to be able to see the largely-public lists of things they can choose from, then limiting creation to admins by default instead of fully disabling everything seems like a good course of action. I guess the remaining thing I'd like to know is: does anyone want or expect unprivileged users to be able to create these resources? Admins should still be aware that the naming is potentially leaky, and especially if they create these resources for special customers. 
They also may want to audit their systems for any resources that users have created up to this point, which may expose something if they keep read access enabled for everyone. It might be good if we can amend the recommendation to explain the impact of disabling everything on Horizon, along with the recommendation to restrict creation to admin-only and audit. Not sure what the procedure is for that. Thanks Sean! --Dan From fungi at yuggoth.org Thu Mar 11 18:12:12 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 18:12:12 +0000 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: References: Message-ID: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> On 2021-03-11 09:46:47 -0800 (-0800), Dan Smith wrote: [...] > It might be good if we can amend the recommendation to explain the > impact of disabling everything on Horizon, along with the recommendation > to restrict creation to admin-only and audit. Not sure what the > procedure is for that. The recommendation is in a wiki article[1] (in the form of an OSSN document), so can be freely edited. But if someone makes significant updates to the recommendation then we should probably also send an errata announcement to the openstack-announce and openstack-discuss mailing lists detailing what's changed since initial publication. The OSSN process[2] doesn't mandate any particular errata steps, but we can use our own judgement to determine what may be additionally worth announcing/updating for it. [1] https://wiki.openstack.org/wiki/OSSN/OSSN-0088 [2] https://wiki.openstack.org/wiki/Security/Security_Note_Process -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tobias.urdin at binero.com Thu Mar 11 16:18:40 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Thu, 11 Mar 2021 16:18:40 +0000 Subject: [stein][neutron] gratuitous arp In-Reply-To: References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> , Message-ID: <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> Hello, Not sure if you are having the same issue as us, but we are following https://bugs.launchpad.net/neutron/+bug/1901707 but are patching it with something similar to https://review.opendev.org/c/openstack/nova/+/741529 to workaround the issue until it's completely solved. Best regards ________________________________ From: Ignazio Cassano Sent: Wednesday, March 10, 2021 7:57:21 AM To: Sean Mooney Cc: openstack-discuss; Slawek Kaplonski Subject: Re: [stein][neutron] gratuitous arp Hello All, please, are there news about bug 1815989 ? On stein I modified code as suggested in the patches. I am worried when I will upgrade to train: wil this bug persist ? On which openstack version this bug is resolved ? Ignazio Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano > ha scritto: Hello, I tried to update to last stein packages on yum and seems this bug still exists. Before the yum update I patched some files as suggested and and ping to vm worked fine. After yum update the issue returns. Please, let me know If I must patch files by hand or some new parameters in configuration can solve and/or the issue is solved in newer openstack versions. Thanks Ignazio Il Mer 29 Apr 2020, 19:49 Sean Mooney > ha scritto: On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > Many thanks. > Please keep in touch. here are the two patches. 
the first https://review.opendev.org/#/c/724386/ is the actual change to add the new config opition this needs a release note and some tests but it shoudl be functional hence the [WIP] i have not enable the workaround in any job in this patch so the ci run will assert this does not break anything in the default case the second patch is https://review.opendev.org/#/c/724387/ which enables the workaround in the multi node ci jobs and is testing that live migration exctra works when the workaround is enabled. this should work as it is what we expect to happen if you are using a moderne nova with an old neutron. its is marked [DNM] as i dont intend that patch to merge but if the workaround is useful we migth consider enableing it for one of the jobs to get ci coverage but not all of the jobs. i have not had time to deploy a 2 node env today but ill try and test this locally tomorow. > Ignazio > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney > > ha scritto: > > > so bing pragmatic i think the simplest path forward given my other patches > > have not laned > > in almost 2 years is to quickly add a workaround config option to disable > > mulitple port bindign > > which we can backport and then we can try and work on the actual fix after. > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that shoudl > > serve as a workaround > > for thos that hav this issue but its a regression in functionality. > > > > i can create a patch that will do that in an hour or so and submit a > > followup DNM patch to enabel the > > workaound in one of the gate jobs that tests live migration. > > i have a meeting in 10 mins and need to finish the pacht im currently > > updating but ill submit a poc once that is done. > > > > im not sure if i will be able to spend time on the actul fix which i > > proposed last year but ill see what i can do. > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > PS > > > I have testing environment on queens,rocky and stein and I can make test > > > as you need. 
> > > Ignazio > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > ignaziocassano at gmail.com> ha scritto: > > > > > > > Hello Sean, > > > > the following is the configuration on my compute nodes: > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > As far as firewall driver > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > testing > > > > environment and the > > > > same firewall driver. > > > > Live migration on provider network on queens works fine. > > > > It does not work fine on rocky and stein (vm lost connection after it > > > > is > > > > migrated and start to respond only when the vm send a network packet , > > > > for > > > > example when chrony pools the time server). > > > > > > > > Ignazio > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > smooney at redhat.com> > > > > ha scritto: > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > Hello, some updated about this issue. > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > If you read the discussion, someone tells that the garp must be > > > > sent by > > > > > > qemu during live miration. > > > > > > If this is true, this means on rocky/stein the qemu/libvirt are > > > > bugged. > > > > > > > > > > it is not correct. 
> > > > > qemu/libvir thas alsway used RARP which predates GARP to serve as > > > > its mac > > > > > learning frames > > > > > instead > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > but was fixed by > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > can you confirm you are not using the broken 2.6.0 release and are > > > > using > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > libvirt/qemu > > > > > > packages I installed on queens (I updated compute and controllers > > > > node > > > > > > > > > > on > > > > > > queens for obtaining same libvirt/qemu version deployed on rocky > > > > and > > > > > > > > > > stein). > > > > > > > > > > > > On queens live migration on provider network continues to work > > > > fine. > > > > > > On rocky and stein not, so I think the issue is related to > > > > openstack > > > > > > components . > > > > > > > > > > on queens we have only a singel prot binding and nova blindly assumes > > > > > that the port binding details wont > > > > > change when it does a live migration and does not update the xml for > > > > the > > > > > netwrok interfaces. > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > post_livemigration > > > > > in rocky+ neutron optionally uses the multiple port bindings flow to > > > > > prebind the port to the destiatnion > > > > > so it can update the xml if needed and if post copy live migration is > > > > > enable it will asyconsly activate teh dest port > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > if you are using the iptables firewall os-vif will have precreated > > > > the > > > > > ovs port and intermediate linux bridge before the > > > > > migration started which will allow neutron to wire it up (put it on > > > > the > > > > > correct vlan and install security groups) before > > > > > the vm completes the migraton. > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh ovs > > > > port > > > > > but libvirt deletes it and recreats it too. > > > > > as a result there is a race when using openvswitch firewall that can > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > Best Regards > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > smooney at redhat.com> > > > > > > ha scritto: > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > Hello, I have this problem with rocky or newer with > > > > iptables_hybrid > > > > > > > > firewall. > > > > > > > > So, can I solve using post copy live migration ??? 
> > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky the > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > spec intoduced teh ablity to shorten the outage by pre biding the > > > > > > > > > > port and > > > > > > > activating it when > > > > > > > the vm is resumed on the destiation host before we get to pos > > > > live > > > > > > > > > > migrate. > > > > > > > > > > > > > > this reduces the outage time although i cant be fully elimiated > > > > as > > > > > > > > > > some > > > > > > > level of packet loss is > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be aware > > > > that > > > > > > > > > > if a > > > > > > > network partion happens > > > > > > > during a post copy live migration the vm will crash and need to > > > > be > > > > > > > restarted. > > > > > > > it is generally safe to use and will imporve the migration > > > > performace > > > > > > > > > > but > > > > > > > unlike pre copy migration if > > > > > > > the guess resumes on the dest and the mempry page has not been > > > > copied > > > > > > > > > > yet > > > > > > > then it must wait for it to be copied > > > > > > > and retrive it form the souce host. if the connection too the > > > > souce > > > > > > > > > > host > > > > > > > is intrupted then the vm cant > > > > > > > do that and the migration will fail and the instance will crash. > > > > if > > > > > > > > > > you > > > > > > > are using precopy migration > > > > > > > if there is a network partaion during the migration the > > > > migration will > > > > > > > fail but the instance will continue > > > > > > > to run on the source host. > > > > > > > > > > > > > > so while i would still recommend using it, i it just good to be > > > > aware > > > > > > > > > > of > > > > > > > that behavior change. > > > > > > > > > > > > > > > Thanks > > > > > > > > Ignazio > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney > ha > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano wrote: > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm migrate > > > > > > > > > > from one > > > > > > > > > > > > > > node > > > > > > > > > > to another I cannot ping it for several minutes. If in the > > > > vm I > > > > > > > > > > put a > > > > > > > > > > script that ping the gateway continously, the live > > > > migration > > > > > > > > > > works > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > and > > > > > > > > > > I can ping it. Why this happens ? I read something about > > > > > > > > > > gratuitous > > > > > > > > > > > > > > arp. > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an older > > > > > > > > > > protocal > > > > > > > > > > > > > > called > > > > > > > > > RARP > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you using > > > > > > > > > > iptables > > > > > > > > > firewall of openvswitch firewall. > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we can do > > > > until > > > > > > > > > > we > > > > > > > > > finally delegate vif pluging to os-vif. 
> > > > > > > > > currently libvirt handels interface plugging for kernel ovs > > > > when > > > > > > > > > > using > > > > > > > > > > > > > > the > > > > > > > > > openvswitch firewall driver > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress that > > > > but it > > > > > > > > > > and > > > > > > > > > > > > > > the > > > > > > > > > neutron patch are > > > > > > > > > https://review.opendev.org/#/c/640258 rather out dated. > > > > while > > > > > > > > > > libvirt > > > > > > > > > > > > > > is > > > > > > > > > pluging the vif there will always be > > > > > > > > > a race condition where the RARP packets sent by qemu and > > > > then mac > > > > > > > > > > > > > > learning > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have opnestack > > > > > > > > > > rock or > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > it should reduce the downtime. in this conficution we do not > > > > have > > > > > > > > > > the > > > > > > > > > > > > > > race > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Thu Mar 11 18:50:18 2021 From: dms at danplanet.com (Dan Smith) Date: Thu, 11 Mar 2021 10:50:18 -0800 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> (Jeremy Stanley's message of "Thu, 11 Mar 2021 18:12:12 +0000") References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> Message-ID: > The recommendation is in a wiki article[1] (in the form of an OSSN > document), so can be freely edited. But if someone makes significant > updates to the recommendation then we should probably also send an > errata announcement to the openstack-announce and openstack-discuss > mailing lists detailing what's changed since initial publication. > The OSSN process[2] doesn't mandate any particular errata steps, but > we can use our own judgement to determine what may be additionally > worth announcing/updating for it. Cool, thanks. I think the recommendation is "review and reconsider the default policy for this feature" and "here is what we think is a good default if you don't otherwise know". Changing our recommended default to be a more generally-applicable doesn't seem to alter the general message to me, so I'd tend think just editing the wiki page is fine. 
--Dan From anlin.kong at gmail.com Thu Mar 11 19:21:23 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 12 Mar 2021 08:21:23 +1300 Subject: [trove] trove-agent can't connect postgresql container In-Reply-To: <565885711.55511185.1615450497419.JavaMail.zimbra@tubitak.gov.tr> References: <565885711.55511185.1615450497419.JavaMail.zimbra@tubitak.gov.tr> Message-ID: We've had a chat on irc, for anyone interested in the discussion, please see http://eavesdrop.openstack.org/irclogs/%23openstack-trove/%23openstack-trove.2021-03-11.log.html --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Thu, Mar 11, 2021 at 9:15 PM Yasemin DEMİRAL (BİLGEM BTE) < yasemin.demiral at tubitak.gov.tr> wrote: > Hi, > > I work on postgresql 12.4 datastore at OpenStack Victoria. Postgresql > can't create user and database automatically with trove, but when i created > database and user manually, i can connect to psql. I think postgresql > container can't communicate the trove agent. How can I fix this? > > > Thank you > > *Yasemin DEMİRAL* > > Araştırmacı > > * Bulut Bilişim ve Büyük Veri Araştırma Lab.* > > *B**ilişim Teknolojileri Enstitüsü* > > TÜBİTAK BİLGEM > > 41470 Gebze, KOCAELİ > > *T* +90 262 675 2417 > > *F* +90 262 646 3187 > > www.bilgem.tubitak.gov.tr > > yasemin.demiral at tubitak.gov.tr > > ................................................................ > > > Sorumluluk Reddi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bilgem.jpg Type: image/jpeg Size: 3031 bytes Desc: not available URL: From fungi at yuggoth.org Thu Mar 11 19:27:31 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 19:27:31 +0000 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> Message-ID: <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> On 2021-03-11 10:50:18 -0800 (-0800), Dan Smith wrote: [...] > I think the recommendation is "review and reconsider the default > policy for this feature" and "here is what we think is a good > default if you don't otherwise know". Changing our recommended > default to be a more generally-applicable doesn't seem to alter > the general message to me, so I'd tend think just editing the wiki > page is fine. Seems reasonable to me. What do other folks think? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From moreira.belmiro.email.lists at gmail.com Thu Mar 11 19:59:02 2021 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Thu, 11 Mar 2021 20:59:02 +0100 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> Message-ID: Hi, we use metadef to validate our custom metadata in Horizon when users create instances. In our use case, metadef should be written only by admin. +1 to review the default policy. Belmiro CERN On Thu, Mar 11, 2021 at 8:34 PM Jeremy Stanley wrote: > On 2021-03-11 10:50:18 -0800 (-0800), Dan Smith wrote: > [...] 
> > I think the recommendation is "review and reconsider the default > > policy for this feature" and "here is what we think is a good > > default if you don't otherwise know". Changing our recommended > > default to be a more generally-applicable doesn't seem to alter > > the general message to me, so I'd tend think just editing the wiki > > page is fine. > > Seems reasonable to me. What do other folks think? > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Mar 11 20:22:21 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 11 Mar 2021 14:22:21 -0600 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> Message-ID: <17822f45716.e9a71caf378190.8285456419800974556@ghanshyammann.com> ---- On Thu, 11 Mar 2021 13:59:02 -0600 Belmiro Moreira wrote ---- > Hi,we use metadef to validate our custom metadata in Horizon when users create instances. > In our use case, metadef should be written only by admin.+1 to review the default policy. In a quick search, interop certification guidelines 1] also does not use these API capabilities so changing to admin should be fine from interop and so does from Tempest test modification point of view. [1] https://opendev.org/osf/interop -gmann > BelmiroCERN > On Thu, Mar 11, 2021 at 8:34 PM Jeremy Stanley wrote: > On 2021-03-11 10:50:18 -0800 (-0800), Dan Smith wrote: > [...] > > I think the recommendation is "review and reconsider the default > > policy for this feature" and "here is what we think is a good > > default if you don't otherwise know". Changing our recommended > > default to be a more generally-applicable doesn't seem to alter > > the general message to me, so I'd tend think just editing the wiki > > page is fine. > > Seems reasonable to me. What do other folks think? > -- > Jeremy Stanley > From fungi at yuggoth.org Thu Mar 11 20:54:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 20:54:17 +0000 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: <17822f45716.e9a71caf378190.8285456419800974556@ghanshyammann.com> References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> <17822f45716.e9a71caf378190.8285456419800974556@ghanshyammann.com> Message-ID: <20210311205417.jjcdudejvo4ejdsr@yuggoth.org> On 2021-03-11 14:22:21 -0600 (-0600), Ghanshyam Mann wrote: [...] > In a quick search, interop certification guidelines 1] also does > not use these API capabilities so changing to admin should be fine > from interop and so does from Tempest test modification point of > view. [...] Yep, if you check out the original bug reports leading up to the OSSN, we did at least confirm these were not part of any trademark program requirement before recommending that access be blocked. That was one of our deciding factors in the disclosure timeline. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From victoria at vmartinezdelacruz.com Thu Mar 11 21:00:22 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Thu, 11 Mar 2021 22:00:22 +0100 Subject: [manila] [FFE] Request for "Update cephfs drivers to use ceph-mgr" and "create share from snapshot support for CephFS" Message-ID: Hi, I would like to ask for an FFE for the RFEs "Update cephfs drivers to use ceph-mgr" [0] and "Create share from snapshot support for CephFS" [1] Both RFEs are related. The first one updates the cephfs drivers to use the ceph-mgr interface for all manila operations. This change is required since the library currently used is already deprecated and it is expected to be removed in the next Ceph release. The second one leverages the previous one and adds a new functionality available through the ceph-mgr interface which is the create share from snapshot support. We have been working on both features for two cycles already and we could use a few more days to finish the testing. Looking forward to a positive response. Thanks, Victoria [0] https://blueprints.launchpad.net/manila/+spec/update-cephfs-drivers [1] https://blueprints.launchpad.net/manila/+spec/create-share-from-snapshot-cephfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Mar 11 21:29:55 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 11 Mar 2021 22:29:55 +0100 Subject: [masakari] The legacy (client) has been dropped... finally! Message-ID: Hello Fellow OpenStackers, The Masakari team is proud to announce that the legacy Masakari client is no more! It has been deprecated since Stein and was less featureful than the OpenStack client plugin that was fully supported for a long time. The OSC plugin is also entirely based on OpenStack SDK. All Masakari client parts are available there. (This applies to Wallaby [version 7.0.0] and later.) -yoctozepto From luke.camilleri at zylacomputing.com Thu Mar 11 21:39:05 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Thu, 11 Mar 2021 22:39:05 +0100 Subject: Keystone enforced policy issues - Victoria Message-ID: Hi there, we have been troubleshooting keystone policies for a couple of days now (Victoria) and would like to reach out to anyone with some experience on the keystone policies. Basically we would like to follow the standard admin, member and reader roles together with the scopes function and have therefore enabled the below two option in keystone.conf enforce_scope = true enforce_new_defaults = true once enabled we started seeing the below error related to token validation in keystone.log: 2021-03-11 19:50:12.009 1047463 WARNING keystone.server.flask.application [req-33cda154-1d54-447e-8563-0676dc5d8471 020bd854741e4ba69c87d4142cad97a5 c584121f9d1a4334a3853c61bb5b9d93 - default default] You are not authorized to perform the requested action: identity:validate_token.: keystone.exception.ForbiddenAction: You are not authorized to perform the requested action: identity:validate_token. The policy was previously setup as: identity:validate_token: rule:service_admin_or_token_subject but has now been implemented with the new scope format to: identity:validate_token: (role:reader and system_scope:all) or rule:service_role or rule:token_subject If we change the policy to the old one, we stop receiving the identity:validate_token exception. 
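For reference, here is a minimal sketch of the relevant pieces as we currently understand them (section names, file paths and exact rule strings are illustrative and may need adjusting for your deployment). The two options were enabled under [oslo_policy] in keystone.conf:

[oslo_policy]
enforce_scope = true
enforce_new_defaults = true

and the temporary override is simply the old rule dropped back into the policy file, for example:

# revert validate_token to the pre-scope default rule
"identity:validate_token": "rule:service_admin_or_token_subject"
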
The exception only appears in Horizon; running the same commands in the CLI (python-openstack-client) does not output any errors. To work around this behavior we place the old policy for the validate_token rule in /usr/share/openstack-dashboard/openstack_dashboard/conf/keystone_policy.yaml and in the file /etc/keystone/keystone.conf

A second way to solve the validate_token exception is to disable the "enforce_new_defaults = true" option enabled above, which will obviously allow the deprecated policy rules to become effective again and hence has the same behavior as implementing the old policy as we did.

We would like to know if anyone has had this behavior and how it has been solved; maybe someone can point us in the right direction to better identify what is going on.

Last but not least, it seems that from Horizon the admin user is being assigned a project-scoped token instead of a system-scoped token, while from the CLI the same admin user can successfully issue a system:all token and run commands across all resources. We would be very happy to receive any form of input related to the above issues we are facing.

Thanks in advance for any assistance

From smooney at redhat.com Thu Mar 11 22:04:36 2021
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 11 Mar 2021 22:04:36 +0000
Subject: [Nova][FFE] libvirt vdpa support
Message-ID: <4f89537fa9972b37014e24f332e6a22e34f233f3.camel@redhat.com>

Hi everyone,
It's that time again where features are almost ready but not quite.
vDPA (vHost data path acceleration) is an optional kernel device offload API that allows virtio-compliant software devices, such as a virtio net interface, to be offloaded to a software or hardware accelerator such as a NIC. The full details can be found in the spec:
https://specs.openstack.org/openstack/nova-specs/specs/wallaby/approved/libvirt-vdpa-support.html

The series is up for review and gibi and stephen have been actively reviewing it over the last few days.
https://review.opendev.org/q/topic:%22vhost-vdpa%22+(status:open%20OR%20status:merged)

While the majority of the code now has +2s, it won't be approved before the end of the day, so I am formally asking for a feature freeze exception to allow an additional day or two to finalise the feature.

The current status of the series is that there is one bug related to claiming a PCI device and marking the parent as unavailable, and claiming the parent and marking the child VF/VDPA devices as unavailable. I have addressed this locally and will be pushing a patch to address that later this evening once I add unit tests for those edge cases.

There is one pending patch to block unsupported lifecycle operations, which I will be writing tomorrow. That final patch will also document them, both in a release note and in the api-guide.

In the event the full series is not deemed to be ready in time, I have also written a separate patch to block booting VMs with VDPA ports now that neutron supports it but nova does not,
https://review.opendev.org/c/openstack/nova/+/780065
If we do not merge the full series I would like to ask that we merge the first 3 patches.

openstack/nova: master: block vm boot with vdpa ports
openstack/nova: master: objects: Add 'VDPA' to 'PciDeviceType'
openstack/nova: master: add constants for vnic type vdpa

The reason for the blocking patch is that, now that neutron supports vnic-type vdpa, I believe it is possible to create a port of that type and for nova to boot a VM; however, it will not actually use vDPA.
I believe it will take this else branch
https://github.com/openstack/nova/blob/master/nova/network/os_vif_util.py#L349-L357
and result in the VM booting with a standard OVS port as if it was vnic-type=normal, which will be confusing to debug since you were expecting to get hardware-offloaded OVS but just got a standard OVS port instead, without hardware offloads. Operators can also block vnic-type vdpa in the neutron config,
https://github.com/openstack/neutron/blob/a9fc746249cd34cb7cc594c0b4d74d8ddf65bd46/neutron/conf/plugins/ml2/drivers/openvswitch/mech_ovs_conf.py#L21-L37
but I think the nova solution is nicer since it does not require them to update their configs if this feature is not completed in Wallaby.

I do believe we can complete this feature with only a small amount of additional work and can continue to add more testing and lifecycle operations next cycle. Stephen has submitted https://review.opendev.org/c/openstack/nova/+/780112 to start extending the functional test framework to emulate vDPA devices so that we can harden the validation in the future via functional tests, in addition to the existing unit test coverage. I have not reviewed it yet so can't say how long that will take, but I think that is something we can work on and ensure it is merged early next cycle if it is not completed early next week.

Regards,
Sean

From gouthampravi at gmail.com Fri Mar 12 00:45:04 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 11 Mar 2021 16:45:04 -0800 Subject: [manila] [FFE] Request for "Update cephfs drivers to use ceph-mgr" and "create share from snapshot support for CephFS" In-Reply-To: References: Message-ID: On Thu, Mar 11, 2021 at 1:05 PM Victoria Martínez de la Cruz wrote: > > Hi, > > I would like to ask for an FFE for the RFEs "Update cephfs drivers to use ceph-mgr" [0] and "Create share from snapshot support for CephFS" [1] > > Both RFEs are related. > > The first one updates the cephfs drivers to use the ceph-mgr interface for all manila operations. This change is required since the library currently used is already deprecated and it is expected to be removed in the next Ceph release. > > The second one leverages the previous one and adds a new functionality available through the ceph-mgr interface which is the create share from snapshot support. > > We have been working on both features for two cycles already and we could use a few more days to finish the testing. Thank you for your work on this for the past two cycles. It is difficult to coordinate changes within CephFS and manila at the same time. The ceph community dropping support for ceph_volume_client does put manila users of CephFS in peril when they have to upgrade ceph in a future release. Since we don't know how long the Wallaby release will be used by our users, I don't mind us taking this extra time to test your refactor. The work here, per [0] and [1] should only affect an optional backend driver in manila, and adds no new requirements; nor does it affect other projects or clients. Is that correct? Since Manila doesn't get translations, we don't have the risk of introducing user facing strings in this driver code. So I'm okay with approving this FFE. Please ensure we can wrap this up early so we have sufficient time to test after these changes are merged. > > Looking forward to a positive response.
> > Thanks, > > Victoria > > [0] https://blueprints.launchpad.net/manila/+spec/update-cephfs-drivers > [1] https://blueprints.launchpad.net/manila/+spec/create-share-from-snapshot-cephfs From ricolin at ricolky.com Fri Mar 12 04:58:31 2021 From: ricolin at ricolky.com (Rico Lin) Date: Fri, 12 Mar 2021 12:58:31 +0800 Subject: [heat] Xena PTG: same schedule as meeting time (4/21)(2 hours) Message-ID: Hi all As VPTG approaches (PTG will be held April 19th through April 23rd), I reserve a room on 4/21 Wed. from 14:00-16:00 (UTC time). We will use Meetpad. this time. Let me know if you have any questions. Please put your name in the PTG etherpad [1] if you plan to join. Also putting topic suggestions or comments on is more than welcome. Note: Don't forget to register PTG (for free) on [2]. [1] https://etherpad.opendev.org/p/xena-ptg-heat [2] PTG Registration: https://april2021-ptg.eventbrite.com *Rico Lin* OIF Board director, OpenSack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack *Email: ricolin at ricolky.com * *Phone: +886-963-612-021* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricolin at ricolky.com Fri Mar 12 05:05:22 2021 From: ricolin at ricolky.com (Rico Lin) Date: Fri, 12 Mar 2021 13:05:22 +0800 Subject: [Multi-arch SIG] Xena PTG Message-ID: Hi all As VPTG approaches (PTG will be held April 19th through April 23rd), I reserve a room on 4/20 Tuesday from 07:00-08:00 and 15:00-16:00 (UTC time). Exactly same as meeting schedule. We will use Meetpad this time. Let me know if you have any questions. You can find us on irc #openstack-multi-arch Please put your name in the PTG etherpad [1] if you plan to join. Also putting topic suggestions or comments on is more than welcome. Note: Don't forget to register PTG (for free) on [2]. [1] https://etherpad.opendev.org/p/xena-ptg-multi-arch-sig [2] PTG Registration: https://april2021-ptg.eventbrite.com *Rico Lin* OIF Board director, OpenSack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack *Email: ricolin at ricolky.com * *Phone: +886-963-612-021* -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Fri Mar 12 05:28:57 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 12 Mar 2021 10:58:57 +0530 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: <20210311205417.jjcdudejvo4ejdsr@yuggoth.org> References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> <17822f45716.e9a71caf378190.8285456419800974556@ghanshyammann.com> <20210311205417.jjcdudejvo4ejdsr@yuggoth.org> Message-ID: On Fri, Mar 12, 2021 at 2:27 AM Jeremy Stanley wrote: > On 2021-03-11 14:22:21 -0600 (-0600), Ghanshyam Mann wrote: > [...] > > In a quick search, interop certification guidelines 1] also does > > not use these API capabilities so changing to admin should be fine > > from interop and so does from Tempest test modification point of > > view. > [...] > > Yep, if you check out the original bug reports leading up to the > OSSN, we did at least confirm these were not part of any trademark > program requirement before recommending that access be blocked. That > was one of our deciding factors in the disclosure timeline. > -- > Jeremy Stanley > Thanks to Sean and Belmiro for confirming how and where metadefs are used. I think it makes more sense now to keep these metadef create/update/delete APIs admin-only and grant read-only access to normal users. 
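For operators who want to apply that restriction before new defaults are available, a rough sketch of a policy override in Glance's policy file could look like the following (illustrative only; the metadef rule names may differ between releases, so please double-check against the policy defaults shipped with your Glance version):

# keep metadef writes admin-only; reads stay at their defaults so
# Horizon's metadata validation keeps working for normal users
"add_metadef_namespace": "role:admin"
"modify_metadef_namespace": "role:admin"
"delete_metadef_namespace": "role:admin"
# ...and similarly for the object/property/tag/resource-type write rules
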
In the advisory we should also specify that there is still a possibility of an information leak in this case.

Thanks and Regards,

Abhishek Kekane
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yongli.he at intel.com Fri Mar 12 07:18:58 2021
From: yongli.he at intel.com (yonglihe)
Date: Fri, 12 Mar 2021 15:18:58 +0800
Subject: [Nova][FFE] Smart-nic support
Message-ID: <83cf823f-04ab-4f3a-a6c3-3bca365303df@intel.com>

Hi Everyone,

Smart NIC management involves Nova, Neutron and Cyborg. After 2 releases of discussing, coding and reviewing, we have merged the Cyborg and Neutron support. The Nova patches have also had lots of attention and many rounds of review, plus several +2s.

I hope we can merge the Nova patches in this release as well. The following are the Nova patch topic, the individual patch links, and related resource links.

We are now blocked by a number of review comments as we reach the end of the feature freeze. Those comments could be addressed in 2-3 days; then, with 3 more rounds of review, we could very likely make it by RC1.

Those comments include:
• functional tests -- already pushed
• minor typos, text changes, and requests for more code comments
• several code flow refactor requests
• how to store the arq-uuid

==nova patch topic link:
https://review.opendev.org/q/topic:%22bp%252Fsriov-smartnic-support%22+(status:open%20OR%20status:merged)

==nova individual patch links:
1) smartnic support - cyborg driver: two +2s
https://review.opendev.org/c/openstack/nova/+/771362

2) smartnic support - new vnic type: several rounds of review
https://review.opendev.org/c/openstack/nova/+/771363

3) smartnic support: main patch, several rounds of review, one +2
https://review.opendev.org/c/openstack/nova/+/758944

4) smartnic support - reject server move and suspend, one +2
https://review.opendev.org/c/openstack/nova/+/779913

5) smartnic support - functional tests
https://review.opendev.org/c/openstack/nova/+/780147

== resources:
1) blueprint sriov-smartnic-support
https://blueprints.launchpad.net/nova/+spec/sriov-smartnic-support

2) Approved spec: merged
https://review.opendev.org/c/openstack/nova-specs/+/742785

3) Cyborg NIC driver: merged
https://review.opendev.org/c/openstack/cyborg/+/758942

4) neutron patch set: merged
https://review.opendev.org/q/topic:%22bug%252F1906602%22+

5) neutron-lib: merged
Add new VNIC types for Cyborg provisioned ports
https://review.opendev.org/c/openstack/neutron-lib/+/768324

6) neutron ml2 plugin support: merged
[SR-IOV] Add support for ACCELERATOR_DIRECT VNIC type
https://review.opendev.org/c/openstack/neutron/+/779292

Yongli He
Regards

From hberaud at redhat.com Fri Mar 12 08:49:11 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Fri, 12 Mar 2021 09:49:11 +0100
Subject: [release] Release countdown for week R-5 Mar 15 - Mar 19
Message-ID: 

Development Focus
-----------------
We just passed feature freeze! Until release branches are cut, you should stop accepting featureful changes to deliverables following the cycle-with-rc release model, or to libraries. Exceptions should be discussed on separate threads on the mailing-list, and feature freeze exceptions approved by the team's PTL. Focus should be on finding and fixing release-critical bugs, so that release candidates and final versions of the Wallaby deliverables can be proposed, well ahead of the final Wallaby release date (14 April, 2021).

General Information
-------------------
We are still finishing up processing a few release requests, but the Wallaby release requirements are now frozen.
If new library releases are needed to fix release-critical bugs in Wallaby, you must request a Requirements Freeze Exception (RFE) from the requirements team before we can do a new release to avoid having something released in Wallaby that is not actually usable. This is done by posting to the openstack-discuss mailing list with a subject line similar to: [$PROJECT][requirements] RFE requested for $PROJECT_LIB Include justification/reasoning for why a RFE is needed for this lib. If/when the requirements team OKs the post-freeze update, we can then process a new release. A soft String freeze is now in effect, in order to let the I18N team do the translation work in good conditions. In Horizon and the various dashboard plugins, you should stop accepting changes that modify user-visible strings. Exceptions should be discussed on the mailing-list. By 5 April, 2021 this will become a hard string freeze, with no changes in user-visible strings allowed. Actions ------- stable/wallaby branches should be created soon for all not-already-branched libraries. You should expect 2-3 changes to be proposed for each: a .gitreview update, a reno update (skipped for projects not using reno), and a tox.ini constraints URL update. Please review those in priority so that the branch can be functional ASAP. The Prelude section of reno release notes is rendered as the top level overview for the release. Any important overall messaging for Wallaby changes should be added there to make sure the consumers of your release notes see them. Finally, if you haven't proposed Wallaby cycle-highlights yet, you are already late to the party. Please see http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020714.html for details. Upcoming Deadlines & Dates -------------------------- RC1 deadline: 22 March, 2021 (R-3 week) Final RC deadline: 5 April, 2021 (R-1 week) Final Wallaby release: 14 April, 2021 Xena PTG: 19 - 23 April, 2021 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Mar 12 09:31:24 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 12 Mar 2021 10:31:24 +0100 Subject: [Nova][FFE] libvirt vdpa support In-Reply-To: <4f89537fa9972b37014e24f332e6a22e34f233f3.camel@redhat.com> References: <4f89537fa9972b37014e24f332e6a22e34f233f3.camel@redhat.com> Message-ID: On Thu, Mar 11, 2021 at 22:04, Sean Mooney wrote: > Hi everyone > its that time again where feature are almost ready but not quite. 
> vDPA (vHost data path acceleration) is an option kernel device > offload api > that allows virtio complient software devices such as a virtio net > interface to be > offloaded to a software or hardware acclerator such as a nic. > the full detail can be found in the spec: > https://specs.openstack.org/openstack/nova-specs/specs/wallaby/approved/libvirt-vdpa-support.html > > The series is up for review and gibi and stephen have been activly > reviewing it over the last few day. > https://review.opendev.org/q/topic:%22vhost-vdpa%22+(status:open%20OR%20status:merged) > > while the majority of the code now has +2s it wont be approved before > the end of the day so i am formal asking for > a feature freeze exception to allow an addtional day or to finalise > the feature. > > the current status of the seriese id that there is one bug related to > claiming pci device and marking the parent as unavaiable > and claiming the parent and marking the child VF/VDPA devices as > unavaiable. > i have adressed this locally and will be pushing a patch to adress > that later this evening once i add unit tests for thoes > edgecaes. > > there is one patch that is pendeing to block unsupported lifecycle > operation which i will be writing tomorrow. > that final patch will also document them both in a release note and > in the api-guide. > > in the evnet the full seriese is not deamed to be ready in time i > have also written seperate patch to block booting vms with VDPA ports > now that neutron supports it but nova does not, > https://review.opendev.org/c/openstack/nova/+/780065 if we do not > merge the full serise > i would like to ask that we merge teh first 3 patches. > > openstack/nova: master: block vm boot with vdpa ports > openstack/nova: master: objects: Add 'VDPA' to 'PciDeviceType' > openstack/nova: master: add constants for vnic type vdpa > > the reason for block patch is that now that neutron support vnic-type > vdpa i belive its possibel to create a port of that type > and for nova to boot a vm however it will not actully use vdpa. i > belvie it will take this else branch > https://github.com/openstack/nova/blob/master/nova/network/os_vif_util.py#L349-L357 > and result in the vm booting with a stansard ovs > prot as if it was vnic-type=normal which will be confusing to debug > since you were expecting to get hardware offloaed > ovs but just got a standard ovs port instead without hardware > offloads. operators can also block vnic-type vdpa in the neutron > config. > https://github.com/openstack/neutron/blob/a9fc746249cd34cb7cc594c0b4d74d8ddf65bd46/neutron/conf/plugins/ml2/drivers/openvswitch/mech_ovs_conf.py#L21-L37 > but i think the nova solution is nicer since it does not require them > to update there configs if this feature is not complted in Wallaby. > > i do belive we can complete this feature with only a small amount of > addtional work > and can continue to add more testing and lifecycle operations next > cycle. > stephen has submited > https://review.opendev.org/c/openstack/nova/+/780112 to start > extending the functional > test framework to emulate vdpa devices so that we can harden the > validation in the futrue via functional tests > in addtion to the existing unit test coverage. i have not reviewed it > yet so cant say how long that will take > but i think that is something we can work and ensure it merged early > next cycle if its not completed early next week. Both Stephen and I on board to continue reviewing the VDPA series further. 
I just did my review round this morning and the implementation looks good to me, the bug we see yesterday has been fixed. I have some minor comments, mostly cosmetic. I can review the ops blocking patch today and that will be the last functional patch in the series that needs to land. The reno patch can be merged later without risk. The functional tests also can be landed next week without risk. I know that Sean did testing with real hardware locally so I'm pretty confident about the series. So I'm OK with accepting the FFE request. If anybody has objection please reply. Cheers, gibi > > regards > sean. > > > > > From balazs.gibizer at est.tech Fri Mar 12 09:51:25 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 12 Mar 2021 10:51:25 +0100 Subject: [Nova][FFE] Smart-nic support In-Reply-To: <83cf823f-04ab-4f3a-a6c3-3bca365303df@intel.com> References: <83cf823f-04ab-4f3a-a6c3-3bca365303df@intel.com> Message-ID: On Fri, Mar 12, 2021 at 15:18, yonglihe wrote: > Hi, Everyone > > Smart nics management involved Nova, Neutron and Cyborg. After 2 > releases of discussing, coding and reviewing, we have merged Cyborg > and Neutron support. Nova patches also got lots of attention and had > many rounds of review, plus several +2. > > I hope we could merge Nova patches also in this release. The > following is the Nova patches topic, individual patches links, and > related resources links > > Now we blocked by lots of comments when reaching to the end of the > feature freeze. Those comments could be addressed in 2-3 days, then > we could get 3 more rounds review, very likely we could make it by > RC1. > > Those comments include: > • functional test -- released already. > • minor typo, text changes, and needs more code comments > • several codes flow refactor request > • how to store arq-uuid > > > ==nova patch topic link: > https://review.opendev.org/q/topic:%22bp%252Fsriov-smartnic-support%22+(status:open%20OR%20status:merged) > > ==nova individual patch link: > 1) smartnic support - cyborg drive: two +2 > https://review.opendev.org/c/openstack/nova/+/771362 > > 2) smartnic support - new vnic type: rounds review > https://review.opendev.org/c/openstack/nova/+/771363 > > 3) smartnic support: main patch rounds review, one +2 > https://review.opendev.org/c/openstack/nova/+/758944 > > 4) smartnic support - reject server move and suspend, one +2 > https://review.opendev.org/c/openstack/nova/+/779913 > > 5) smartnic support - functional tests > https://review.opendev.org/c/openstack/nova/+/780147 > > > == resource: > 1) blueprint sriov-smartnic-support > https://blueprints.launchpad.net/nova/+spec/sriov-smartnic-support > > 2) Approved spec: merged > https://review.opendev.org/c/openstack/nova-specs/+/742785 > > 3) Cyborg NIC driver: merged > https://review.opendev.org/c/openstack/cyborg/+/758942 > > 4) neutron patch sets: merged > neutron patch set: merged > https://review.opendev.org/q/topic:%22bug%252F1906602%22+ > > 5) neutron-lib: merged > Add new VNIC types for Cyborg provisioned ports > https://review.opendev.org/c/openstack/neutron-lib/+/768324 > > 6) neutron ml2 plugin support: merged > [SR-IOV] Add support for ACCELERATOR_DIRECT VNIC type > https://review.opendev.org/c/openstack/neutron/+/779292 We talked about this on IRC[1] this morning. In summary there are couple of sizable changes needed in the series which pushes the expected readiness of the patches to mid next week. As the series has OVO changes we feel this is too risky to merge close to RC1. 
So we agreed with Yongli to defer this feature to Xena. Cheers, gibi [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2021-03-12.log.html#t2021-03-12T09:12:23 > > Yongli He > Regards > > From balazs.gibizer at est.tech Fri Mar 12 10:26:37 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 12 Mar 2021 11:26:37 +0100 Subject: [nova][placement] Wallaby release In-Reply-To: <8KLAPQ.YM5K5P44TRVX2@est.tech> References: <8KLAPQ.YM5K5P44TRVX2@est.tech> Message-ID: Hi, So we hit Feature Freeze yesterday. I've update launchpad blueprint statuses to reflect reality. There are couple of series that was approved before the freeze but haven't landed yet. We are pushing these through the gate: * pci-socket-affinity https://review.opendev.org/c/openstack/nova/+/772779 * port-scoped-sriov-numa-affinity https://review.opendev.org/c/openstack/nova/+/773792 * https://review.opendev.org/q/topic:bp/compact-db-migrations-wallaby+status:open * https://review.opendev.org/q/topic:bp/allow-secure-boot-for-qemu-kvm-guests+status:open We are finishing up the work on https://review.opendev.org/q/topic:vhost-vdpa+status:open where FFE is being requested. Cheers, gibi On Mon, Mar 1, 2021 at 14:31, Balazs Gibizer wrote: > Hi, > > We are getting close to the Wallaby release. So I create a tracking > etherpad[1] with the schedule and TODOs. > > One thing that I want to highlight is that we will hit Feature Freeze > on 11th of March. As the timeframe between FF and RC1 is short I'm > not plannig with FFEs. Patches that are approved before 11 March EOB > can be rechecked or rebased if needed and then re-approved. If you > have a patch that is really close but not approved before the > deadline, and you think there are two cores that willing to review it > before RC1, then please send a mail to the ML with [nova][FFE] > subject prefix not later than 16th of March EOB. > > Cheers, > gibi > > [1] https://etherpad.opendev.org/p/nova-wallaby-rc-potential > > > From luke.camilleri at zylacomputing.com Fri Mar 12 11:30:15 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Fri, 12 Mar 2021 12:30:15 +0100 Subject: [keystone][ops]Keystone enforced policy issues - Victoria In-Reply-To: References: Message-ID: Updated subject with tags On 11/03/2021 22:39, Luke Camilleri wrote: > Hi there, we have been troubleshooting keystone policies for a couple > of days now (Victoria) and would like to reach out to anyone with some > experience on the keystone policies. Basically we would like to follow > the standard admin, member and reader roles together with the scopes > function and have therefore enabled the below two option in keystone.conf > > enforce_scope = true > enforce_new_defaults = true > > once enabled we started seeing the below error related to token > validation in keystone.log: > > 2021-03-11 19:50:12.009 1047463 WARNING > keystone.server.flask.application > [req-33cda154-1d54-447e-8563-0676dc5d8471 > 020bd854741e4ba69c87d4142cad97a5 c584121f9d1a4334a3853c61bb5b9d93 - > default default] You are not authorized to perform the requested > action: identity:validate_token.: keystone.exception.ForbiddenAction: > You are not authorized to perform the requested action: > identity:validate_token. 
> > The policy was previously setup as: > > identity:validate_token: rule:service_admin_or_token_subject > > but has now been implemented with the new scope format to: > > identity:validate_token: (role:reader and system_scope:all) or > rule:service_role or rule:token_subject > > If we change the policy to the old one, we stop receiving the > identity:validate_token exception. This only happens in horizon and > running the same commands in the CLI (python-openstack-client) does > not output any errors. To work around this behavior we place the old > policy for validate_token rule in > /usr/share/openstack-dashboard/openstack_dashboard/conf/keystone_policy.yaml > and in the file /etc/keystone/keystone.conf > > A second way how we can solve the validate_token exception is to > disable the option which has been enabled above "enforce_new_defaults > = true" which will obviously allow the deprecated policy rules to > become effective and hence has the same behavior as implementing the > old policy as we did > > We would like to know if anyone have had this behavior and how it has > been solved maybe someone can point us in the right direction to > identify better what is going on. > > Last but not least, it seems that from Horizon, the admin user is > being assigned a project scoped token instead of a system scoped > token, while from the CLI the same admin user can successfully issue a > system:all token and run commands across all resources. We would be > very happy to receive any form of input related to the above issues we > are facing > > Thanks in advance for any assistance > From hberaud at redhat.com Fri Mar 12 11:52:10 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 12 Mar 2021 12:52:10 +0100 Subject: [nova][placement] Wallaby release In-Reply-To: References: <8KLAPQ.YM5K5P44TRVX2@est.tech> Message-ID: Ack (from the release team). Le ven. 12 mars 2021 à 11:29, Balazs Gibizer a écrit : > Hi, > > So we hit Feature Freeze yesterday. I've update launchpad blueprint > statuses to reflect reality. > > There are couple of series that was approved before the freeze but > haven't landed yet. We are pushing these through the gate: > > * pci-socket-affinity > https://review.opendev.org/c/openstack/nova/+/772779 > * port-scoped-sriov-numa-affinity > https://review.opendev.org/c/openstack/nova/+/773792 > * > > https://review.opendev.org/q/topic:bp/compact-db-migrations-wallaby+status:open > * > > https://review.opendev.org/q/topic:bp/allow-secure-boot-for-qemu-kvm-guests+status:open > > We are finishing up the work on > https://review.opendev.org/q/topic:vhost-vdpa+status:open where FFE is > being requested. > > Cheers, > gibi > > > On Mon, Mar 1, 2021 at 14:31, Balazs Gibizer > wrote: > > Hi, > > > > We are getting close to the Wallaby release. So I create a tracking > > etherpad[1] with the schedule and TODOs. > > > > One thing that I want to highlight is that we will hit Feature Freeze > > on 11th of March. As the timeframe between FF and RC1 is short I'm > > not plannig with FFEs. Patches that are approved before 11 March EOB > > can be rechecked or rebased if needed and then re-approved. If you > > have a patch that is really close but not approved before the > > deadline, and you think there are two cores that willing to review it > > before RC1, then please send a mail to the ML with [nova][FFE] > > subject prefix not later than 16th of March EOB. 
> > > > Cheers, > > gibi > > > > [1] https://etherpad.opendev.org/p/nova-wallaby-rc-potential > > > > > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Mar 12 15:07:27 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 12 Mar 2021 09:07:27 -0600 Subject: [openstack-community] [victoria][keystone][ops]Keystone enforced policy issues In-Reply-To: <95b2b307-64de-c843-0744-1126718cf24a@zylacomputing.com> References: <95b2b307-64de-c843-0744-1126718cf24a@zylacomputing.com> Message-ID: <51EA38A2-457F-435E-AB96-3C547D393F42@demarco.com> Adding the OpenStack discuss list > On Mar 12, 2021, at 6:34 AM, Luke Camilleri wrote: > > Hi there, we have been troubleshooting keystone policies for a couple of days now (Victoria) and would like to reach out to anyone with some experience on the keystone policies. Basically we would like to follow the standard admin, member and reader roles together with the scopes function and have therefore enabled the below two option in keystone.conf > > enforce_scope = true > enforce_new_defaults = true > > once enabled we started seeing the below error related to token validation in keystone.log: > > 2021-03-11 19:50:12.009 1047463 WARNING keystone.server.flask.application [req-33cda154-1d54-447e-8563-0676dc5d8471 020bd854741e4ba69c87d4142cad97a5 c584121f9d1a4334a3853c61bb5b9d93 - default default] You are not authorized to perform the requested action: identity:validate_token.: keystone.exception.ForbiddenAction: You are not authorized to perform the requested action: identity:validate_token. > > The policy was previously setup as: > > identity:validate_token: rule:service_admin_or_token_subject > > but has now been implemented with the new scope format to: > > identity:validate_token: (role:reader and system_scope:all) or rule:service_role or rule:token_subject > > If we change the policy to the old one, we stop receiving the identity:validate_token exception. This only happens in horizon and running the same commands in the CLI (python-openstack-client) does not output any errors. 
To work around this behavior we place the old policy for validate_token rule in /usr/share/openstack-dashboard/openstack_dashboard/conf/keystone_policy.yaml and in the file /etc/keystone/keystone.conf > > A second way how we can solve the validate_token exception is to disable the option which has been enabled above "enforce_new_defaults = true" which will obviously allow the deprecated policy rules to become effective and hence has the same behavior as implementing the old policy as we did > > We would like to know if anyone have had this behavior and how it has been solved maybe someone can point us in the right direction to identify better what is going on. > > Last but not least, it seems that from Horizon, the admin user is being assigned a project scoped token instead of a system scoped token, while from the CLI the same admin user can successfully issue a system:all token and run commands across all resources. We would be very happy to receive any form of input related to the above issues we are facing > > Thanks in advance for any assistance > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community From kennelson11 at gmail.com Fri Mar 12 19:01:47 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 12 Mar 2021 11:01:47 -0800 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Hello All! I know we are past the deadline, but I wanted to do one final call. If you have highlights you want included in the release marketing for Wallaby, you must have patches pushed to the releases repo by Sunday March 14th at 6:00 UTC. If you can't have it pushed by then but want to be included, please contact me directly. Thanks! -Kendall (diablo_rojo) On Thu, Feb 25, 2021 at 2:59 PM Kendall Nelson wrote: > Hello Everyone! > > It's time to start thinking about calling out 'cycle-highlights' in your > deliverables! I have no idea how we are here AGAIN ALREADY, alas, here we > be. > > As PTLs, you probably get many pings towards the end of every release > cycle by various parties (marketing, management, journalists, etc) asking > for highlights of what is new and what significant changes are coming in > the new release. By putting them all in the same place it makes them easy > to reference because they get compiled into a pretty website like this from > the last few releases: Stein[1], Train[2]. > > We don't need a fully fledged marketing message, just a few highlights > (3-4 ideally), from each project team. Looking through your release notes > might be a good place to start. > > *The deadline for cycle highlights is the end of the R-5 week [3] (next > week) on March 12th.* > > How To Reminder: > ------------------------- > > Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the > openstack/releases repo like this: > > cycle-highlights: > - Introduced new service to use unused host to mine bitcoin. > > The formatting options for this tag are the same as what you are probably > used to with Reno release notes. > > Also, you can check on the formatting of the output by either running > locally: > > tox -e docs > > And then checking the resulting doc/build/html/$RELEASE/highlights.html > file or the output of the build-openstack-sphinx-docs job under > html/$RELEASE/highlights.html. > > Feel free to add me as a reviewer on your patches. > > Can't wait to see you all have accomplished this release! 
> > Thanks :) > > -Kendall Nelson (diablo_rojo) > > [1] https://releases.openstack.org/stein/highlights.html > [2] https://releases.openstack.org/train/highlights.html > [3] htt > > https://releases.openstack.org/wallaby/schedule.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Mar 12 19:05:36 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 12 Mar 2021 12:05:36 -0700 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Wed, Mar 10, 2021 at 10:36 AM Wesley Hayutin wrote: > > > On Tue, Mar 9, 2021 at 5:13 PM Wesley Hayutin wrote: > >> >> >> On Mon, Mar 8, 2021 at 12:46 AM Marios Andreou wrote: >> >>> >>> >>> On Mon, Mar 8, 2021 at 1:27 AM Wesley Hayutin >>> wrote: >>> >>>> >>>> >>>> On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy >>>> wrote: >>>> >>>>> Hello All, >>>>> >>>>> Just a reminder that we will be starting to implement steps to migrate >>>>> from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. >>>>> >>>>> The plan is outlined in: >>>>> https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday >>>>> >>>>> In summary, on Sunday, we plan to: >>>>> - Move the master integration line for promotions to build containers >>>>> and images on centos-8 stream nodes >>>>> - Change the release files to bring down centos-8 stream repos for use >>>>> in test jobs (test jobs will still start on centos-8 nodes - changing this >>>>> nodeset will happen later) >>>>> - Image build and container build check jobs will be moved to >>>>> non-voting during this transition. >>>>> >>>> >>>>> We have already run all the test jobs in RDO with centos-8 stream >>>>> content running on centos-8 nodes to prequalify this transition. >>>>> >>>>> We will update this list with status as we go forward with next steps. >>>>> >>>>> Thanks! >>>>> >>>> >>>> OK... status update. >>>> >>>> Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure >>>> Monday wasn't a disaster upstream. TripleO master jobs have successfully >>>> been migrated to CentOS-8-Stream today. You should see "8-stream" now in >>>> /etc/yum.repos.d/tripleo-centos.* repos. >>>> >>>> >>> >>> \o/ this is fantastic! >>> >>> nice work all thanks to everyone involved for getting this done with >>> minimal disruption >>> >>> tripleo-ci++ >>> >>> >>> >>> >>> >>>> Your CentOS-8-Stream Master hash is: >>>> >>>> edd46672cb9b7a661ecf061942d71a72 >>>> >>>> Your master repos are: >>>> https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo >>>> >>>> Containers, and overcloud images should all be centos-8-stream. >>>> >>>> The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. >>>> >>>> Please open launchpad bugs w/ the "alert" tag if you are having any issues. >>>> >>>> Thanks and well done all! >>>> >>>> >>>> >>>>> >>>>> >>>>> >>>>> >>>> >> OK.... stable/victoria will start to migrate this evening to >> centos-8-stream >> >> We are looking to promote the following [1]. Again if you hit any >> issues, please just file a launchpad bug w/ the "alert" tag. >> >> Thanks >> >> >> [1] >> https://trunk.rdoproject.org/api-centos8-victoria/api/civotes_agg_detail.html?ref_hash=457ea897ac3b7552b82c532adcea63f0 >> >> > > OK... stable/victoria is now on centos-8-stream. > Holler via launchpad if you hit something... 
> now we're working on stable/ussuri :) > OK.. stable/ussuri and stable/train will be converted to use centos-8-stream shortly just waiting on two patches in the gate [1]. Once they are merged we'll be updating zuul to start with a centos-8-stream node as well. The hard part is just about done. Again, thanks to Ananya and Ronelle! [1] https://review.opendev.org/c/openstack/tripleo-quickstart/+/779803/ https://review.opendev.org/c/openstack/tripleo-quickstart/+/779799/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ntt7 at psu.edu Fri Mar 12 19:53:33 2021 From: ntt7 at psu.edu (Tallman, Nathan) Date: Fri, 12 Mar 2021 19:53:33 +0000 Subject: Status of Qinling Message-ID: <79C191AF-0BE2-486C-9D21-1F158575A991@psu.edu> Hello OpenStack community, I'm trying to find out if Qinling is alive or dead. The GitHub repo says it's archived and no longer maintained, but the project page on openstack.org doesn't mention anything about Qinling being abandoned. Can anyone shed light on this for me? Thanks, Nathan -- Nathan Tallman Digital Preservation Librarian Penn State University Libraries (814) 865-0860 ntt7 at psu.edu Schedule a Meeting Chat with me on Teams [Microsoft Teams Logo] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 360 bytes Desc: image001.png URL: From fungi at yuggoth.org Fri Mar 12 20:34:47 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 12 Mar 2021 20:34:47 +0000 Subject: Status of Qinling In-Reply-To: <79C191AF-0BE2-486C-9D21-1F158575A991@psu.edu> References: <79C191AF-0BE2-486C-9D21-1F158575A991@psu.edu> Message-ID: <20210312203446.wekaefvvqwotf3y3@yuggoth.org> On 2021-03-12 19:53:33 +0000 (+0000), Tallman, Nathan wrote: > I'm trying to find out if Qinling is alive or dead. The GitHub > repo https://opendev.org/openstack/qinling/src/branch/master/README.rst > says it's archived and no longer maintained, but the project > page https://www.openstack.org/software/releases/rocky/components/qinling > on openstack.org doesn't mention anything about Qinling being > abandoned. Can anyone shed light on this for me? "Dead" (unless someone resurrects it). Rocky was roughly 2.5 years ago, which is the information you're looking at on that software page. The most recent release was Victoria and the upcoming one in a few weeks is Wallaby (release names go in English alphabetical order). The Qinling project was officially retired when https://review.opendev.org/764523 merged on 2020-12-09, so Victoria was the last official OpenStack release to include it: https://releases.openstack.org/victoria/ It will not be included in the Wallaby release. If there is interest in reviving Qinling, the OpenStack Technical Committee might consider allowing its inclusion in future releases. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ignaziocassano at gmail.com Fri Mar 12 06:43:22 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 12 Mar 2021 07:43:22 +0100 Subject: [stein][neutron] gratuitous arp In-Reply-To: <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> Message-ID: Hello Tobias, the result is the same as your. I do not know what happens in depth to evaluate if the behavior is the same. I solved on stein with patch suggested by Sean : force_legacy_port_bind workaround. So I am asking if the problem exists also on train. Ignazio Il Gio 11 Mar 2021, 19:27 Tobias Urdin ha scritto: > Hello, > > > Not sure if you are having the same issue as us, but we are following > https://bugs.launchpad.net/neutron/+bug/1901707 but > > are patching it with something similar to > https://review.opendev.org/c/openstack/nova/+/741529 to workaround the > issue until it's completely solved. > > > Best regards > > ------------------------------ > *From:* Ignazio Cassano > *Sent:* Wednesday, March 10, 2021 7:57:21 AM > *To:* Sean Mooney > *Cc:* openstack-discuss; Slawek Kaplonski > *Subject:* Re: [stein][neutron] gratuitous arp > > Hello All, > please, are there news about bug 1815989 ? > On stein I modified code as suggested in the patches. > I am worried when I will upgrade to train: wil this bug persist ? > On which openstack version this bug is resolved ? > Ignazio > > > > Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > >> Hello, I tried to update to last stein packages on yum and seems this bug >> still exists. >> Before the yum update I patched some files as suggested and and ping to >> vm worked fine. >> After yum update the issue returns. >> Please, let me know If I must patch files by hand or some new parameters >> in configuration can solve and/or the issue is solved in newer openstack >> versions. >> Thanks >> Ignazio >> >> >> Il Mer 29 Apr 2020, 19:49 Sean Mooney ha scritto: >> >>> On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: >>> > Many thanks. >>> > Please keep in touch. >>> here are the two patches. >>> the first https://review.opendev.org/#/c/724386/ is the actual change >>> to add the new config opition >>> this needs a release note and some tests but it shoudl be functional >>> hence the [WIP] >>> i have not enable the workaround in any job in this patch so the ci run >>> will assert this does not break >>> anything in the default case >>> >>> the second patch is https://review.opendev.org/#/c/724387/ which >>> enables the workaround in the multi node ci jobs >>> and is testing that live migration exctra works when the workaround is >>> enabled. >>> >>> this should work as it is what we expect to happen if you are using a >>> moderne nova with an old neutron. >>> its is marked [DNM] as i dont intend that patch to merge but if the >>> workaround is useful we migth consider enableing >>> it for one of the jobs to get ci coverage but not all of the jobs. >>> >>> i have not had time to deploy a 2 node env today but ill try and test >>> this locally tomorow. 
>>> >>> >>> >>> > Ignazio >>> > >>> > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney < >>> smooney at redhat.com> >>> > ha scritto: >>> > >>> > > so bing pragmatic i think the simplest path forward given my other >>> patches >>> > > have not laned >>> > > in almost 2 years is to quickly add a workaround config option to >>> disable >>> > > mulitple port bindign >>> > > which we can backport and then we can try and work on the actual fix >>> after. >>> > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that >>> shoudl >>> > > serve as a workaround >>> > > for thos that hav this issue but its a regression in functionality. >>> > > >>> > > i can create a patch that will do that in an hour or so and submit a >>> > > followup DNM patch to enabel the >>> > > workaound in one of the gate jobs that tests live migration. >>> > > i have a meeting in 10 mins and need to finish the pacht im >>> currently >>> > > updating but ill submit a poc once that is done. >>> > > >>> > > im not sure if i will be able to spend time on the actul fix which i >>> > > proposed last year but ill see what i can do. >>> > > >>> > > >>> > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: >>> > > > PS >>> > > > I have testing environment on queens,rocky and stein and I can >>> make test >>> > > > as you need. >>> > > > Ignazio >>> > > > >>> > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < >>> > > > ignaziocassano at gmail.com> ha scritto: >>> > > > >>> > > > > Hello Sean, >>> > > > > the following is the configuration on my compute nodes: >>> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt >>> > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 >>> > > > > libvirt-libs-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 >>> > > > > libvirt-client-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 >>> > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 >>> > > > > libvirt-python-4.5.0-1.el7.x86_64 >>> > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 >>> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu >>> > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 >>> > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 >>> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 >>> > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch >>> > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch >>> > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 >>> > > > > >>> > > > > >>> > > > > As far as firewall driver >>> > > >>> > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: >>> > > > > >>> > > > > 
firewall_driver = iptables_hybrid >>> > > > > >>> > > > > I have same libvirt/qemu version on queens, on rocky and on stein >>> > > >>> > > testing >>> > > > > environment and the >>> > > > > same firewall driver. >>> > > > > Live migration on provider network on queens works fine. >>> > > > > It does not work fine on rocky and stein (vm lost connection >>> after it >>> > > >>> > > is >>> > > > > migrated and start to respond only when the vm send a network >>> packet , >>> > > >>> > > for >>> > > > > example when chrony pools the time server). >>> > > > > >>> > > > > Ignazio >>> > > > > >>> > > > > >>> > > > > >>> > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < >>> > > >>> > > smooney at redhat.com> >>> > > > > ha scritto: >>> > > > > >>> > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: >>> > > > > > > Hello, some updated about this issue. >>> > > > > > > I read someone has got same issue as reported here: >>> > > > > > > >>> > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 >>> > > > > > > >>> > > > > > > If you read the discussion, someone tells that the garp must >>> be >>> > > >>> > > sent by >>> > > > > > > qemu during live miration. >>> > > > > > > If this is true, this means on rocky/stein the qemu/libvirt >>> are >>> > > >>> > > bugged. >>> > > > > > >>> > > > > > it is not correct. >>> > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve >>> as >>> > > >>> > > its mac >>> > > > > > learning frames >>> > > > > > instead >>> > > >>> > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol >>> > > > > > >>> https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html >>> > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 >>> > > > > > >>> https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html >>> > > > > > but was fixed by >>> > > > > > >>> > > >>> > > >>> https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b >>> > > > > > can you confirm you are not using the broken 2.6.0 release and >>> are >>> > > >>> > > using >>> > > > > > 2.7 or newer or 2.4 and older. >>> > > > > > >>> > > > > > >>> > > > > > > So I tried to use stein and rocky with the same version of >>> > > >>> > > libvirt/qemu >>> > > > > > > packages I installed on queens (I updated compute and >>> controllers >>> > > >>> > > node >>> > > > > > >>> > > > > > on >>> > > > > > > queens for obtaining same libvirt/qemu version deployed on >>> rocky >>> > > >>> > > and >>> > > > > > >>> > > > > > stein). >>> > > > > > > >>> > > > > > > On queens live migration on provider network continues to >>> work >>> > > >>> > > fine. >>> > > > > > > On rocky and stein not, so I think the issue is related to >>> > > >>> > > openstack >>> > > > > > > components . >>> > > > > > >>> > > > > > on queens we have only a singel prot binding and nova blindly >>> assumes >>> > > > > > that the port binding details wont >>> > > > > > change when it does a live migration and does not update the >>> xml for >>> > > >>> > > the >>> > > > > > netwrok interfaces. 
>>> > > > > > >>> > > > > > the port binding is updated after the migration is complete in >>> > > > > > post_livemigration >>> > > > > > in rocky+ neutron optionally uses the multiple port bindings >>> flow to >>> > > > > > prebind the port to the destiatnion >>> > > > > > so it can update the xml if needed and if post copy live >>> migration is >>> > > > > > enable it will asyconsly activate teh dest port >>> > > > > > binding before post_livemigration shortenting the downtime. >>> > > > > > >>> > > > > > if you are using the iptables firewall os-vif will have >>> precreated >>> > > >>> > > the >>> > > > > > ovs port and intermediate linux bridge before the >>> > > > > > migration started which will allow neutron to wire it up (put >>> it on >>> > > >>> > > the >>> > > > > > correct vlan and install security groups) before >>> > > > > > the vm completes the migraton. >>> > > > > > >>> > > > > > if you are using the ovs firewall os-vif still precreates teh >>> ovs >>> > > >>> > > port >>> > > > > > but libvirt deletes it and recreats it too. >>> > > > > > as a result there is a race when using openvswitch firewall >>> that can >>> > > > > > result in the RARP packets being lost. >>> > > > > > >>> > > > > > > >>> > > > > > > Best Regards >>> > > > > > > Ignazio Cassano >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < >>> > > > > > >>> > > > > > smooney at redhat.com> >>> > > > > > > ha scritto: >>> > > > > > > >>> > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: >>> > > > > > > > > Hello, I have this problem with rocky or newer with >>> > > >>> > > iptables_hybrid >>> > > > > > > > > firewall. >>> > > > > > > > > So, can I solve using post copy live migration ??? >>> > > > > > > > >>> > > > > > > > so this behavior has always been how nova worked but rocky >>> the >>> > > > > > > > >>> > > > > > > > >>> > > > > > >>> > > > > > >>> > > >>> > > >>> https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html >>> > > > > > > > spec intoduced teh ablity to shorten the outage by pre >>> biding the >>> > > > > > >>> > > > > > port and >>> > > > > > > > activating it when >>> > > > > > > > the vm is resumed on the destiation host before we get to >>> pos >>> > > >>> > > live >>> > > > > > >>> > > > > > migrate. >>> > > > > > > > >>> > > > > > > > this reduces the outage time although i cant be fully >>> elimiated >>> > > >>> > > as >>> > > > > > >>> > > > > > some >>> > > > > > > > level of packet loss is >>> > > > > > > > always expected when you live migrate. >>> > > > > > > > >>> > > > > > > > so yes enabliy post copy live migration should help but be >>> aware >>> > > >>> > > that >>> > > > > > >>> > > > > > if a >>> > > > > > > > network partion happens >>> > > > > > > > during a post copy live migration the vm will crash and >>> need to >>> > > >>> > > be >>> > > > > > > > restarted. >>> > > > > > > > it is generally safe to use and will imporve the migration >>> > > >>> > > performace >>> > > > > > >>> > > > > > but >>> > > > > > > > unlike pre copy migration if >>> > > > > > > > the guess resumes on the dest and the mempry page has not >>> been >>> > > >>> > > copied >>> > > > > > >>> > > > > > yet >>> > > > > > > > then it must wait for it to be copied >>> > > > > > > > and retrive it form the souce host. 
if the connection too >>> the >>> > > >>> > > souce >>> > > > > > >>> > > > > > host >>> > > > > > > > is intrupted then the vm cant >>> > > > > > > > do that and the migration will fail and the instance will >>> crash. >>> > > >>> > > if >>> > > > > > >>> > > > > > you >>> > > > > > > > are using precopy migration >>> > > > > > > > if there is a network partaion during the migration the >>> > > >>> > > migration will >>> > > > > > > > fail but the instance will continue >>> > > > > > > > to run on the source host. >>> > > > > > > > >>> > > > > > > > so while i would still recommend using it, i it just good >>> to be >>> > > >>> > > aware >>> > > > > > >>> > > > > > of >>> > > > > > > > that behavior change. >>> > > > > > > > >>> > > > > > > > > Thanks >>> > > > > > > > > Ignazio >>> > > > > > > > > >>> > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney < >>> smooney at redhat.com> ha >>> > > > > > >>> > > > > > scritto: >>> > > > > > > > > >>> > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano >>> wrote: >>> > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm >>> migrate >>> > > > > > >>> > > > > > from one >>> > > > > > > > >>> > > > > > > > node >>> > > > > > > > > > > to another I cannot ping it for several minutes. If >>> in the >>> > > >>> > > vm I >>> > > > > > >>> > > > > > put a >>> > > > > > > > > > > script that ping the gateway continously, the live >>> > > >>> > > migration >>> > > > > > >>> > > > > > works >>> > > > > > > > >>> > > > > > > > fine >>> > > > > > > > > > >>> > > > > > > > > > and >>> > > > > > > > > > > I can ping it. Why this happens ? I read something >>> about >>> > > > > > >>> > > > > > gratuitous >>> > > > > > > > >>> > > > > > > > arp. >>> > > > > > > > > > >>> > > > > > > > > > qemu does not use gratuitous arp but instead uses an >>> older >>> > > > > > >>> > > > > > protocal >>> > > > > > > > >>> > > > > > > > called >>> > > > > > > > > > RARP >>> > > > > > > > > > to do mac address learning. >>> > > > > > > > > > >>> > > > > > > > > > what release of openstack are you using. and are you >>> using >>> > > > > > >>> > > > > > iptables >>> > > > > > > > > > firewall of openvswitch firewall. >>> > > > > > > > > > >>> > > > > > > > > > if you are using openvswtich there is is nothing we >>> can do >>> > > >>> > > until >>> > > > > > >>> > > > > > we >>> > > > > > > > > > finally delegate vif pluging to os-vif. >>> > > > > > > > > > currently libvirt handels interface plugging for >>> kernel ovs >>> > > >>> > > when >>> > > > > > >>> > > > > > using >>> > > > > > > > >>> > > > > > > > the >>> > > > > > > > > > openvswitch firewall driver >>> > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress >>> that >>> > > >>> > > but it >>> > > > > > >>> > > > > > and >>> > > > > > > > >>> > > > > > > > the >>> > > > > > > > > > neutron patch are >>> > > > > > > > > > https://review.opendev.org/#/c/640258 rather out >>> dated. >>> > > >>> > > while >>> > > > > > >>> > > > > > libvirt >>> > > > > > > > >>> > > > > > > > is >>> > > > > > > > > > pluging the vif there will always be >>> > > > > > > > > > a race condition where the RARP packets sent by qemu >>> and >>> > > >>> > > then mac >>> > > > > > > > >>> > > > > > > > learning >>> > > > > > > > > > packets will be lost. 
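To make the post copy suggestion above concrete: on the compute nodes it is a single libvirt driver flag in nova.conf, assuming libvirt/KVM with a QEMU and kernel new enough for post-copy (userfaultfd). A sketch, not a tuning guide:

[libvirt]
live_migration_permit_post_copy = True

followed by a nova-compute restart; as far as I understand it, the migration still starts as a normal pre-copy and only switches to post copy when nova decides it is not converging or completion is forced.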
>>> > > > > > > > > > >>> > > > > > > > > > if you are using the iptables firewall and you have >>> opnestack >>> > > > > > >>> > > > > > rock or >>> > > > > > > > > > later then if you enable post copy live migration >>> > > > > > > > > > it should reduce the downtime. in this conficution we >>> do not >>> > > >>> > > have >>> > > > > > >>> > > > > > the >>> > > > > > > > >>> > > > > > > > race >>> > > > > > > > > > betwen neutron and libvirt so the rarp >>> > > > > > > > > > packets should not be lost. >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > > > Please, help me ? >>> > > > > > > > > > > Any workaround , please ? >>> > > > > > > > > > > >>> > > > > > > > > > > Best Regards >>> > > > > > > > > > > Ignazio >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > >>> > > > > > >>> > > >>> > > >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Fri Mar 12 08:13:29 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Fri, 12 Mar 2021 08:13:29 +0000 Subject: [stein][neutron] gratuitous arp In-Reply-To: References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com>, Message-ID: <95ccfc366d4b497c8af232f38d07559f@binero.com> Hello, If it's the same as us, then yes, the issue occurs on Train and is not completely solved yet. Best regards ________________________________ From: Ignazio Cassano Sent: Friday, March 12, 2021 7:43:22 AM To: Tobias Urdin Cc: openstack-discuss Subject: Re: [stein][neutron] gratuitous arp Hello Tobias, the result is the same as your. I do not know what happens in depth to evaluate if the behavior is the same. I solved on stein with patch suggested by Sean : force_legacy_port_bind workaround. So I am asking if the problem exists also on train. Ignazio Il Gio 11 Mar 2021, 19:27 Tobias Urdin > ha scritto: Hello, Not sure if you are having the same issue as us, but we are following https://bugs.launchpad.net/neutron/+bug/1901707 but are patching it with something similar to https://review.opendev.org/c/openstack/nova/+/741529 to workaround the issue until it's completely solved. Best regards ________________________________ From: Ignazio Cassano > Sent: Wednesday, March 10, 2021 7:57:21 AM To: Sean Mooney Cc: openstack-discuss; Slawek Kaplonski Subject: Re: [stein][neutron] gratuitous arp Hello All, please, are there news about bug 1815989 ? On stein I modified code as suggested in the patches. I am worried when I will upgrade to train: wil this bug persist ? On which openstack version this bug is resolved ? Ignazio Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano > ha scritto: Hello, I tried to update to last stein packages on yum and seems this bug still exists. Before the yum update I patched some files as suggested and and ping to vm worked fine. After yum update the issue returns. Please, let me know If I must patch files by hand or some new parameters in configuration can solve and/or the issue is solved in newer openstack versions. Thanks Ignazio Il Mer 29 Apr 2020, 19:49 Sean Mooney > ha scritto: On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > Many thanks. > Please keep in touch. here are the two patches. 
the first https://review.opendev.org/#/c/724386/ is the actual change to add the new config opition this needs a release note and some tests but it shoudl be functional hence the [WIP] i have not enable the workaround in any job in this patch so the ci run will assert this does not break anything in the default case the second patch is https://review.opendev.org/#/c/724387/ which enables the workaround in the multi node ci jobs and is testing that live migration exctra works when the workaround is enabled. this should work as it is what we expect to happen if you are using a moderne nova with an old neutron. its is marked [DNM] as i dont intend that patch to merge but if the workaround is useful we migth consider enableing it for one of the jobs to get ci coverage but not all of the jobs. i have not had time to deploy a 2 node env today but ill try and test this locally tomorow. > Ignazio > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney > > ha scritto: > > > so bing pragmatic i think the simplest path forward given my other patches > > have not laned > > in almost 2 years is to quickly add a workaround config option to disable > > mulitple port bindign > > which we can backport and then we can try and work on the actual fix after. > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that shoudl > > serve as a workaround > > for thos that hav this issue but its a regression in functionality. > > > > i can create a patch that will do that in an hour or so and submit a > > followup DNM patch to enabel the > > workaound in one of the gate jobs that tests live migration. > > i have a meeting in 10 mins and need to finish the pacht im currently > > updating but ill submit a poc once that is done. > > > > im not sure if i will be able to spend time on the actul fix which i > > proposed last year but ill see what i can do. > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > PS > > > I have testing environment on queens,rocky and stein and I can make test > > > as you need. 
> > > Ignazio > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > ignaziocassano at gmail.com> ha scritto: > > > > > > > Hello Sean, > > > > the following is the configuration on my compute nodes: > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > As far as firewall driver > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > testing > > > > environment and the > > > > same firewall driver. > > > > Live migration on provider network on queens works fine. > > > > It does not work fine on rocky and stein (vm lost connection after it > > > > is > > > > migrated and start to respond only when the vm send a network packet , > > > > for > > > > example when chrony pools the time server). > > > > > > > > Ignazio > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > smooney at redhat.com> > > > > ha scritto: > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > Hello, some updated about this issue. > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > If you read the discussion, someone tells that the garp must be > > > > sent by > > > > > > qemu during live miration. > > > > > > If this is true, this means on rocky/stein the qemu/libvirt are > > > > bugged. > > > > > > > > > > it is not correct. 
> > > > > qemu/libvir thas alsway used RARP which predates GARP to serve as > > > > its mac > > > > > learning frames > > > > > instead > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > but was fixed by > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > can you confirm you are not using the broken 2.6.0 release and are > > > > using > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > libvirt/qemu > > > > > > packages I installed on queens (I updated compute and controllers > > > > node > > > > > > > > > > on > > > > > > queens for obtaining same libvirt/qemu version deployed on rocky > > > > and > > > > > > > > > > stein). > > > > > > > > > > > > On queens live migration on provider network continues to work > > > > fine. > > > > > > On rocky and stein not, so I think the issue is related to > > > > openstack > > > > > > components . > > > > > > > > > > on queens we have only a singel prot binding and nova blindly assumes > > > > > that the port binding details wont > > > > > change when it does a live migration and does not update the xml for > > > > the > > > > > netwrok interfaces. > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > post_livemigration > > > > > in rocky+ neutron optionally uses the multiple port bindings flow to > > > > > prebind the port to the destiatnion > > > > > so it can update the xml if needed and if post copy live migration is > > > > > enable it will asyconsly activate teh dest port > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > if you are using the iptables firewall os-vif will have precreated > > > > the > > > > > ovs port and intermediate linux bridge before the > > > > > migration started which will allow neutron to wire it up (put it on > > > > the > > > > > correct vlan and install security groups) before > > > > > the vm completes the migraton. > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh ovs > > > > port > > > > > but libvirt deletes it and recreats it too. > > > > > as a result there is a race when using openvswitch firewall that can > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > Best Regards > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > smooney at redhat.com> > > > > > > ha scritto: > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > Hello, I have this problem with rocky or newer with > > > > iptables_hybrid > > > > > > > > firewall. > > > > > > > > So, can I solve using post copy live migration ??? 
> > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky the > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > spec intoduced teh ablity to shorten the outage by pre biding the > > > > > > > > > > port and > > > > > > > activating it when > > > > > > > the vm is resumed on the destiation host before we get to pos > > > > live > > > > > > > > > > migrate. > > > > > > > > > > > > > > this reduces the outage time although i cant be fully elimiated > > > > as > > > > > > > > > > some > > > > > > > level of packet loss is > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be aware > > > > that > > > > > > > > > > if a > > > > > > > network partion happens > > > > > > > during a post copy live migration the vm will crash and need to > > > > be > > > > > > > restarted. > > > > > > > it is generally safe to use and will imporve the migration > > > > performace > > > > > > > > > > but > > > > > > > unlike pre copy migration if > > > > > > > the guess resumes on the dest and the mempry page has not been > > > > copied > > > > > > > > > > yet > > > > > > > then it must wait for it to be copied > > > > > > > and retrive it form the souce host. if the connection too the > > > > souce > > > > > > > > > > host > > > > > > > is intrupted then the vm cant > > > > > > > do that and the migration will fail and the instance will crash. > > > > if > > > > > > > > > > you > > > > > > > are using precopy migration > > > > > > > if there is a network partaion during the migration the > > > > migration will > > > > > > > fail but the instance will continue > > > > > > > to run on the source host. > > > > > > > > > > > > > > so while i would still recommend using it, i it just good to be > > > > aware > > > > > > > > > > of > > > > > > > that behavior change. > > > > > > > > > > > > > > > Thanks > > > > > > > > Ignazio > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney > ha > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano wrote: > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm migrate > > > > > > > > > > from one > > > > > > > > > > > > > > node > > > > > > > > > > to another I cannot ping it for several minutes. If in the > > > > vm I > > > > > > > > > > put a > > > > > > > > > > script that ping the gateway continously, the live > > > > migration > > > > > > > > > > works > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > and > > > > > > > > > > I can ping it. Why this happens ? I read something about > > > > > > > > > > gratuitous > > > > > > > > > > > > > > arp. > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an older > > > > > > > > > > protocal > > > > > > > > > > > > > > called > > > > > > > > > RARP > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you using > > > > > > > > > > iptables > > > > > > > > > firewall of openvswitch firewall. > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we can do > > > > until > > > > > > > > > > we > > > > > > > > > finally delegate vif pluging to os-vif. 
> > > > > > > > > currently libvirt handels interface plugging for kernel ovs > > > > when > > > > > > > > > > using > > > > > > > > > > > > > > the > > > > > > > > > openvswitch firewall driver > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress that > > > > but it > > > > > > > > > > and > > > > > > > > > > > > > > the > > > > > > > > > neutron patch are > > > > > > > > > https://review.opendev.org/#/c/640258 rather out dated. > > > > while > > > > > > > > > > libvirt > > > > > > > > > > > > > > is > > > > > > > > > pluging the vif there will always be > > > > > > > > > a race condition where the RARP packets sent by qemu and > > > > then mac > > > > > > > > > > > > > > learning > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have opnestack > > > > > > > > > > rock or > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > it should reduce the downtime. in this conficution we do not > > > > have > > > > > > > > > > the > > > > > > > > > > > > > > race > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Fri Mar 12 21:55:23 2021 From: mthode at mthode.org (Matthew Thode) Date: Fri, 12 Mar 2021 15:55:23 -0600 Subject: [requirements][all] Openstack Requirements is now frozen Message-ID: <20210312215523.wnyast62an3gg2wo@mthode.org> As the title states, requirements is now frozen, any updates to the master branch not already approved from now until requirements branches the stable/wallaby branch will require a FFE email to be submitted to this (openstack-discuss) mailing list. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Fri Mar 12 22:31:43 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 12 Mar 2021 17:31:43 -0500 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: <921d1e19-02ff-d26d-1b00-902da6b07e95@inovex.de> References: <721b4405-b19f-5433-feff-d595442ce6e4@gmail.com> <1924f447-2199-dfa4-4a70-575cda438107@inovex.de> <63f1d37c-f7fd-15b1-5f71-74e26c44ea94@inovex.de> <921d1e19-02ff-d26d-1b00-902da6b07e95@inovex.de> Message-ID: <764a5932-f9b2-a882-7992-e8c3711d8ad8@gmail.com> On 3/11/21 8:21 AM, Christian Rohmann wrote: > Hey Brian, > > On 25/02/2021 19:05, Brian Rosmaita wrote: >>> Please kindly let me know if there is anything required to get this >>> merged. >> >> We have to release wallaby os-brick next week, so highest priority >> right now are os-brick reviews, but we'll get you some feedback on >> your patch as soon as we can. > > There is one +1 from Sofia on the change now. > Let me know if there is anything else missing or needs changing. Commit message needs an update to reflect the current direction of the patch. > Is this something that still could go into Wallaby BTW? Yes indeed! Thanks for your patience. 
> Regards > > Christian > From smooney at redhat.com Fri Mar 12 22:36:51 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 12 Mar 2021 22:36:51 +0000 Subject: [stein][neutron] gratuitous arp In-Reply-To: <95ccfc366d4b497c8af232f38d07559f@binero.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> , <95ccfc366d4b497c8af232f38d07559f@binero.com> Message-ID: <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> On Fri, 2021-03-12 at 08:13 +0000, Tobias Urdin wrote: > Hello, > > If it's the same as us, then yes, the issue occurs on Train and is not completely solved yet. there is a downstream bug tracker for this https://bugzilla.redhat.com/show_bug.cgi?id=1917675 it's fixed by a combination of 3 neutron patches and i think 1 nova one https://review.opendev.org/c/openstack/neutron/+/766277/ https://review.opendev.org/c/openstack/neutron/+/753314/ https://review.opendev.org/c/openstack/neutron/+/640258/ and https://review.opendev.org/c/openstack/nova/+/770745 the first three neutron patches would fix the evacuate case but break live migration; the nova patch means live migration will work too although to fully fix the related live migration packet loss issues you need https://review.opendev.org/c/openstack/nova/+/747454/4 https://review.opendev.org/c/openstack/nova/+/742180/12 to fix live migration with network backends that don't support multiple port bindings and https://review.opendev.org/c/openstack/nova/+/602432 (the only one not merged yet) for live migration with ovs and hybrid plug=false (e.g. ovs firewall driver, noop or ovn instead of ml2/ovs). multiple port binding was not actually the reason for this; there was a race in neutron itself between the dhcp agent and l2 agent that would have happened even without multiple port binding. some of those patches have been backported already and all should eventually make it to train; they could be brought to stein potentially if people are open to backport/review them. > > > Best regards > > ________________________________ > From: Ignazio Cassano > Sent: Friday, March 12, 2021 7:43:22 AM > To: Tobias Urdin > Cc: openstack-discuss > Subject: Re: [stein][neutron] gratuitous arp > > Hello Tobias, the result is the same as your. > I do not know what happens in depth to evaluate if the behavior is the same. > I solved on stein with patch suggested by Sean : force_legacy_port_bind workaround. > So I am asking if the problem exists also on train. > Ignazio > > Il Gio 11 Mar 2021, 19:27 Tobias Urdin > ha scritto: > > Hello, > > > Not sure if you are having the same issue as us, but we are following https://bugs.launchpad.net/neutron/+bug/1901707 but > > are patching it with something similar to https://review.opendev.org/c/openstack/nova/+/741529 to workaround the issue until it's completely solved. > > > Best regards > > ________________________________ > From: Ignazio Cassano > > Sent: Wednesday, March 10, 2021 7:57:21 AM > To: Sean Mooney > Cc: openstack-discuss; Slawek Kaplonski > Subject: Re: [stein][neutron] gratuitous arp > > Hello All, > please, are there news about bug 1815989 ? > On stein I modified code as suggested in the patches. > I am worried when I will upgrade to train: wil this bug persist ? > On which openstack version this bug is resolved ?
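As a quick way to tell which of those cases applies to a given deployment, the port binding details show whether hybrid plug is in use; for example (admin credentials needed, and the port UUID below is only a placeholder):

openstack port show 0a1b2c3d-1111-2222-3333-444455556666 -c binding_vif_type -c binding_vif_details

where "ovs_hybrid_plug": true corresponds to the iptables_hybrid style plugging, and false to the native ovs firewall / ovn style that still needs the os-vif change noted above as not yet merged.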
> Ignazio > > > > Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano > ha scritto: > Hello, I tried to update to last stein packages on yum and seems this bug still exists. > Before the yum update I patched some files as suggested and and ping to vm worked fine. > After yum update the issue returns. > Please, let me know If I must patch files by hand or some new parameters in configuration can solve and/or the issue is solved in newer openstack versions. > Thanks > Ignazio > > > Il Mer 29 Apr 2020, 19:49 Sean Mooney > ha scritto: > On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > > Many thanks. > > Please keep in touch. > here are the two patches. > the first https://review.opendev.org/#/c/724386/ is the actual change to add the new config opition > this needs a release note and some tests but it shoudl be functional hence the [WIP] > i have not enable the workaround in any job in this patch so the ci run will assert this does not break > anything in the default case > > the second patch is https://review.opendev.org/#/c/724387/ which enables the workaround in the multi node ci jobs > and is testing that live migration exctra works when the workaround is enabled. > > this should work as it is what we expect to happen if you are using a moderne nova with an old neutron. > its is marked [DNM] as i dont intend that patch to merge but if the workaround is useful we migth consider enableing > it for one of the jobs to get ci coverage but not all of the jobs. > > i have not had time to deploy a 2 node env today but ill try and test this locally tomorow. > > > > > Ignazio > > > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney > > > ha scritto: > > > > > so bing pragmatic i think the simplest path forward given my other patches > > > have not laned > > > in almost 2 years is to quickly add a workaround config option to disable > > > mulitple port bindign > > > which we can backport and then we can try and work on the actual fix after. > > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that shoudl > > > serve as a workaround > > > for thos that hav this issue but its a regression in functionality. > > > > > > i can create a patch that will do that in an hour or so and submit a > > > followup DNM patch to enabel the > > > workaound in one of the gate jobs that tests live migration. > > > i have a meeting in 10 mins and need to finish the pacht im currently > > > updating but ill submit a poc once that is done. > > > > > > im not sure if i will be able to spend time on the actul fix which i > > > proposed last year but ill see what i can do. > > > > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > > PS > > > >  I have testing environment on queens,rocky and stein and I can make test > > > > as you need. 
> > > > Ignazio > > > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > > ignaziocassano at gmail.com> ha scritto: > > > > > > > > > Hello Sean, > > > > > the following is the configuration on my compute nodes: > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > > > > As far as firewall driver > > > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > > > testing > > > > > environment and the > > > > > same firewall driver. > > > > > Live migration on provider network on queens works fine. > > > > > It does not work fine on rocky and stein (vm lost connection after it > > > > > > is > > > > > migrated and start to respond only when the vm send a network packet , > > > > > > for > > > > > example when chrony pools the time server). > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > > > smooney at redhat.com> > > > > > ha scritto: > > > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > > Hello, some updated about this issue. > > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > > > If you read the discussion, someone tells that the garp must be > > > > > > sent by > > > > > > > qemu during live miration. > > > > > > > If this is true, this means on rocky/stein the qemu/libvirt are > > > > > > bugged. > > > > > > > > > > > > it is not correct. 
> > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve as > > > > > > its mac > > > > > > learning frames > > > > > > instead > > > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > > but was fixed by > > > > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > > can you confirm you are not using the broken 2.6.0 release and are > > > > > > using > > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > > > libvirt/qemu > > > > > > > packages I installed on queens (I updated compute and controllers > > > > > > node > > > > > > > > > > > > on > > > > > > > queens for obtaining same libvirt/qemu version deployed on rocky > > > > > > and > > > > > > > > > > > > stein). > > > > > > > > > > > > > > On queens live migration on provider network continues to work > > > > > > fine. > > > > > > > On rocky and stein not, so I think the issue is related to > > > > > > openstack > > > > > > > components . > > > > > > > > > > > > on queens we have only a singel prot binding and nova blindly assumes > > > > > > that the port binding details wont > > > > > > change when it does a live migration and does not update the xml for > > > > > > the > > > > > > netwrok interfaces. > > > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > > post_livemigration > > > > > > in rocky+ neutron optionally uses the multiple port bindings flow to > > > > > > prebind the port to the destiatnion > > > > > > so it can update the xml if needed and if post copy live migration is > > > > > > enable it will asyconsly activate teh dest port > > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > > > if you are using the iptables firewall os-vif will have precreated > > > > > > the > > > > > > ovs port and intermediate linux bridge before the > > > > > > migration started which will allow neutron to wire it up (put it on > > > > > > the > > > > > > correct vlan and install security groups) before > > > > > > the vm completes the migraton. > > > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh ovs > > > > > > port > > > > > > but libvirt deletes it and recreats it too. > > > > > > as a result there is a race when using openvswitch firewall that can > > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > > > smooney at redhat.com> > > > > > > > ha scritto: > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > > Hello, I have this problem with rocky or newer with > > > > > > iptables_hybrid > > > > > > > > > firewall. > > > > > > > > > So, can I solve using post copy live migration ??? 
> > > > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky the > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > > spec intoduced teh ablity to shorten the outage by pre biding the > > > > > > > > > > > > port and > > > > > > > > activating it when > > > > > > > > the vm is resumed on the destiation host before we get to pos > > > > > > live > > > > > > > > > > > > migrate. > > > > > > > > > > > > > > > > this reduces the outage time although i cant be fully elimiated > > > > > > as > > > > > > > > > > > > some > > > > > > > > level of packet loss is > > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be aware > > > > > > that > > > > > > > > > > > > if a > > > > > > > > network partion happens > > > > > > > > during a post copy live migration the vm will crash and need to > > > > > > be > > > > > > > > restarted. > > > > > > > > it is generally safe to use and will imporve the migration > > > > > > performace > > > > > > > > > > > > but > > > > > > > > unlike pre copy migration if > > > > > > > > the guess resumes on the dest and the mempry page has not been > > > > > > copied > > > > > > > > > > > > yet > > > > > > > > then it must wait for it to be copied > > > > > > > > and retrive it form the souce host. if the connection too the > > > > > > souce > > > > > > > > > > > > host > > > > > > > > is intrupted then the vm cant > > > > > > > > do that and the migration will fail and the instance will crash. > > > > > > if > > > > > > > > > > > > you > > > > > > > > are using precopy migration > > > > > > > > if there is a network partaion during the migration the > > > > > > migration will > > > > > > > > fail but the instance will continue > > > > > > > > to run on the source host. > > > > > > > > > > > > > > > > so while i would still recommend using it, i it just good to be > > > > > > aware > > > > > > > > > > > > of > > > > > > > > that behavior change. > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney > ha > > > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano wrote: > > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm migrate > > > > > > > > > > > > from one > > > > > > > > > > > > > > > > node > > > > > > > > > > > to another I cannot ping it for several minutes. If in the > > > > > > vm I > > > > > > > > > > > > put a > > > > > > > > > > > script that ping the gateway continously, the live > > > > > > migration > > > > > > > > > > > > works > > > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > > > and > > > > > > > > > > > I can ping it. Why this happens ? I read something about > > > > > > > > > > > > gratuitous > > > > > > > > > > > > > > > > arp. > > > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an older > > > > > > > > > > > > protocal > > > > > > > > > > > > > > > > called > > > > > > > > > > RARP > > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you using > > > > > > > > > > > > iptables > > > > > > > > > > firewall of openvswitch firewall. 
> > > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we can do > > > > > > until > > > > > > > > > > > > we > > > > > > > > > > finally delegate vif pluging to os-vif. > > > > > > > > > > currently libvirt handels interface plugging for kernel ovs > > > > > > when > > > > > > > > > > > > using > > > > > > > > > > > > > > > > the > > > > > > > > > > openvswitch firewall driver > > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress that > > > > > > but it > > > > > > > > > > > > and > > > > > > > > > > > > > > > > the > > > > > > > > > > neutron patch are > > > > > > > > > > https://review.opendev.org/#/c/640258 rather out dated. > > > > > > while > > > > > > > > > > > > libvirt > > > > > > > > > > > > > > > > is > > > > > > > > > > pluging the vif there will always be > > > > > > > > > > a race condition where the RARP packets sent by qemu and > > > > > > then mac > > > > > > > > > > > > > > > > learning > > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have opnestack > > > > > > > > > > > > rock or > > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > > it should reduce the downtime. in this conficution we do not > > > > > > have > > > > > > > > > > > > the > > > > > > > > > > > > > > > > race > > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From rosmaita.fossdev at gmail.com Sat Mar 13 05:00:37 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Sat, 13 Mar 2021 00:00:37 -0500 Subject: [cinder] wallaby feature freeze Message-ID: <39b4a13a-7862-b484-285a-47f00f27190a@gmail.com> The Wallaby feature freeze is now in effect. The following cinder patches have already been approved, but have not yet merged, and are granted a Feature Freeze Exception: - Add support to store volume format info in Cinder https://review.opendev.org/c/openstack/cinder/+/761152 - Add Consistency Groups support in PowerStore driver https://review.opendev.org/c/openstack/cinder/+/775090 The following patch had been approved, but ran into merge conflicts and had to be resubmitted, and requires re-review and approval. It also has been granted a FFE: - NetApp ONTAP: Implement FlexGroup pool https://review.opendev.org/c/openstack/cinder/+/776713 The third party CI is not reporting on the following patch, which otherwise would have been approved at this point. It has an FFE while this gets worked out: - Revert to snapshot for volumes on DS8000 https://review.opendev.org/c/openstack/cinder/+/773937 Any other feature patch (including driver features) must apply for an FFE by responding to this email before Tuesday, 16 March at 16:00 UTC. Note that application for an FFE is not a guarantee that an FFE will be granted. Grant of an FFE is not a guarantee that the code will be merged for Wallaby. It is expected that patches granted an FFE will be merged by 21:00 UTC on Friday 19 March. Bugfixes do not require an FFE. To be included in Wallaby, they must be merged before the first Release Candidate is cut on Thursday 25 March. 
After that point, only release-critical bugs will be allowed into the Wallaby release. From openinfradn at gmail.com Sat Mar 13 09:22:53 2021 From: openinfradn at gmail.com (open infra) Date: Sat, 13 Mar 2021 14:52:53 +0530 Subject: How to enable GPU in k8s environment Message-ID: Hi, What are the config files to deal with when enabling GPU (both nvidia and itel)? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sat Mar 13 09:28:37 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 13 Mar 2021 10:28:37 +0100 Subject: [all][tc][legal] Possible GPL violation in several places Message-ID: Dear fellow OpenStackers, it has been brought to my attention that having any (Python) imports against a GPL lib (e.g Ansible) *might be* considered linking with all the repercussions of that (copyleft anyone?). Please see the original thread in Launchpad: https://bugs.launchpad.net/kolla-ansible/+bug/1918663 Not only the projects that have "Ansible" in name might be affected, e.g. Ironic also imports Ansible parts. Do note *I am not a lawyer* so I have no idea whether the Python importing is analogous to linking in terms of GPL. I am only forwarding the concerns reported to me in Launchpad. A quick Google search was inconclusive. I have done some analysis in the original Launchpad thread mentioned above. I am copying it here for ease of reference. """ Hmm, the reasoning is interesting. OpenStack Ansible and TripleO would probably also be interested in knowing whether GPL violation is happening or not. I am not in a position to answer this question. I will propagate this matter to the mailing list: http://lists.openstack.org/pipermail/openstack-discuss/ FWIW, TripleO is by Red Hat, just like Ansible, so one would assume they know what they are doing. OTOH, there is always room for a mistake. All in all, end users must use Ansible so they must agree to GPL anyhow, so the license switch would be mostly cosmetic: +/- the OpenStack licensing requirements [1]. That said, Ansible Collections for OpenStack are already licensed under GPL [2]. And a related (and very relevant) project using Ansible - Zuul - includes a note about partial GPL [3]. A quick search [4] reveals a lot of places that could be violating GPL in OpenStack (e.g. in Ironic, base CI jobs) if we followed this linking logic. [1] https://governance.openstack.org/tc/reference/licensing.html [2] https://opendev.org/openstack/ansible-collections-openstack [3] https://opendev.org/zuul/zuul [4] https://codesearch.opendev.org/?q=(from%7Cimport)%5Cs%2B%5Cbansible%5Cb&i=nope&files=&excludeFiles=&repos= """ -yoctozepto From kira034 at 163.com Sat Mar 13 09:57:22 2021 From: kira034 at 163.com (Hongbin Lu) Date: Sat, 13 Mar 2021 17:57:22 +0800 (CST) Subject: [neutron] Bug deputy report for the week of Mar-8-2021 Message-ID: <5a38316f.209b.1782b04dece.Coremail.kira034@163.com> Hi, I was Neutron bug deputy last week. Below is a short summary about reported bugs. 
High * Functional test test_gateway_chassis_rebalance failing due to "failed to bind logical router": https://bugs.launchpad.net/neutron/+bug/1918266 * Neutron doesn't honor system-scope: https://bugs.launchpad.net/neutron/+bug/1918506 * quota: invalid JSON for reservation value when positive: https://bugs.launchpad.net/neutron/+bug/1918565 Medium * Slownesses on neutron API with many RBAC rules: https://bugs.launchpad.net/neutron/+bug/1918145 * [QoS][SR-IOV] Minimum BW dataplane enforcement fails if NIC does not support min_tx_rate: https://bugs.launchpad.net/neutron/+bug/1918464 * [OVN] BW limit QoS rules assigned to SR-IOV ports are created on NBDB: https://bugs.launchpad.net/neutron/+bug/1918702 * Toggling dhcp on and off in a subnet causes new instances to be unreachable: https://bugs.launchpad.net/neutron/+bug/1918914 Low * Driver VLAN do not create the VlanAllocation registers if "network_vlan_ranges" only specify the network: https://bugs.launchpad.net/neutron/+bug/1918274 RFE * [RFE] RFC 2317 support in Neutron with designate provider: https://bugs.launchpad.net/neutron/+bug/1918424 Won't Fix * Can't create m2 interface on the 15.3.2: https://bugs.launchpad.net/neutron/+bug/1918155 Invalid * Designate DNS – create TLD using valid Unicode string: https://bugs.launchpad.net/neutron/+bug/1918653 Incomplete * Race between sriov_config service and openib service: https://bugs.launchpad.net/neutron/+bug/1918255 * failed to name sriov vfs because the names reserved for representors: https://bugs.launchpad.net/neutron/+bug/1918397 Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Sat Mar 13 11:07:20 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Sat, 13 Mar 2021 12:07:20 +0100 Subject: [all][tc][legal] Possible GPL violation in several places In-Reply-To: References: Message-ID: Hi, Thanks for raising this. A note regarding ironic inline. On Sat, Mar 13, 2021 at 10:32 AM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > Dear fellow OpenStackers, > > it has been brought to my attention that having any (Python) imports > against a GPL lib (e.g Ansible) *might be* considered linking with all > the repercussions of that (copyleft anyone?). > > Please see the original thread in Launchpad: > https://bugs.launchpad.net/kolla-ansible/+bug/1918663 > > Not only the projects that have "Ansible" in name might be affected, > e.g. Ironic also imports Ansible parts. > Only in our ansible modules and only the BSD parts, namely https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/basic.py Dmitry > > Do note *I am not a lawyer* so I have no idea whether the Python > importing is analogous to linking in terms of GPL. I am only > forwarding the concerns reported to me in Launchpad. A quick Google > search was inconclusive. > > I have done some analysis in the original Launchpad thread mentioned > above. I am copying it here for ease of reference. > > """ > Hmm, the reasoning is interesting. OpenStack Ansible and TripleO would > probably also be interested in knowing whether GPL violation is > happening or not. I am not in a position to answer this question. I > will propagate this matter to the mailing list: > http://lists.openstack.org/pipermail/openstack-discuss/ > FWIW, TripleO is by Red Hat, just like Ansible, so one would assume > they know what they are doing. OTOH, there is always room for a > mistake. 
> All in all, end users must use Ansible so they must agree to GPL > anyhow, so the license switch would be mostly cosmetic: +/- the > OpenStack licensing requirements [1]. > That said, Ansible Collections for OpenStack are already licensed under > GPL [2]. > And a related (and very relevant) project using Ansible - Zuul - > includes a note about partial GPL [3]. > A quick search [4] reveals a lot of places that could be violating GPL > in OpenStack (e.g. in Ironic, base CI jobs) if we followed this > linking logic. > > [1] https://governance.openstack.org/tc/reference/licensing.html > [2] https://opendev.org/openstack/ansible-collections-openstack > [3] https://opendev.org/zuul/zuul > [4] > https://codesearch.opendev.org/?q=(from%7Cimport)%5Cs%2B%5Cbansible%5Cb&i=nope&files=&excludeFiles=&repos= > """ > > -yoctozepto > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Sat Mar 13 11:52:29 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Sat, 13 Mar 2021 12:52:29 +0100 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: <764a5932-f9b2-a882-7992-e8c3711d8ad8@gmail.com> References: <721b4405-b19f-5433-feff-d595442ce6e4@gmail.com> <1924f447-2199-dfa4-4a70-575cda438107@inovex.de> <63f1d37c-f7fd-15b1-5f71-74e26c44ea94@inovex.de> <921d1e19-02ff-d26d-1b00-902da6b07e95@inovex.de> <764a5932-f9b2-a882-7992-e8c3711d8ad8@gmail.com> Message-ID: <3e265fa8-e402-97f6-0244-201c67416dd0@inovex.de> On 12/03/2021 23:31, Brian Rosmaita wrote: >> There is one +1 from Sofia on the change now. >> Let me know if there is anything else missing or needs changing. > > Commit message needs an update to reflect the current direction of the > patch. Argh, stupid mistake - I fixed it. Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Mar 13 13:02:50 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 13 Mar 2021 13:02:50 +0000 Subject: [all][tc][legal] Possible GPL violation in several places In-Reply-To: References: Message-ID: <20210313130250.5oomkcnf2zs7rkhg@yuggoth.org> On 2021-03-13 10:28:37 +0100 (+0100), Radosław Piliszek wrote: [...] > And a related (and very relevant) project using Ansible - Zuul - > includes a note about partial GPL [3]. A quick search [4] reveals > a lot of places that could be violating GPL in OpenStack (e.g. in > Ironic, base CI jobs) if we followed this linking logic. [...] The reason parts of Zuul are GPL is that they actually include forks of some components of Ansible itself (for example, to be able to thoroughly redirect command outputs to a console stream). Zuul isolates the execution of those parts of its source in order to avoid causing the entire service to be GPL. Are you suggesting that files shipped in Ironic's deliverable repos directly import (in a Python sense) these GPL files? Nothing in the https://opendev.org/zuul/zuul-jobs repository is GPL. I've never seen any credible argument that merely executing a GPL program makes the calling program a derivative work. 
Also, if the Zuul jobs in project repositories really were GPL, that should still be fine according to our governance: Projects run as part of the OpenStack Infrastructure (in order to produce OpenStack software) may be licensed under any OSI-approved license. This includes tools that are run with or on OpenStack projects only during validation or testing phases of development (e.g., a source code linter). https://governance.openstack.org/tc/reference/licensing.html Anyway, we have a separate mailing list for such topics: http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss Let's please not jump to conclusions when it comes to software licenses. It's not as cut and dried as you might expect. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Sat Mar 13 14:14:13 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 13 Mar 2021 08:14:13 -0600 Subject: [tc][ptl][mistral] Leadership option for Mistral for Xena cycle Message-ID: <1782bf00412.12841034c449175.1405164895866307177@ghanshyammann.com> Hello Mistral Team, As Renat already sent an email about Mistral maintenance and help on PTL role[1], and as per TC discussion with the Mistral team, the DPL model[2] is one of the options we can try. As the Xena cycle election is completed now and Mistral is on the leaderless list[3], it is time to apply the DPL model. For this model, we need three mandatory liaisons 1. Release, 2. TACT SIG, 3. Security and these can be a single person or multiple[4]. Step to Apply: You need to propose the patch in governance with the three liaisons info; here is the example for Oslo project moving to DPL model - https://review.opendev.org/c/openstack/governance/+/757906 [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020137.html [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html [3] https://etherpad.opendev.org/p/xena-leaderless [4] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html#required-roles -gmann From gmann at ghanshyammann.com Sat Mar 13 14:31:44 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 13 Mar 2021 08:31:44 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-5 Update Message-ID: <1782c000efd.c88e04e5449483.1996158145376826099@ghanshyammann.com> Hello Everyone, Please find the week's R-5 updates on 'Migrate RBAC Policy Format from JSON to YAML' wallaby community-wide goals. Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml Updates: ======= * 8 more projects merged the patches last week, thanks to everyone for active review and merging these. * Sahara and Heat are failing on config object initialization, I am debugging those and will fix them soon. * Puppet Openstack left with the last patch which is on hold due to gnoochi upgrade in RDO[1]. * Panko patch is blocked due to failed gate[2], it is all good to merge (has +A) once the gate is green. 
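For the remaining projects, and for deployments that still carry a JSON policy file, the conversion itself is a single command provided by oslo.policy. A minimal sketch is shown below using nova as an example; it assumes the default file locations, so adjust the namespace and paths to match each service:

  oslopolicy-convert-json-to-yaml --namespace nova \
    --policy-file /etc/nova/policy.json \
    --output-file /etc/nova/policy.yaml
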
Progress Summary: =============== Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml * Projects completed: 29 * Projects required to merge the patches: 5 (Openstackm Ansible , Sahara, Heat, Puppet Openstack, Telemetry ) * Projects do not need any work: 16 [1] https://review.opendev.org/c/openstack/puppet-gnocchi/+/768690 [2] https://review.opendev.org/c/openstack/panko/+/768498 -gmann From radoslaw.piliszek at gmail.com Sat Mar 13 17:03:49 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 13 Mar 2021 18:03:49 +0100 Subject: [all][tc][legal] Possible GPL violation in several places In-Reply-To: <20210313130250.5oomkcnf2zs7rkhg@yuggoth.org> References: <20210313130250.5oomkcnf2zs7rkhg@yuggoth.org> Message-ID: Thanks Dmitry and Jeremy for your quick response. Also thanks Jeremy for replying in Launchpad as well. My answers inline. On Sat, Mar 13, 2021 at 12:07 PM Dmitry Tantsur wrote: >> Not only the projects that have "Ansible" in name might be affected, >> e.g. Ironic also imports Ansible parts. > > > Only in our ansible modules and only the BSD parts, namely https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/basic.py Yes, that's something I have missed. The main README suggests everything is GPL in there so I was confused. Perhaps this is what also confused the OP from Launchpad. On Sat, Mar 13, 2021 at 2:07 PM Jeremy Stanley wrote: > > On 2021-03-13 10:28:37 +0100 (+0100), Radosław Piliszek wrote: > [...] > > And a related (and very relevant) project using Ansible - Zuul - > > includes a note about partial GPL [3]. A quick search [4] reveals > > a lot of places that could be violating GPL in OpenStack (e.g. in > > Ironic, base CI jobs) if we followed this linking logic. > [...] > > The reason parts of Zuul are GPL is that they actually include forks > of some components of Ansible itself (for example, to be able to > thoroughly redirect command outputs to a console stream). Zuul > isolates the execution of those parts of its source in order to > avoid causing the entire service to be GPL. That's good to know and good that Zuul does it this way. Zuul was given as *the good* example. (if it was not clear from my message) > Are you suggesting that files shipped in Ironic's deliverable repos > directly import (in a Python sense) these GPL files? Nothing in the > https://opendev.org/zuul/zuul-jobs repository is GPL. I've never > seen any credible argument that merely executing a GPL program makes > the calling program a derivative work. Also, if the Zuul jobs in > project repositories really were GPL, that should still be fine > according to our governance: I'm not suggesting anything. Dmitry gave a good example that parts of Ansible are actually BSD-licensed (even though it's only obvious after inspecting each file) so it has to be analysed file-per-file then. Also, a purist may say that Ansible still "curses" such usage by GPL because, when you import in Python, you are actually executing the __init__s in the context of your software and those are licensed under GPL, especially the root one. > Anyway, we have a separate mailing list for such topics: > > http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss I have missed it. Should we post there? It looks pretty abandoned (perhaps for a good reason ;-) ). > Let's please not jump to conclusions when it comes to software > licenses. It's not as cut and dried as you might expect. 
I know, and thus expect, it to be a complex topic and that's the role of the word "possible" in the very subject. I am seeking advice. Also, for clarity, I am just propagating the original report with my extra findings. I was not aware that there could be any issue whatsoever with the licenses we have. -yoctozepto From ignaziocassano at gmail.com Sat Mar 13 17:20:37 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 13 Mar 2021 18:20:37 +0100 Subject: [Quuens][nova] lost config.drive file Message-ID: Hello, probably because an human error, we lost the file disk.config drive file for some instances under /var/lib/nova/instances/uuid directories. Please, is there something we can do for rebuild it for revert instances without rebuild ? Anycase, does the instance rebuilding regenerate the file? Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Mar 13 19:49:05 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 13 Mar 2021 19:49:05 +0000 Subject: [all][tc][legal] Possible GPL violation in several places In-Reply-To: References: <20210313130250.5oomkcnf2zs7rkhg@yuggoth.org> Message-ID: <20210313194905.fq2sh6u5bt7vb2d4@yuggoth.org> On 2021-03-13 18:03:49 +0100 (+0100), Radosław Piliszek wrote: [...] > The main README suggests everything is GPL in there so I was > confused. Perhaps this is what also confused the OP from > Launchpad. [...] I suspect (though do not know for sure) that this is why the Ansible maintainers have moved those files all into a separate directory tree. I would not be surprised if they have plans to move them into a separate repository in the future so as to provide even clearer separation. Some of the copyright license situation in there was murky a few years back, so I expect they're still working to improve things in that regard. > a purist may say that Ansible still "curses" such usage by GPL > because, when you import in Python, you are actually executing the > __init__s in the context of your software and those are licensed under > GPL, especially the root one. The way this is usually done is via a line like: from ansible.module_utils.basic import AnsibleModule I don't believe that actually loads the GPL-licensed ansible/__init__.py file, but I get a bit lost in the nuances of the several different kinds of Python package namespaces. However (and to reiterate, I'm no lawyer, this is not legal advice) what's generally important to look at in these sorts of situations is intent. Law is not like a computer program, and so strict literal interpretations are quite frequently off-base. It's fairly clear the Ansible authors intend for you to be able to import those scripts from ansible.module_utils in more permissively-licensed programs, so by doing that we're not acting counter to their wishes. > I have missed it. Should we post there? It looks pretty abandoned > (perhaps for a good reason ;-) ). [...] It's infrequently-used because such questions arise infrequently (thankfully). If anyone feels we need to start the process of soliciting an actual legal opinion on these matters though, we should re-raise the topic there initially. But before doing that, check the list archives to make sure we haven't already had this discussion in prior years, and also do a bit of research to see whether the Ansible project has already published documentation about the intended license situation for the files you're concerned about. 
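As a practical aside, anyone wanting to see exactly which parts of Ansible a given repository imports can get a quick local answer with a recursive grep over a checkout, something like the sketch below (the path is only an example):

  grep -rn --include='*.py' -E '(from|import) ansible' /path/to/the/repository
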
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From masayuki.igawa at gmail.com Sun Mar 14 00:21:52 2021 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Sun, 14 Mar 2021 09:21:52 +0900 Subject: [qa][ptg] Xena PTG Message-ID: <07a5dab0-6b3e-490d-b8a9-fcde1f762c22@www.fastmail.com> Hi, If you are planning to attend QA sessions during the Xena PTG, please fill in the doodle[1] with time slots which are good for you before 23rd March so that we can book the best time slots for most of us. And, please add any topic which you want to discuss on the etherpad[2]. [1] https://doodle.com/poll/5ixvdcbm488kc2ic [2] https://etherpad.opendev.org/p/qa-xena-ptg Best Regards, -- Masayuki Igawa From ignaziocassano at gmail.com Sat Mar 13 20:56:13 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 13 Mar 2021 21:56:13 +0100 Subject: [stein][neutron] gratuitous arp In-Reply-To: <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> <95ccfc366d4b497c8af232f38d07559f@binero.com> <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> Message-ID: Many thanks for your explanation, Sean. Ignazio Il Ven 12 Mar 2021, 23:44 Sean Mooney ha scritto: > On Fri, 2021-03-12 at 08:13 +0000, Tobias Urdin wrote: > > Hello, > > > > If it's the same as us, then yes, the issue occurs on Train and is not > completely solved yet. > there is a downstream bug trackker for this > > https://bugzilla.redhat.com/show_bug.cgi?id=1917675 > > its fixed by a combination of 3 enturon patches and i think 1 nova one > > https://review.opendev.org/c/openstack/neutron/+/766277/ > https://review.opendev.org/c/openstack/neutron/+/753314/ > https://review.opendev.org/c/openstack/neutron/+/640258/ > > and > https://review.opendev.org/c/openstack/nova/+/770745 > > the first tree neutron patches would fix the evauate case but break live > migration > the nova patch means live migration will work too although to fully fix > the related > live migration packet loss issues you need > > https://review.opendev.org/c/openstack/nova/+/747454/4 > https://review.opendev.org/c/openstack/nova/+/742180/12 > to fix live migration with network abckend that dont suppor tmultiple port > binding > and > https://review.opendev.org/c/openstack/nova/+/602432 (the only one not > merged yet.) > for live migrateon with ovs and hybridg plug=false (e.g. ovs firewall > driver, noop or ovn instead of ml2/ovs. > > multiple port binding was not actully the reason for this there was a race > in neutorn itslef that would have haapend > even without multiple port binding between the dhcp agent and l2 agent. > > some of those patches have been backported already and all shoudl > eventually make ti to train the could be brought to stine potentially > if peopel are open to backport/review them. > > > > > > > > Best regards > > > > ________________________________ > > From: Ignazio Cassano > > Sent: Friday, March 12, 2021 7:43:22 AM > > To: Tobias Urdin > > Cc: openstack-discuss > > Subject: Re: [stein][neutron] gratuitous arp > > > > Hello Tobias, the result is the same as your. > > I do not know what happens in depth to evaluate if the behavior is the > same. 
> > I solved on stein with patch suggested by Sean : force_legacy_port_bind > workaround. > > So I am asking if the problem exists also on train. > > Ignazio > > > > Il Gio 11 Mar 2021, 19:27 Tobias Urdin tobias.urdin at binero.com>> ha scritto: > > > > Hello, > > > > > > Not sure if you are having the same issue as us, but we are following > https://bugs.launchpad.net/neutron/+bug/1901707 but > > > > are patching it with something similar to > https://review.opendev.org/c/openstack/nova/+/741529 to workaround the > issue until it's completely solved. > > > > > > Best regards > > > > ________________________________ > > From: Ignazio Cassano ignaziocassano at gmail.com>> > > Sent: Wednesday, March 10, 2021 7:57:21 AM > > To: Sean Mooney > > Cc: openstack-discuss; Slawek Kaplonski > > Subject: Re: [stein][neutron] gratuitous arp > > > > Hello All, > > please, are there news about bug 1815989 ? > > On stein I modified code as suggested in the patches. > > I am worried when I will upgrade to train: wil this bug persist ? > > On which openstack version this bug is resolved ? > > Ignazio > > > > > > > > Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > > Hello, I tried to update to last stein packages on yum and seems this > bug still exists. > > Before the yum update I patched some files as suggested and and ping to > vm worked fine. > > After yum update the issue returns. > > Please, let me know If I must patch files by hand or some new parameters > in configuration can solve and/or the issue is solved in newer openstack > versions. > > Thanks > > Ignazio > > > > > > Il Mer 29 Apr 2020, 19:49 Sean Mooney smooney at redhat.com>> ha scritto: > > On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > > > Many thanks. > > > Please keep in touch. > > here are the two patches. > > the first https://review.opendev.org/#/c/724386/ is the actual change > to add the new config opition > > this needs a release note and some tests but it shoudl be functional > hence the [WIP] > > i have not enable the workaround in any job in this patch so the ci run > will assert this does not break > > anything in the default case > > > > the second patch is https://review.opendev.org/#/c/724387/ which > enables the workaround in the multi node ci jobs > > and is testing that live migration exctra works when the workaround is > enabled. > > > > this should work as it is what we expect to happen if you are using a > moderne nova with an old neutron. > > its is marked [DNM] as i dont intend that patch to merge but if the > workaround is useful we migth consider enableing > > it for one of the jobs to get ci coverage but not all of the jobs. > > > > i have not had time to deploy a 2 node env today but ill try and test > this locally tomorow. > > > > > > > > > Ignazio > > > > > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney < > smooney at redhat.com> > > > ha scritto: > > > > > > > so bing pragmatic i think the simplest path forward given my other > patches > > > > have not laned > > > > in almost 2 years is to quickly add a workaround config option to > disable > > > > mulitple port bindign > > > > which we can backport and then we can try and work on the actual fix > after. > > > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that > shoudl > > > > serve as a workaround > > > > for thos that hav this issue but its a regression in functionality. 
> > > > > > > > i can create a patch that will do that in an hour or so and submit a > > > > followup DNM patch to enabel the > > > > workaound in one of the gate jobs that tests live migration. > > > > i have a meeting in 10 mins and need to finish the pacht im > currently > > > > updating but ill submit a poc once that is done. > > > > > > > > im not sure if i will be able to spend time on the actul fix which i > > > > proposed last year but ill see what i can do. > > > > > > > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > > > PS > > > > > I have testing environment on queens,rocky and stein and I can > make test > > > > > as you need. > > > > > Ignazio > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > > > ignaziocassano at gmail.com> ha > scritto: > > > > > > > > > > > Hello Sean, > > > > > > the following is the configuration on my compute nodes: > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > > > > > > > As far as firewall driver > > > > > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > > > > > testing > > > > > > environment and the > > > > > > same firewall driver. > > > > > > Live migration on provider network on queens works fine. > > > > > > It does not work fine on rocky and stein (vm lost connection > after it > > > > > > > > is > > > > > > migrated and start to respond only when the vm send a network > packet , > > > > > > > > for > > > > > > example when chrony pools the time server). 
> > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > > > > > smooney at redhat.com> > > > > > > ha scritto: > > > > > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > > > Hello, some updated about this issue. > > > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > > > > > If you read the discussion, someone tells that the garp must > be > > > > > > > > sent by > > > > > > > > qemu during live miration. > > > > > > > > If this is true, this means on rocky/stein the qemu/libvirt > are > > > > > > > > bugged. > > > > > > > > > > > > > > it is not correct. > > > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve > as > > > > > > > > its mac > > > > > > > learning frames > > > > > > > instead > > > > > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > > > but was fixed by > > > > > > > > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > > > can you confirm you are not using the broken 2.6.0 release and > are > > > > > > > > using > > > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > > > > > libvirt/qemu > > > > > > > > packages I installed on queens (I updated compute and > controllers > > > > > > > > node > > > > > > > > > > > > > > on > > > > > > > > queens for obtaining same libvirt/qemu version deployed on > rocky > > > > > > > > and > > > > > > > > > > > > > > stein). > > > > > > > > > > > > > > > > On queens live migration on provider network continues to > work > > > > > > > > fine. > > > > > > > > On rocky and stein not, so I think the issue is related to > > > > > > > > openstack > > > > > > > > components . > > > > > > > > > > > > > > on queens we have only a singel prot binding and nova blindly > assumes > > > > > > > that the port binding details wont > > > > > > > change when it does a live migration and does not update the > xml for > > > > > > > > the > > > > > > > netwrok interfaces. > > > > > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > > > post_livemigration > > > > > > > in rocky+ neutron optionally uses the multiple port bindings > flow to > > > > > > > prebind the port to the destiatnion > > > > > > > so it can update the xml if needed and if post copy live > migration is > > > > > > > enable it will asyconsly activate teh dest port > > > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > > > > > if you are using the iptables firewall os-vif will have > precreated > > > > > > > > the > > > > > > > ovs port and intermediate linux bridge before the > > > > > > > migration started which will allow neutron to wire it up (put > it on > > > > > > > > the > > > > > > > correct vlan and install security groups) before > > > > > > > the vm completes the migraton. 
> > > > > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh > ovs > > > > > > > > port > > > > > > > but libvirt deletes it and recreats it too. > > > > > > > as a result there is a race when using openvswitch firewall > that can > > > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > > > > > smooney at redhat.com> > > > > > > > > ha scritto: > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > > > Hello, I have this problem with rocky or newer with > > > > > > > > iptables_hybrid > > > > > > > > > > firewall. > > > > > > > > > > So, can I solve using post copy live migration ??? > > > > > > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky > the > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > > > spec intoduced teh ablity to shorten the outage by pre > biding the > > > > > > > > > > > > > > port and > > > > > > > > > activating it when > > > > > > > > > the vm is resumed on the destiation host before we get to > pos > > > > > > > > live > > > > > > > > > > > > > > migrate. > > > > > > > > > > > > > > > > > > this reduces the outage time although i cant be fully > elimiated > > > > > > > > as > > > > > > > > > > > > > > some > > > > > > > > > level of packet loss is > > > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be > aware > > > > > > > > that > > > > > > > > > > > > > > if a > > > > > > > > > network partion happens > > > > > > > > > during a post copy live migration the vm will crash and > need to > > > > > > > > be > > > > > > > > > restarted. > > > > > > > > > it is generally safe to use and will imporve the migration > > > > > > > > performace > > > > > > > > > > > > > > but > > > > > > > > > unlike pre copy migration if > > > > > > > > > the guess resumes on the dest and the mempry page has not > been > > > > > > > > copied > > > > > > > > > > > > > > yet > > > > > > > > > then it must wait for it to be copied > > > > > > > > > and retrive it form the souce host. if the connection too > the > > > > > > > > souce > > > > > > > > > > > > > > host > > > > > > > > > is intrupted then the vm cant > > > > > > > > > do that and the migration will fail and the instance will > crash. > > > > > > > > if > > > > > > > > > > > > > > you > > > > > > > > > are using precopy migration > > > > > > > > > if there is a network partaion during the migration the > > > > > > > > migration will > > > > > > > > > fail but the instance will continue > > > > > > > > > to run on the source host. > > > > > > > > > > > > > > > > > > so while i would still recommend using it, i it just good > to be > > > > > > > > aware > > > > > > > > > > > > > > of > > > > > > > > > that behavior change. 
> > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney < > smooney at redhat.com> ha > > > > > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano > wrote: > > > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm > migrate > > > > > > > > > > > > > > from one > > > > > > > > > > > > > > > > > > node > > > > > > > > > > > > to another I cannot ping it for several minutes. If > in the > > > > > > > > vm I > > > > > > > > > > > > > > put a > > > > > > > > > > > > script that ping the gateway continously, the live > > > > > > > > migration > > > > > > > > > > > > > > works > > > > > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > > > > > and > > > > > > > > > > > > I can ping it. Why this happens ? I read something > about > > > > > > > > > > > > > > gratuitous > > > > > > > > > > > > > > > > > > arp. > > > > > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an > older > > > > > > > > > > > > > > protocal > > > > > > > > > > > > > > > > > > called > > > > > > > > > > > RARP > > > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you > using > > > > > > > > > > > > > > iptables > > > > > > > > > > > firewall of openvswitch firewall. > > > > > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we > can do > > > > > > > > until > > > > > > > > > > > > > > we > > > > > > > > > > > finally delegate vif pluging to os-vif. > > > > > > > > > > > currently libvirt handels interface plugging for > kernel ovs > > > > > > > > when > > > > > > > > > > > > > > using > > > > > > > > > > > > > > > > > > the > > > > > > > > > > > openvswitch firewall driver > > > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress > that > > > > > > > > but it > > > > > > > > > > > > > > and > > > > > > > > > > > > > > > > > > the > > > > > > > > > > > neutron patch are > > > > > > > > > > > https://review.opendev.org/#/c/640258 rather out > dated. > > > > > > > > while > > > > > > > > > > > > > > libvirt > > > > > > > > > > > > > > > > > > is > > > > > > > > > > > pluging the vif there will always be > > > > > > > > > > > a race condition where the RARP packets sent by qemu > and > > > > > > > > then mac > > > > > > > > > > > > > > > > > > learning > > > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have > opnestack > > > > > > > > > > > > > > rock or > > > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > > > it should reduce the downtime. in this conficution we > do not > > > > > > > > have > > > > > > > > > > > > > > the > > > > > > > > > > > > > > > > > > race > > > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tcr1br24 at gmail.com Sat Mar 13 04:10:14 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Sat, 13 Mar 2021 12:10:14 +0800 Subject: [Consultation] SFC proxy Message-ID: Dear Sir, (We made some edits from our previous mail.) We are testing a simple SFC in OpenStack (Stein) + OpenDaylight (Neon) + Open vSwitch (v2.11.1). It's an all-in-one deployment (OPNFV-9.0.0). We have read the document: https://readthedocs.org/projects/odl-sfc/downloads/pdf/latest/ and https://github.com/opnfv/sfc/blob/master/docs/release/scenarios/os-odl-sfc-noha/scenario.description.rst The driver we use is odl_v2: [image: image.png] [image: image.png] Our SFC topology: [image: enter image description here] All on the same compute node. Build SFC through API: 1. openstack sfc flow classifier create --source-ip-prefix 10.20.0.0/24 --logical-source-port p0 FC1 2. openstack sfc port pair create --description "Firewall SF instance 1" --ingress p1 --egress p1 --service-function-parameters correlation=None PP1 3. openstack sfc port pair group create --port-pair PP1 PPG1 4. openstack sfc port chain create --port-pair-group PPG1 --flow-classifier FC1 --chain-parameters correlation=nsh PC1 we expect that the encapsulation/decapsulation will be performed by the back-end driver, so that the data packet passes through the nsh-unaware VNF like a regular data packet. However, we can see that the NSH packet reaches the VNF interface (screenshot attached). [image: 123.jpg] Flow table: [image: 螢幕擷取畫面 2021-02-06 153311.png] It seems NSH header not decapsulate before entering NSH-unaware VNF. Could you give us some advice? We really appreciate it. Regards, Zack [image: Mailtrack] Sender notified by Mailtrack 03/13/21, 12:07:12 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 3606 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 29563 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 螢幕擷取畫面 2021-02-06 153311.png Type: image/png Size: 181834 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 123.jpg Type: image/jpeg Size: 566882 bytes Desc: not available URL: From hberaud at redhat.com Mon Mar 15 09:09:56 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 15 Mar 2021 10:09:56 +0100 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Hello Kendall, I assigned the topic `wallaby-cycle-highlights` to all highlights not yet merged. Let me know if it's not too late for those and if you want to continue with them. https://review.opendev.org/q/topic:%22wallaby-cycle-highlight%22+(status:open%20OR%20status:merged) Thanks Le ven. 12 mars 2021 à 20:08, Kendall Nelson a écrit : > Hello All! > > I know we are past the deadline, but I wanted to do one final call. If you > have highlights you want included in the release marketing for Wallaby, you > must have patches pushed to the releases repo by Sunday March 14th at 6:00 > UTC. > > If you can't have it pushed by then but want to be included, please > contact me directly. > > Thanks! > > -Kendall (diablo_rojo) > > On Thu, Feb 25, 2021 at 2:59 PM Kendall Nelson > wrote: > >> Hello Everyone! 
>> >> It's time to start thinking about calling out 'cycle-highlights' in your >> deliverables! I have no idea how we are here AGAIN ALREADY, alas, here we >> be. >> >> As PTLs, you probably get many pings towards the end of every release >> cycle by various parties (marketing, management, journalists, etc) asking >> for highlights of what is new and what significant changes are coming in >> the new release. By putting them all in the same place it makes them easy >> to reference because they get compiled into a pretty website like this from >> the last few releases: Stein[1], Train[2]. >> >> We don't need a fully fledged marketing message, just a few highlights >> (3-4 ideally), from each project team. Looking through your release >> notes might be a good place to start. >> >> *The deadline for cycle highlights is the end of the R-5 week [3] (next >> week) on March 12th.* >> >> How To Reminder: >> ------------------------- >> >> Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the >> openstack/releases repo like this: >> >> cycle-highlights: >> - Introduced new service to use unused host to mine bitcoin. >> >> The formatting options for this tag are the same as what you are probably >> used to with Reno release notes. >> >> Also, you can check on the formatting of the output by either running >> locally: >> >> tox -e docs >> >> And then checking the resulting doc/build/html/$RELEASE/highlights.html >> file or the output of the build-openstack-sphinx-docs job under >> html/$RELEASE/highlights.html. >> >> Feel free to add me as a reviewer on your patches. >> >> Can't wait to see you all have accomplished this release! >> >> Thanks :) >> >> -Kendall Nelson (diablo_rojo) >> >> [1] https://releases.openstack.org/stein/highlights.html >> [2] https://releases.openstack.org/train/highlights.html >> [3] htt >> >> https://releases.openstack.org/wallaby/schedule.html >> >> > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Mar 15 11:37:16 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 15 Mar 2021 12:37:16 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib Message-ID: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> Hi, I'm raising this RFE to ask if we can release, and include new version of the neutron-lib in the Wallaby release. Neutron already migrated all our policy rules to use new personas like system-reader or system-admin. Recently Lance found bug [1] with those new personas. Fix for that is now merged in neutron-lib [2]. 
This fix required bump the oslo_context dependency in lower-constraints and that lead us to bump many other dependencies versions. But in fact all those new versions are now aligned with what we already have in the Neutron's requirements so in fact neutron-lib was tested with those new versions of the packages every time it was run with Neutron master branch. I already proposed release patch for neutron-lib [3]. Please let me know if I You need anything else regrarding this RFE. [1] https://launchpad.net/bugs/1918506 [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 [3] https://review.opendev.org/c/openstack/releases/+/780550 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From laurentfdumont at gmail.com Mon Mar 15 13:31:43 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 15 Mar 2021 09:31:43 -0400 Subject: OSP13 - Queens - Inconsistent data between in DB between ipamallocation and ipallocation Message-ID: Hey everyone, We just troubleshooted (and fixed) a weird issue with ipamallocation. It looked like ipallocations we're not properly removed from the ipamallocation table. This caused port creations to fail with an error about the IP being already allocated (when there wasn't an actual port using the IP). We found this old issue https://bugs.launchpad.net/neutron/+bug/1884532 but not a whole lot of details in there. Has anyone seen this before? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From igene at igene.tw Mon Mar 15 13:56:33 2021 From: igene at igene.tw (Gene Kuo) Date: Mon, 15 Mar 2021 13:56:33 +0000 Subject: [largescale-sig] Next meeting: March 10, 15utc In-Reply-To: References: Message-ID: Hi, I confirmed that I'm able to give a short talk to start off the discussion for the next video meeting. The topic will be "RabbitMQ Clusters at Large Scale OpenStack Infrastructure". Regards, Gene Kuo ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ 在 2021年3月11日星期四 01:06,Belmiro Moreira 寫道: > Hi, > we had the Large Scale SIG meeting today. > > Meeting logs are available at: > http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-03-10-15.00.log.html > > We discussed topics for a new video meeting in 2 weeks. > Details will be sent later. > > regards, > Belmiro > > On Mon, Mar 8, 2021 at 12:43 PM Thierry Carrez wrote: > >> Hi everyone, >> >> Our next Large Scale SIG meeting will be this Wednesday in >> #openstack-meeting-3 on IRC, at 15UTC. You can doublecheck how it >> translates locally at: >> >> https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210310T15 >> >> Belmiro Moreira will chair this meeting. A number of topics have already >> been added to the agenda, including discussing CentOS Stream, reflecting >> on last video meeting and pick a topic for the next one. >> >> Feel free to add other topics to our agenda at: >> https://etherpad.openstack.org/p/large-scale-sig-meeting >> >> Regards, >> >> -- >> Thierry Carrez -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mthode at mthode.org Mon Mar 15 14:15:10 2021 From: mthode at mthode.org (Matthew Thode) Date: Mon, 15 Mar 2021 09:15:10 -0500 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> Message-ID: <20210315141510.fgcbpdd3z2xmncqg@mthode.org> On 21-03-15 12:37:16, Slawek Kaplonski wrote: > Hi, > > I'm raising this RFE to ask if we can release, and include new version of the > neutron-lib in the Wallaby release. > Neutron already migrated all our policy rules to use new personas like > system-reader or system-admin. Recently Lance found bug [1] with those new > personas. > Fix for that is now merged in neutron-lib [2]. > This fix required bump the oslo_context dependency in lower-constraints and > that lead us to bump many other dependencies versions. But in fact all those > new versions are now aligned with what we already have in the Neutron's > requirements so in fact neutron-lib was tested with those new versions of the > packages every time it was run with Neutron master branch. > > I already proposed release patch for neutron-lib [3]. > Please let me know if I You need anything else regrarding this RFE. > > [1] https://launchpad.net/bugs/1918506 > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat It looks like there are a lot of places neutron-lib is still used. My question here is if those projects need to update their requirements and re-release? -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From hberaud at redhat.com Mon Mar 15 15:14:53 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 15 Mar 2021 16:14:53 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20210315141510.fgcbpdd3z2xmncqg@mthode.org> References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> <20210315141510.fgcbpdd3z2xmncqg@mthode.org> Message-ID: If projects that use neutron-lib want this specific version of neutron-lib then I guess that yes they need to update their requirements and then re-release... We already freezed releases bor lib and client-lib few days ago if the update is outside this scope (services or UI) then I think we could accept it, else it will require lot of re-release and I don't think we want to do that, especially now that branching of those is on the rails. Le lun. 15 mars 2021 à 15:22, Matthew Thode a écrit : > On 21-03-15 12:37:16, Slawek Kaplonski wrote: > > Hi, > > > > I'm raising this RFE to ask if we can release, and include new version > of the > > neutron-lib in the Wallaby release. > > Neutron already migrated all our policy rules to use new personas like > > system-reader or system-admin. Recently Lance found bug [1] with those > new > > personas. > > Fix for that is now merged in neutron-lib [2]. > > This fix required bump the oslo_context dependency in lower-constraints > and > > that lead us to bump many other dependencies versions. But in fact all > those > > new versions are now aligned with what we already have in the Neutron's > > requirements so in fact neutron-lib was tested with those new versions > of the > > packages every time it was run with Neutron master branch. 
> > > > I already proposed release patch for neutron-lib [3]. > > Please let me know if I You need anything else regrarding this RFE. > > > > [1] https://launchpad.net/bugs/1918506 > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > It looks like there are a lot of places neutron-lib is still used. My > question here is if those projects need to update their requirements and > re-release? > > -- > Matthew Thode > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From munnaeebd at gmail.com Mon Mar 15 15:33:18 2021 From: munnaeebd at gmail.com (Md. Hejbul Tawhid MUNNA) Date: Mon, 15 Mar 2021 21:33:18 +0600 Subject: How to detach cinder volume Message-ID: Hi, We are using openstack rocky. One of our openstack vm was running with 3 volume , 1 is bootable and another 2 is normal volume . We have deleted 1 normal volume without properly detaching. So volume is deleted but instance is still showing 3 volume is attached. Now we can't snapshot the instance and facing some others issue. Please advise how we can detach the volume(deleted) from the instance Note: We have reset state volume attached status to detached and delete the volume. Regards, Munna -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Mar 15 15:44:41 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 15 Mar 2021 16:44:41 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> <20210315141510.fgcbpdd3z2xmncqg@mthode.org> Message-ID: <20210315154441.swg3ak5h5xuv6hzl@p1.localdomain> Hi, On Mon, Mar 15, 2021 at 04:14:53PM +0100, Herve Beraud wrote: > If projects that use neutron-lib want this specific version of neutron-lib > then I guess that yes they need to update their requirements and then > re-release... We already freezed releases bor lib and client-lib few days > ago if the update is outside this scope (services or UI) then I think we > could accept it, else it will require lot of re-release and I don't think > we want to do that, especially now that branching of those is on the rails. > > Le lun. 15 mars 2021 à 15:22, Matthew Thode a écrit : > > > On 21-03-15 12:37:16, Slawek Kaplonski wrote: > > > Hi, > > > > > > I'm raising this RFE to ask if we can release, and include new version > > of the > > > neutron-lib in the Wallaby release. 
> > > Neutron already migrated all our policy rules to use new personas like > > > system-reader or system-admin. Recently Lance found bug [1] with those > > new > > > personas. > > > Fix for that is now merged in neutron-lib [2]. > > > This fix required bump the oslo_context dependency in lower-constraints > > and > > > that lead us to bump many other dependencies versions. But in fact all > > those > > > new versions are now aligned with what we already have in the Neutron's > > > requirements so in fact neutron-lib was tested with those new versions > > of the > > > packages every time it was run with Neutron master branch. > > > > > > I already proposed release patch for neutron-lib [3]. > > > Please let me know if I You need anything else regrarding this RFE. > > > > > > [1] https://launchpad.net/bugs/1918506 > > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > > > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > It looks like there are a lot of places neutron-lib is still used. My > > question here is if those projects need to update their requirements and > > re-release? Fix included in that new version is relatively small. It is in the part of code which is used already by stadium projects which are using neutron_lib. But all of them also depends on Neutron so if neutron will bump minimum version of the neutron-lib, it will be "automatically" used by those stadium projects as well. And regarding all other lower-constaints changes in that new neutron-lib - as I said in my previous email, all those versions are already set as minimum in neutron so all of that was in fact effectively used, and is tested. If we will not have this fix in neutron-lib in Wallaby, it basically means that we will have broken support for those new personas like "system reader/admin". If that would be problem to make new neutron-lib release now, would it be possible to cut stable/wallaby branch in neutron-lib, backport that fix there and release new bugfix version of neutron-lib just after Wallaby will be released? Will it be possible to bump neutron's required neutron-lib version then? > > > > -- > > Matthew Thode > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mthode at mthode.org Mon Mar 15 15:48:04 2021 From: mthode at mthode.org (Matthew Thode) Date: Mon, 15 Mar 2021 10:48:04 -0500 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20210315154441.swg3ak5h5xuv6hzl@p1.localdomain> References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> <20210315141510.fgcbpdd3z2xmncqg@mthode.org> <20210315154441.swg3ak5h5xuv6hzl@p1.localdomain> Message-ID: <20210315154804.5fsfy6hm47xpielg@mthode.org> On 21-03-15 16:44:41, Slawek Kaplonski wrote: > Hi, > > On Mon, Mar 15, 2021 at 04:14:53PM +0100, Herve Beraud wrote: > > If projects that use neutron-lib want this specific version of neutron-lib > > then I guess that yes they need to update their requirements and then > > re-release... We already freezed releases bor lib and client-lib few days > > ago if the update is outside this scope (services or UI) then I think we > > could accept it, else it will require lot of re-release and I don't think > > we want to do that, especially now that branching of those is on the rails. > > > > Le lun. 15 mars 2021 à 15:22, Matthew Thode a écrit : > > > > > On 21-03-15 12:37:16, Slawek Kaplonski wrote: > > > > Hi, > > > > > > > > I'm raising this RFE to ask if we can release, and include new version > > > of the > > > > neutron-lib in the Wallaby release. > > > > Neutron already migrated all our policy rules to use new personas like > > > > system-reader or system-admin. Recently Lance found bug [1] with those > > > new > > > > personas. > > > > Fix for that is now merged in neutron-lib [2]. > > > > This fix required bump the oslo_context dependency in lower-constraints > > > and > > > > that lead us to bump many other dependencies versions. But in fact all > > > those > > > > new versions are now aligned with what we already have in the Neutron's > > > > requirements so in fact neutron-lib was tested with those new versions > > > of the > > > > packages every time it was run with Neutron master branch. > > > > > > > > I already proposed release patch for neutron-lib [3]. > > > > Please let me know if I You need anything else regrarding this RFE. > > > > > > > > [1] https://launchpad.net/bugs/1918506 > > > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > > > > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > It looks like there are a lot of places neutron-lib is still used. My > > > question here is if those projects need to update their requirements and > > > re-release? > > Fix included in that new version is relatively small. It is in the part of code > which is used already by stadium projects which are using neutron_lib. But all > of them also depends on Neutron so if neutron will bump minimum version of the > neutron-lib, it will be "automatically" used by those stadium projects as well. > And regarding all other lower-constaints changes in that new neutron-lib - as I > said in my previous email, all those versions are already set as minimum in > neutron so all of that was in fact effectively used, and is tested. > > If we will not have this fix in neutron-lib in Wallaby, it basically means that > we will have broken support for those new personas like "system reader/admin". 
> > If that would be problem to make new neutron-lib release now, would it be > possible to cut stable/wallaby branch in neutron-lib, backport that fix there > and release new bugfix version of neutron-lib just after Wallaby will be > released? Will it be possible to bump neutron's required neutron-lib version > then? > > > > > > > -- > > > Matthew Thode > > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat It sounds like other projects will NOT need a release after this is bumped. If that is the case the release has my signoff as requirements PTL. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From hberaud at redhat.com Mon Mar 15 15:57:21 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 15 Mar 2021 16:57:21 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20210315154804.5fsfy6hm47xpielg@mthode.org> References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> <20210315141510.fgcbpdd3z2xmncqg@mthode.org> <20210315154441.swg3ak5h5xuv6hzl@p1.localdomain> <20210315154804.5fsfy6hm47xpielg@mthode.org> Message-ID: Ok thanks. Then I think we can continue Le lun. 15 mars 2021 à 16:50, Matthew Thode a écrit : > On 21-03-15 16:44:41, Slawek Kaplonski wrote: > > Hi, > > > > On Mon, Mar 15, 2021 at 04:14:53PM +0100, Herve Beraud wrote: > > > If projects that use neutron-lib want this specific version of > neutron-lib > > > then I guess that yes they need to update their requirements and then > > > re-release... We already freezed releases bor lib and client-lib few > days > > > ago if the update is outside this scope (services or UI) then I think > we > > > could accept it, else it will require lot of re-release and I don't > think > > > we want to do that, especially now that branching of those is on the > rails. > > > > > > Le lun. 15 mars 2021 à 15:22, Matthew Thode a > écrit : > > > > > > > On 21-03-15 12:37:16, Slawek Kaplonski wrote: > > > > > Hi, > > > > > > > > > > I'm raising this RFE to ask if we can release, and include new > version > > > > of the > > > > > neutron-lib in the Wallaby release. > > > > > Neutron already migrated all our policy rules to use new personas > like > > > > > system-reader or system-admin. Recently Lance found bug [1] with > those > > > > new > > > > > personas. > > > > > Fix for that is now merged in neutron-lib [2]. 
> > > > > This fix required bump the oslo_context dependency in > lower-constraints > > > > and > > > > > that lead us to bump many other dependencies versions. But in fact > all > > > > those > > > > > new versions are now aligned with what we already have in the > Neutron's > > > > > requirements so in fact neutron-lib was tested with those new > versions > > > > of the > > > > > packages every time it was run with Neutron master branch. > > > > > > > > > > I already proposed release patch for neutron-lib [3]. > > > > > Please let me know if I You need anything else regrarding this RFE. > > > > > > > > > > [1] https://launchpad.net/bugs/1918506 > > > > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > > > > > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > > > > > > > > > -- > > > > > Slawek Kaplonski > > > > > Principal Software Engineer > > > > > Red Hat > > > > > > > > It looks like there are a lot of places neutron-lib is still used. > My > > > > question here is if those projects need to update their requirements > and > > > > re-release? > > > > Fix included in that new version is relatively small. It is in the part > of code > > which is used already by stadium projects which are using neutron_lib. > But all > > of them also depends on Neutron so if neutron will bump minimum version > of the > > neutron-lib, it will be "automatically" used by those stadium projects > as well. > > And regarding all other lower-constaints changes in that new neutron-lib > - as I > > said in my previous email, all those versions are already set as minimum > in > > neutron so all of that was in fact effectively used, and is tested. > > > > If we will not have this fix in neutron-lib in Wallaby, it basically > means that > > we will have broken support for those new personas like "system > reader/admin". > > > > If that would be problem to make new neutron-lib release now, would it be > > possible to cut stable/wallaby branch in neutron-lib, backport that fix > there > > and release new bugfix version of neutron-lib just after Wallaby will be > > released? Will it be possible to bump neutron's required neutron-lib > version > > then? > > > > > > > > > > -- > > > > Matthew Thode > > > > > > > > > > > > > -- > > > Hervé Beraud > > > Senior Software Engineer at Red Hat > > > irc: hberaud > > > https://github.com/4383/ > > > https://twitter.com/4383hberaud > > > -----BEGIN PGP SIGNATURE----- > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > v6rDpkeNksZ9fFSyoY2o > > > =ECSj > > > -----END PGP SIGNATURE----- > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > It sounds like other projects will NOT need a release after this is > bumped. If that is the case the release has my signoff as requirements > PTL. 
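(Concretely, "bumping neutron's required neutron-lib version" amounts to a one-line change in each of neutron's requirement files, roughly as sketched below; the version numbers are illustrative only, the real minimum being whichever neutron-lib release carries the fix.)

# openstack/neutron requirements.txt (illustrative version number)
neutron-lib>=2.10.1

# openstack/neutron lower-constraints.txt (illustrative version number)
neutron-lib==2.10.1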
> > -- > Matthew Thode > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Mar 15 16:11:18 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 15 Mar 2021 12:11:18 -0400 Subject: [devstack][cinder] ceph iscsi driver support Message-ID: Hello devstack cores, Cinder added a ceph iscsi backend driver at Wallaby milestone 2, and CI on the original patch [0] was done via a dependency chain. Ongoing CI for this driver depends on this devstack patch: Add support for ceph_iscsi cinder driver https://review.opendev.org/c/openstack/devstack/+/668668 It's got one +2. I just want to raise awareness so it can get another review so we can get the CI running before any changes are proposed for the driver. (All dependencies for the devstack patch have merged.) thanks! brian [0] https://review.opendev.org/c/openstack/cinder/+/662829 From tcr1br24 at gmail.com Mon Mar 15 07:58:53 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Mon, 15 Mar 2021 15:58:53 +0800 Subject: [ISSUE]instances cannot ping on different compute nodes Message-ID: Dear Sir, We are testing in OpenStack (Stein) + OpenDaylight (Neon) + Open vSwitch (v2.11.1). It's an all-in-one deployment (OPNFV-9.0.0). The instances can ping each other on the same compute node, but cannot ping each other if instances are on different compute nodes. 
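(For reference, the checks that are usually suggested first for this symptom -- same-node traffic works, cross-node traffic does not -- look roughly like the sketch below; interface names and addresses are placeholders, not taken from this deployment. The ovs-vsctl show output for both compute nodes follows.)

# Confirm the VXLAN tunnel endpoints can reach each other and that the underlay
# MTU leaves room for the VXLAN header (a 1472-byte payload assumes a 1500-byte
# underlay MTU; adjust if jumbo frames are in use).
ping -M do -s 1472 <remote_tunnel_endpoint_ip>

# Check whether the tunnel ports on br-int are actually carrying traffic.
sudo ovs-ofctl -O OpenFlow13 dump-ports br-int

# Watch for encapsulated traffic on the underlay NIC while pinging between the
# two instances (4789 is the default VXLAN UDP port; 4790 is commonly used for
# VXLAN-GPE).
sudo tcpdump -ni <underlay_nic> 'udp port 4789 or udp port 4790'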
*Cmp001: * ============================================= Manager "tcp:172.16.10.40:6640" is_connected: true Bridge br-floating Port "ens6" Interface "ens6" Port br-floating Interface br-floating type: internal Port br-floating-int-patch Interface br-floating-int-patch type: patch options: {peer=br-floating-pa} Bridge br-int Controller "tcp:172.16.10.40:6653" is_connected: true fail_mode: secure Port "tun9a4f2733bb3" Interface "tun9a4f2733bb3" type: vxlan options: {exts=gpe, key=flow, local_ip="10.1.0.5", remote_ip=flow} Port "tap59535eb9-ef" Interface "tap59535eb9-ef" Port br-floating-pa Interface br-floating-pa type: patch options: {peer=br-floating-int-patch} Port "tapc58e6fd1-cf" Interface "tapc58e6fd1-cf" Port "tap8bb9f6de-a9" Interface "tap8bb9f6de-a9" Port "tap5f490693-e5" Interface "tap5f490693-e5" Port "tund818d69b326" Interface "tund818d69b326" type: vxlan options: {exts=gpe, key=flow, local_ip="10.1.0.5", remote_ip="10.1.0.2"} Port br-int Interface br-int type: internal Port "tapef79fa77-ab" Interface "tapef79fa77-ab" Port "tapa9e63637-8a" Interface "tapa9e63637-8a" ovs_version: "2.11.1" ============================================= *Cmp002:* ============================================= Manager "tcp:172.16.10.40:6640" is_connected: true Manager "ptcp:6640:127.0.0.1" Bridge br-floating Port br-floating-int-patch Interface br-floating-int-patch type: patch options: {peer=br-floating-pa} Port br-floating Interface br-floating type: internal Port "ens6" Interface "ens6" Bridge br-int Controller "tcp:172.16.10.40:6653" is_connected: true fail_mode: secure Port br-floating-pa Interface br-floating-pa type: patch options: {peer=br-floating-int-patch} Port "tap50868440-95" Interface "tap50868440-95" Port "tapa496a81b-0c" Interface "tapa496a81b-0c" Port "tun8d223879efb" Interface "tun8d223879efb" type: vxlan options: {exts=gpe, key=flow, local_ip="10.1.0.6", remote_ip=flow} Port br-int Interface br-int type: internal Port "tun941edf0fa60" Interface "tun941edf0fa60" type: vxlan options: {exts=gpe, key=flow, local_ip="10.1.0.6", remote_ip="10.1.0.2"} ovs_version: "2.11.1" ====================================================== Could you give us some advice? Thanks. Regards, Zack [image: Mailtrack] Sender notified by Mailtrack 03/15/21, 03:56:22 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshewale at redhat.com Mon Mar 15 14:48:04 2021 From: bshewale at redhat.com (Bhagyashri Shewale) Date: Mon, 15 Mar 2021 20:18:04 +0530 Subject: [tripleo] TripleO CI Summary: Unified Sprint 40 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 40** (Feb 04 thru Feb 24 2021). The following is a summary of completed work during this sprint cycle: - Deployed the new (next gen) promoter code - Have a successful promotion of master, victoria, ussuri and train c8 on newly deployed promoter server. - Completated dependency pipeline work as below: - RHEL and CentOS OpenVswitch pipeline - Upstream and downstream container-tools pipeline - Downstream RHELU.next - Add centos8 stream jobs and pipeline - https://hackmd.io/Y6IGnQGXTqCl8TAX0YAeoA?view - monitoring dependency pipeline under cockpit - Created the design document for tripleo-repos - https://hackmd.io/v2jCX9RwSeuP8EEFDHRa8g?view - https://review.opendev.org/c/openstack/tripleo-specs/+/772442 - [tempest skip list] Add addtest command to tempest-skiplist - Tempest scenario manager Many of the scenario manager methods used to be private apis. 
Since these methods Are supposed to be used in tempest plugins they aren’t expected to be private. Following commits are with respect to this idea except - https://review.opendev.org/c/openstack/tempest/+/774085 - https://review.opendev.org/c/openstack/tempest/+/766472 - https://review.opendev.org/c/openstack/tempest/+/773783 - https://review.opendev.org/c/openstack/tempest/+/773019 Implementation of create_subnet() api varies with manila-tempest plugin For the stable implementation of create_subnet() following parameters have been added: 1. Condition to check empty str_cidr 2. More attributes in case of ipv6 3. Usage of default_subnet_pool - https://review.opendev.org/c/openstack/tempest/+/766472 - Ruck/Rover recorded notes [1]. The planned work for the next sprint and leftover previous sprint work as following: - Migrate upstream master from CentOS-8 to CentOS-8-Stream - improving resource usage/ reduce upstream infra footprint, context - Deploy the promoter server for downstream promotions - elastic-recheck containerization - https://hackmd.io/dmxF-brbS-yg7tkFB_kxXQ - Openstack health for tripleo - https://hackmd.io/HQ5hyGAOSuG44Le2x6YzUw - Tripleo-repos spec and implementation - https://hackmd.io/v2jCX9RwSeuP8EEFDHRa8g?view - https://review.opendev.org/c/openstack/tripleo-specs/+/772442 - Leftover dependency pipeline work: - ansible 2.10/11 upstream pipeline - https://hackmd.io/riLaLFcTTpybbyY51xcm9A?both - Tempest skiplist: - https://review.opendev.org/c/openstack/tripleo-quickstart/+/771593/ - https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/778157 - https://review.opendev.org/c/openstack/tripleo-ci/+/778155 - https://review.opendev.org/c/osf/python-tempestconf/+/775195 The Ruck and Rover for this sprint are Soniya Vyas (soniya29) and Bhagyashri Shewale (bhagyashri). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in hackmd [2]. Thanks, Bhagyashri Shewale [1] https://hackmd.io/Ta6fdwi2Sme4map8WEiaQA [2] https://hackmd.io/ii6M2T4RTUSFeTZqkw3uRQ -------------- next part -------------- An HTML attachment was scrubbed... URL: From foundjem at ieee.org Mon Mar 15 15:36:27 2021 From: foundjem at ieee.org (Armstrong Foundjem) Date: Mon, 15 Mar 2021 11:36:27 -0400 Subject: [barbican][ironic][neutron] [QA][OpenstackSDK][swift] References: <2B2B1E99-1139-4A1F-B3F0-229E240A8752.ref@ieee.org> Message-ID: <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> Hello! Quick reminder that for deliverables following the cycle-with-intermediary model, the release team will use the latest Wallaby release available on release week. The following deliverables have done a Wallaby release, but it was not refreshed in the last two months: ansible-role-lunasa-hsm ironic-inspector ironic-prometheus-exporter ironic-ui ovn-octavia-provider patrole python-openstackclient swift You should consider making a new one very soon, so that we don't use an outdated version for the final release. - Armstrong Foundjem (armstrong) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iurygregory at gmail.com Mon Mar 15 16:31:40 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 15 Mar 2021 17:31:40 +0100 Subject: [barbican][ironic][neutron] [QA][OpenstackSDK][swift] In-Reply-To: <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> References: <2B2B1E99-1139-4A1F-B3F0-229E240A8752.ref@ieee.org> <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> Message-ID: Hi Armstrong, We will push a release for ironic-prometheus-exporter this week. Thanks! Em seg., 15 de mar. de 2021 às 17:28, Armstrong Foundjem escreveu: > Hello! > > *Quick reminder that for deliverables following the > cycle-with-intermediary* > *model, the release team will use the latest Wallaby release available on* > *release week.* > > *The following deliverables have done a Wallaby release, but it was not* > *refreshed in the last two months:* > > ansible-role-lunasa-hsm > ironic-inspector > ironic-prometheus-exporter > ironic-ui > ovn-octavia-provider > patrole > python-openstackclient > swift > > > *You should consider making a new one very soon, so that we don't use an* > *outdated version for the final release.* > > - Armstrong Foundjem (armstrong) > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Mar 15 16:36:10 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 15 Mar 2021 17:36:10 +0100 Subject: [queens][nova] live migration error Message-ID: Hello all, we are facing a problem while migrating some virtual machines on centos 7 queens. If we create a new virtual machine it migrates. Old virtual machines got the following: 37eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Increasing downtime to 50 ms after 0 sec elapsed time 2021-03-15 17:26:44.428 82382 INFO nova.virt.libvirt.driver [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Migration running for 0 secs, memory 100% remaining; (bytes processed=0, remaining=0, total=0) 2021-03-15 17:26:57.922 82382 INFO nova.compute.manager [req-83fd3d62-292e-4990-8f7a-47f404b557cc - - - - -] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] VM Paused (Lifecycle Event) 2021-03-15 17:26:58.617 82382 INFO nova.virt.libvirt.driver [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Migration operation has completed 2021-03-15 17:26:58.618 82382 INFO nova.compute.manager [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] _post_live_migration() is started.. 
2021-03-15 17:26:58.658 82382 ERROR nova.virt.libvirt.driver [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Live Migration failure: operation failed: domain is not running: libvirtError: operation failed: domain is not running 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Post live migration at destination podto2-kvmae failed: InstanceNotFound_Remote: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. InstanceNotFound: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Traceback (most recent call last): 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6505, in _post_live_migration 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] instance, block_migration, dest) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 783, in post_live_migration_at_destination 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] instance=instance, block_migration=block_migration) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, in call 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] retry=self.retry) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, in _send 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] timeout=timeout, retry=retry) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] retry=retry) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 550, in _send 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] raise result 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] InstanceNotFound_Remote: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 
2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Traceback (most recent call last): 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, in _process_incoming 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] res = self.dispatcher.dispatch(message) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return self._do_dispatch(endpoint, method, ctxt, args) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] result = func(ctxt, **new_args) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in wrapped 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] function_name, call_dict, binary) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] self.force_reraise() 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] six.reraise(self.type_, self.value, self.tb) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in wrapped 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return f(self, context, *args, **kw) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/utils.py", 
line 1000, in decorated_function 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return function(self, context, *args, **kwargs) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 203, in decorated_function 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return function(self, context, *args, **kwargs) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6658, in post_live_migration_at_destination 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 'destination host.', instance=instance) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] self.force_reraise() 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] six.reraise(self.type_, self.value, self.tb) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6653, in post_live_migration_at_destination 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] block_device_info) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7844, in post_live_migration_at_destination 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] self._host.get_guest(instance) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 526, in get_guest 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return libvirt_guest.Guest(self._get_domain(instance)) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager 
[instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 546, in _get_domain 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] raise exception.InstanceNotFound(instance_id=instance.uuid) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] InstanceNotFound: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:05.179 82382 WARNING nova.compute.resource_tracker [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Instance not resizing, skipping migration.: InstanceNotFound_Remote: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 2021-03-15 17:27:13.621 82382 INFO nova.compute.manager [-] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] VM Stopped (Lifecycle Event) 2021-03-15 17:27:13.711 82382 INFO nova.compute.manager [req-0dcd030c-d520-4223-91a6-f2fbc9f2444a - - - - -] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] During the sync_power process the instance has moved from host podto2-kvmae to host podto2-kvm01 The instance was on podto1-kvm01 and we tried to migrate it to podto2-kvmae. It stops migrating, goes into an error state, and stops responding to ping requests. From the logs it seems it returned to podto2-kvm01, but it is in an error state on podto2-kvmae. After a hard reboot it starts on podto2-kvmae, and then we can migrate it wherever we want without errors. Please, any help? Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Mar 15 16:44:44 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 15 Mar 2021 11:44:44 -0500 Subject: [barbican][ironic][neutron] [QA][OpenstackSDK][swift] In-Reply-To: <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> References: <2B2B1E99-1139-4A1F-B3F0-229E240A8752.ref@ieee.org> <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> Message-ID: <17836c68af8.101fedd35538416.121854804594138725@ghanshyammann.com> ---- On Mon, 15 Mar 2021 10:36:27 -0500 Armstrong Foundjem wrote ---- > Hello! > Quick reminder that for deliverables following the cycle-with-intermediary model, the release team will use the latest Wallaby release available on release week. > The following deliverables have done a Wallaby release, but it was not refreshed in the last two months: > ansible-role-lunasa-hsm ironic-inspector ironic-prometheus-exporter ironic-ui ovn-octavia-provider patrole python-openstackclient swift Thanks Armstrong for the reminder, the QA team will do a "patrole" release in the next week or so. I am working on fixing the current gate failure. -gmann > > You should consider making a new one very soon, so that we don't use an outdated version for the final release. > - Armstrong Foundjem (armstrong) From kennelson11 at gmail.com Mon Mar 15 16:54:43 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 15 Mar 2021 09:54:43 -0700 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Yes!
We should merge them even with the deadline past so they will be on the site to look back at. I'll review them later today. -Kendall (diablo_rojo) On Mon, Mar 15, 2021, 2:10 AM Herve Beraud wrote: > Hello Kendall, > > I assigned the topic `wallaby-cycle-highlights` to all highlights not yet > merged. Let me know if it's not too late for those and if you want to > continue with them. > > > https://review.opendev.org/q/topic:%22wallaby-cycle-highlight%22+(status:open%20OR%20status:merged) > > Thanks > > Le ven. 12 mars 2021 à 20:08, Kendall Nelson a > écrit : > >> Hello All! >> >> I know we are past the deadline, but I wanted to do one final call. If >> you have highlights you want included in the release marketing for Wallaby, >> you must have patches pushed to the releases repo by Sunday March 14th at >> 6:00 UTC. >> >> If you can't have it pushed by then but want to be included, please >> contact me directly. >> >> Thanks! >> >> -Kendall (diablo_rojo) >> >> On Thu, Feb 25, 2021 at 2:59 PM Kendall Nelson >> wrote: >> >>> Hello Everyone! >>> >>> It's time to start thinking about calling out 'cycle-highlights' in >>> your deliverables! I have no idea how we are here AGAIN ALREADY, alas, here >>> we be. >>> >>> As PTLs, you probably get many pings towards the end of every release >>> cycle by various parties (marketing, management, journalists, etc) asking >>> for highlights of what is new and what significant changes are coming in >>> the new release. By putting them all in the same place it makes them easy >>> to reference because they get compiled into a pretty website like this from >>> the last few releases: Stein[1], Train[2]. >>> >>> We don't need a fully fledged marketing message, just a few highlights >>> (3-4 ideally), from each project team. Looking through your release >>> notes might be a good place to start. >>> >>> *The deadline for cycle highlights is the end of the R-5 week [3] (next >>> week) on March 12th.* >>> >>> How To Reminder: >>> ------------------------- >>> >>> Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the >>> openstack/releases repo like this: >>> >>> cycle-highlights: >>> - Introduced new service to use unused host to mine bitcoin. >>> >>> The formatting options for this tag are the same as what you are >>> probably used to with Reno release notes. >>> >>> Also, you can check on the formatting of the output by either running >>> locally: >>> >>> tox -e docs >>> >>> And then checking the resulting doc/build/html/$RELEASE/highlights.html >>> file or the output of the build-openstack-sphinx-docs job under >>> html/$RELEASE/highlights.html. >>> >>> Feel free to add me as a reviewer on your patches. >>> >>> Can't wait to see you all have accomplished this release! 
>>> >>> Thanks :) >>> >>> -Kendall Nelson (diablo_rojo) >>> >>> [1] https://releases.openstack.org/stein/highlights.html >>> [2] https://releases.openstack.org/train/highlights.html >>> [3] htt >>> >>> https://releases.openstack.org/wallaby/schedule.html >>> >>> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Mar 15 17:59:07 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 15 Mar 2021 18:59:07 +0100 Subject: [queens][nova] live migration error In-Reply-To: References: Message-ID: Hello, looking at destination kvm host I got the following in instance log under /var/log/libvirt/qemu: 2021-03-15 11:48:31.996+0000: starting up libvirt version: 4.5.0, package: 36.el7_9.3 (CentOS BuildSystem , 2020-11-16-16:25:20, x86-01.bsys.centos.org), qemu version: 2.12.0qemu-kvm-ev-2.12.0-44.1.el7_8.1, kernel: 3.10.0-1160.15.2.el7.x86_64, hostname: podto2-kvmae LC_ALL=C \ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \ QEMU_AUDIO_DRV=none \ /usr/libexec/qemu-kvm \ -name guest=instance-00002a52,debug-threads=on \ -S \ -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-73-instance-00002a52/master-key.aes \ -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off \ -cpu Broadwell-IBRS,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on \ -m 4096 \ -realtime mlock=off \ -smp 2,sockets=2,cores=1,threads=1 \ -uuid c6ea7ed2-e7ce-4df6-a767-6bb95ae8fdc6 \ -smbios 'type=1,manufacturer=RDO,product=OpenStack Compute,version=17.0.11-1.el7,serial=3dec30fe-a31f-4ea6-971f-6f993589ef04,uuid=c6ea7ed2-e7ce-4df6-a767-6bb95ae8fdc6,family=Virtual Machine' \ -no-user-config \ -nodefaults \ -chardev socket,id=charmonitor,fd=139,server,nowait \ -mon chardev=charmonitor,id=monitor,mode=control \ -rtc base=utc,driftfix=slew \ -global kvm-pit.lost_tick_policy=delay \ -no-hpet \ -no-shutdown \ -boot strict=on \ -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \ -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \ -drive file=/var/lib/nova/instances/c6ea7ed2-e7ce-4df6-a767-6bb95ae8fdc6/disk.config,format=raw,if=none,id=drive-ide0-0-0,readonly=on,cache=none \ -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,write-cache=on \ -drive file=/var/lib/nova/mnt/7eb4b0178ee3ec9ad7cbbc20c62b1912/volume-d5c812c5-2c27-4e82-a38d-83fc79ab848e,format=raw,if=none,id=drive-virtio-disk0,serial=d5c812c5-2c27-4e82-a38d-83fc79ab848e,cache=none,aio=native \ -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on \ -netdev tap,fd=141,id=hostnet0,vhost=on,vhostfd=142 \ -device virtio-net-pci,host_mtu=9000,netdev=hostnet0,id=net0,mac=fa:16:3e:3f:e5:32,bus=pci.0,addr=0x3 \ -add-fd set=3,fd=144 \ -chardev pty,id=charserial0,logfile=/dev/fdset/3,logappend=on \ -device isa-serial,chardev=charserial0,id=serial0 \ -chardev socket,id=charchannel0,fd=143,server,nowait \ -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \ -device usb-tablet,id=input0,bus=usb.0,port=1 \ -vnc 0.0.0.0:55 \ -k en-us \ -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \ -incoming defer \ -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \ -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ -msg timestamp=on 2021-03-15 11:48:31.996+0000: Domain id=73 is tainted: high-privileges 2021-03-15T11:48:32.163025Z qemu-kvm: -chardev pty,id=charserial0,logfile=/dev/fdset/3,logappend=on: char device redirected to /dev/pts/57 (label charserial0) 2021-03-15T11:48:32.167206Z qemu-kvm: -drive file=/var/lib/nova/mnt/7eb4b0178ee3ec9ad7cbbc20c62b1912/volume-d5c812c5-2c27-4e82-a38d-83fc79ab848e,format=raw,if=none,id=drive-virtio-disk0,serial=d5c812c5-2c27-4e82-a38d-83fc79ab848e,cache=none,aio=native: 'serial' is deprecated, please use the corresponding option of '-device' instead 2021-03-15T11:48:37.779611Z qemu-kvm: Failed to load virtio_pci/modern_queue_state:desc 2021-03-15T11:48:37.780020Z qemu-kvm: Failed to load virtio_pci/modern_state:vqs 2021-03-15T11:48:37.780042Z qemu-kvm: Failed to load virtio/extra_state:extra_state 2021-03-15T11:48:37.780062Z qemu-kvm: Failed to load virtio-balloon:virtio 2021-03-15T11:48:37.780082Z qemu-kvm: error while loading state for instance 0x0 of device '0000:00:06.0/virtio-balloon' 2021-03-15T11:48:37.781465Z qemu-kvm: load of migration failed: Input/output error 2021-03-15 11:48:38.231+0000: shutting down, reason=crashed "instance-00002a52.log" 102L, 7122C But hard resetting the vm it starts Ignazio Il giorno lun 15 mar 2021 alle ore 17:36 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello all, > we are facing a problem while migrating some virtual machines on centos 7 > queens. > If we create a new virtual machine it migrates. 
> Old virtual machines got the following: > 37eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Increasing downtime to 50 ms after 0 > sec elapsed time > 2021-03-15 17:26:44.428 82382 INFO nova.virt.libvirt.driver > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Migration running for 0 secs, memory > 100% remaining; (bytes processed=0, remaining=0, total=0) > 2021-03-15 17:26:57.922 82382 INFO nova.compute.manager > [req-83fd3d62-292e-4990-8f7a-47f404b557cc - - - - -] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] VM Paused (Lifecycle Event) > 2021-03-15 17:26:58.617 82382 INFO nova.virt.libvirt.driver > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Migration operation has completed > 2021-03-15 17:26:58.618 82382 INFO nova.compute.manager > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] _post_live_migration() is started.. > 2021-03-15 17:26:58.658 82382 ERROR nova.virt.libvirt.driver > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Live Migration failure: operation > failed: domain is not running: libvirtError: operation failed: domain is > not running > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Post live migration at destination > podto2-kvmae failed: InstanceNotFound_Remote: Instance > 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. > InstanceNotFound: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not > be found. 
> 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Traceback (most recent call last): > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6505, in > _post_live_migration > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] instance, block_migration, dest) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 783, in > post_live_migration_at_destination > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] instance=instance, > block_migration=block_migration) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, > in call > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] retry=self.retry) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, > in _send > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] timeout=timeout, retry=retry) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 559, in send > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] retry=retry) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 550, in _send > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] raise result > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] InstanceNotFound_Remote: Instance > 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 
> 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Traceback (most recent call last): > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, > in _process_incoming > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] res = > self.dispatcher.dispatch(message) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line > 220, in dispatch > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] return > self._do_dispatch(endpoint, method, ctxt, args) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line > 190, in _do_dispatch > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] result = func(ctxt, **new_args) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in > wrapped > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] function_name, call_dict, binary) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] self.force_reraise() > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] six.reraise(self.type_, > self.value, self.tb) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in > wrapped > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] return f(self, context, *args, > **kw) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR 
nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 1000, in > decorated_function > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] return function(self, context, > *args, **kwargs) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 203, in > decorated_function > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] return function(self, context, > *args, **kwargs) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6658, in > post_live_migration_at_destination > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] 'destination host.', > instance=instance) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] self.force_reraise() > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] six.reraise(self.type_, > self.value, self.tb) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6653, in > post_live_migration_at_destination > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] block_device_info) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7844, > in post_live_migration_at_destination > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] self._host.get_guest(instance) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 526, in > get_guest > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 
25fbb55c-5991-49b2-885f-26adebaeb572] return > libvirt_guest.Guest(self._get_domain(instance)) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 546, in > _get_domain > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] raise > exception.InstanceNotFound(instance_id=instance.uuid) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] InstanceNotFound: Instance > 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:05.179 82382 WARNING nova.compute.resource_tracker > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Instance not resizing, skipping > migration.: InstanceNotFound_Remote: Instance > 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. > 2021-03-15 17:27:13.621 82382 INFO nova.compute.manager [-] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] VM Stopped (Lifecycle Event) > 2021-03-15 17:27:13.711 82382 INFO nova.compute.manager > [req-0dcd030c-d520-4223-91a6-f2fbc9f2444a - - - - -] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] During the sync_power process the > instance has moved from host podto2-kvmae to host podto2-kvm01 > > > The istances was on podto1-kvm01 and we try to migrate it on podto2-kvmae. > It stops to migrate and goes in error state and stop to responding to ping > requests. > From logs is seems it returned on podto2-kvm01 but it is in error state on > podto2-kvmae. > After hard rebooting it starts on podto2-kvmae and then we can migrate it > where we want without errors. > Please, any help ? > Thanks > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajitharobert01 at gmail.com Mon Mar 15 17:32:02 2021 From: ajitharobert01 at gmail.com (Ajitha Robert) Date: Mon, 15 Mar 2021 23:02:02 +0530 Subject: [cinder] wallaby feature freeze Message-ID: Hi team, This patch https://review.opendev.org/c/openstack/cinder/+/778886 requires a feature freeze exception. I am working on the the minor revisions and the CI will be running tomorrow. Thank you. -- *Regards,Ajitha R* -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajesh.r at zadarastorage.com Mon Mar 15 17:43:11 2021 From: rajesh.r at zadarastorage.com (Rajesh Ratnakaram) Date: Mon, 15 Mar 2021 23:13:11 +0530 Subject: [cinder] wallaby feature freeze Message-ID: <604f9caf.1c69fb81.760ef.2f02@mx.google.com> Hi, I have added missing features to Zadara cinder driver and posted: https://review.opendev.org/c/openstack/cinder/+/774463 The corresponding bp can be found at: https://blueprints.launchpad.net/cinder/+spec/zadara-wallaby-features I wanted to push the above changes to Wallaby release, hence requesting to add FFE for the above request. Thanks and Regards, Rajesh. 
Sent from Mail for Windows 10 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Mar 15 20:17:55 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 15 Mar 2021 21:17:55 +0100 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Thanks. Only one patch is still open but I saw that you commented on it. Le lun. 15 mars 2021 à 17:55, Kendall Nelson a écrit : > Yes! We should merge them even with the deadline past so they will be on > the site to look back at. > > I'll review them later today. > > -Kendall (diablo_rojo) > > On Mon, Mar 15, 2021, 2:10 AM Herve Beraud wrote: > >> Hello Kendall, >> >> I assigned the topic `wallaby-cycle-highlights` to all highlights not yet >> merged. Let me know if it's not too late for those and if you want to >> continue with them. >> >> >> https://review.opendev.org/q/topic:%22wallaby-cycle-highlight%22+(status:open%20OR%20status:merged) >> >> Thanks >> >> Le ven. 12 mars 2021 à 20:08, Kendall Nelson a >> écrit : >> >>> Hello All! >>> >>> I know we are past the deadline, but I wanted to do one final call. If >>> you have highlights you want included in the release marketing for Wallaby, >>> you must have patches pushed to the releases repo by Sunday March 14th at >>> 6:00 UTC. >>> >>> If you can't have it pushed by then but want to be included, please >>> contact me directly. >>> >>> Thanks! >>> >>> -Kendall (diablo_rojo) >>> >>> On Thu, Feb 25, 2021 at 2:59 PM Kendall Nelson >>> wrote: >>> >>>> Hello Everyone! >>>> >>>> It's time to start thinking about calling out 'cycle-highlights' in >>>> your deliverables! I have no idea how we are here AGAIN ALREADY, alas, here >>>> we be. >>>> >>>> As PTLs, you probably get many pings towards the end of every release >>>> cycle by various parties (marketing, management, journalists, etc) asking >>>> for highlights of what is new and what significant changes are coming in >>>> the new release. By putting them all in the same place it makes them easy >>>> to reference because they get compiled into a pretty website like this from >>>> the last few releases: Stein[1], Train[2]. >>>> >>>> We don't need a fully fledged marketing message, just a few highlights >>>> (3-4 ideally), from each project team. Looking through your release >>>> notes might be a good place to start. >>>> >>>> *The deadline for cycle highlights is the end of the R-5 week [3] (next >>>> week) on March 12th.* >>>> >>>> How To Reminder: >>>> ------------------------- >>>> >>>> Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the >>>> openstack/releases repo like this: >>>> >>>> cycle-highlights: >>>> - Introduced new service to use unused host to mine bitcoin. >>>> >>>> The formatting options for this tag are the same as what you are >>>> probably used to with Reno release notes. >>>> >>>> Also, you can check on the formatting of the output by either