From amy at demarco.com Mon Mar 1 00:08:09 2021 From: amy at demarco.com (Amy Marrich) Date: Sun, 28 Feb 2021 18:08:09 -0600 Subject: [openstack-community] Ovn chassis error possibly networking config error. In-Reply-To: <952196218.884082.1614451027288@mail.yahoo.com> References: <952196218.884082.1614451027288.ref@mail.yahoo.com> <952196218.884082.1614451027288@mail.yahoo.com> Message-ID: Deepesh, Forwarding to the OpenStack Discuss mailing list. Thanks, Amy (spotz) On Sat, Feb 27, 2021 at 12:42 PM Dees wrote: > Hi All, > > Deployed openstack all service yesterday and seemed to be running since > power cycling the openstack today. > > Then deployed and configured VM instance with networking but config > onv-chassis is reporting error, as I am new to openstack (having used it > number of years back) mind guiding where I should be looking any help will > be greatly appreciated. > > > > nova-compute/0* active idle 1 10.141.14.62 > Unit is ready > ntp/0* active idle 10.141.14.62 > 123/udp chrony: Ready > ovn-chassis/0* error idle 10.141.14.62 > hook failed: "config-changed" > > > Deployed VM instance using the following command line. > > > curl > http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img > | \ > openstack image create --public --container-format bare --disk-format > qcow2 \ > --property architecture=x86_64 --property hw_disk_bus=virtio \ > --property hw_vif_model=virtio "focal_x86_64" > > openstack flavor create --ram 512 --disk 4 m1.micro > > openstack network create Pub_Net --external --share --default \ > --provider-network-type vlan --provider-segment 200 > --provider-physical-network physnet1 > > openstack subnet create Pub_Subnet --allocation-pool > start=10.141.40.40,end=10.141.40.62 \ > --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \ > --network Pub_Net > > openstack network create Network1 --internal > openstack subnet create Subnet1 \ > --allocation-pool start=192.168.0.10,end=192.168.0.199 \ > --subnet-range 192.168.0.0/24 \ > --gateway 192.168.0.1 --dns-nameserver 10.0.0.3 \ > --network Network1 > > Kind regards, > Deepesh > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Mar 1 06:33:24 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 1 Mar 2021 07:33:24 +0100 Subject: [release] Release countdown for week R-6 Mar 01 - Mar 05 Message-ID: Development Focus ----------------- Work on libraries should be wrapping up, in preparation for the various library-related deadlines coming up. Now is a good time to make decisions on deferring feature work to the next development cycle in order to be able to focus on finishing already-started feature work. General Information ------------------- We are now getting close to the end of the cycle, and will be gradually freezing feature work on the various deliverables that make up the OpenStack release. This coming week is the deadline for general libraries (except client libraries): their last feature release needs to happen before "Non-client library freeze" on 4 March, 2021. Only bugfixes releases will be allowed beyond this point. 
When requesting those library releases, you can also include the stable/wallaby branching request with the review (as an example, see the "branches" section here: https://opendev.org/openstack/releases/src/branch/master/deliverables/pike/os-brick.yaml#n2 In the next weeks we will have deadlines for: * Client libraries (think python-*client libraries), which need to have their last feature release before "Client library freeze" (11 March, 2021) * Deliverables following a cycle-with-rc model (that would be most services), which observe a Feature freeze on that same date, 11 March, 2021. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. As we are getting to the point of creating stable/wallaby branches, this would be a good point for teams to review membership in their wallaby-stable-maint groups. Once the stable/wallaby branches are cut for a repo, the ability to approve any necessary backports into those branches for wallaby will be limited to the members of that stable team. If there are any questions about stable policy or stable team membership, please reach out in the #openstack-stable channel. Upcoming Deadlines & Dates -------------------------- Cross-project events: - Non-client library freeze: 4 March, 2021 (R-6 week) - Client library freeze: 11 March, 2021 (R-5 week) - Wallaby-3 milestone: 11 March, 2021 (R-5 week) - Cycle Highlights Due: 11 March 2021 (R-5 week) - Wallaby final release: 14 April, 2021 Project-specific events: - Cinder Driver Features Declaration: 12 March, 2021 (R-5 week) Thanks for your attention -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Mon Mar 1 08:03:12 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Mon, 1 Mar 2021 08:03:12 +0000 Subject: [puppet-openstack] stop using the ${pyvers} variable In-Reply-To: References: <77a37088-9f7e-92b6-e196-784ff3168a0e@debian.org> <21a825a7-7976-0735-0962-9b85a3005215@debian.org> , Message-ID: Hello, I don't really mind removing it if what you are saying about Python 4 is true, I don't know so I will take your word for it. I agree with Takashi, that we can remove it but should do it consistenly and then be done with it, just stopping to use it because it takes literally five seconds more to fix a patch seems like something one can live with until then. Remember that we've for years been working on consistency, cleaning up and removing duplication, I wouldn't want to start introducing then after that. 
So yes, let's remove it but require everything up until that point to be consistent, maybe a short timespan to squeeze all those changes into Wallaby though. Is this a change you want to drive Thomas? Best regards and have a good start of the week everybody. Tobias ________________________________ From: Takashi Kajinami Sent: Sunday, February 28, 2021 3:12:42 PM To: Thomas Goirand Cc: OpenStack Discuss Subject: Re: [puppet-openstack] stop using the ${pyvers} variable On Sun, Feb 28, 2021 at 8:22 PM Thomas Goirand > wrote: On 2/28/21 12:10 PM, Takashi Kajinami wrote: > > On Sun, Feb 28, 2021 at 3:32 AM Thomas Goirand > >> wrote: > > Hi, > > On 2/27/21 3:52 PM, Takashi Kajinami wrote: > > I have posted a comment on the said patch but I prefer using pyvers in > > that specific patch because; > > - The change seems to be a backport candidate and using pyvers > helps us > > backport the change > > to older branches like Train which still supports python 2 IIRC. > > Even Rocky is already using python3 in Debian/Ubuntu. The last distro > using the Python 2 version would be Stretch (which is long EOLed) and > Bionic (at this time, people should be moving to Focal, no ?, which IMO > are not targets for backports. > > Therefore, for this specific patch, even if you want to do a backport, > it doesn't make sense. > > Are you planning to do such a backport for the RPM world? > > We still have queens open for puppet-openstack modules. > IIRC rdo rocky is based on CentOS7 and Python2. > Also, I don't really like to see inconsistent implementation caused by > backport > knowing that we don't support python3 in these branches. > Anyway we can consider that when we actually backport the change. I'm a bit surprised that you still care about such an old release as Queens. Is this the release shipped with CentOS 7? RDO Queens anr RDO Rocky are based on CentOS 7. RDO Train supports both CentOS7 and CentOS8 IIRC. In any ways, thanks for letting me know, I have to admit I don't know much about the RPM side of things. In such case, I'm ok to keep the ${pyvers} variable for the CentOS case for a bit longer then, but can we agree when we stop using it? Also IMO, forcing it for Debian/Ubuntu doesn't make sense anymore. IMO we can think about backport separately(we can make any required changes in backport only) so we can get rid of pyvers in master for both CentOS and Ubuntu/Debian. However I prefer to deal with that removal separately and consistently, so that we won't create inconsistent implementation where some relies on pyvers and the others doesn't rely on pyvers. Thanks everyone for participating in this thread, Cheers, Thomas Goirand (zigo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Mar 1 08:53:02 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 1 Mar 2021 08:53:02 +0000 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: On Fri, 26 Feb 2021 at 17:08, Michal Arbet wrote: > > Hi, > > I found that keepalived VIP switch cause TCP connections that refuse to die on host where VIP was before it was switched. > > I filled a bug here -> https://bugs.launchpad.net/kolla-ansible/+bug/1917068 > Fixed here -> https://review.opendev.org/c/openstack/kolla-ansible/+/777772 > Video presentation of bug here -> https://download.kevko.ultimum.cloud/video_debug.mp4 > > I was just curious and wanted to ask here in openstack-discuss : > > If someone already seen this issue in past ? 
> If yes, do you tweak net.ipv4.tcp_retries2 kernel parameter ? > If no, how did you solve this ? Hi Michal, thanks for the investigation here. There is a nice tool, that I found far too late, that helps to help answer questions like this: https://codesearch.opendev.org/ > > Thank you, > Michal Arbet ( kevko ) > From thierry at openstack.org Mon Mar 1 09:11:09 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 1 Mar 2021 10:11:09 +0100 Subject: Moving ara code review to GitHub In-Reply-To: References: Message-ID: <8ff9bb87-45e6-4a16-1d51-02c161b9c21a@openstack.org> Laurent Dumont wrote: > [...] > I know that when I merged something for ARA, I ended up appreciating > Gerrit but there is a definite learning curve that I don't feel was > necessary to the process. That's a great point. I personally think Gerrit is more efficient, but it's definitely different from the PR-based approach which the largest public code forges use (GitHub, Gitlab). Learning Gerrit is totally worth it if you intend to contribute regularly. It's harder to justify for a drive-by contribution, so you might miss those because the drive-by contributor sometimes just won't go through the hassle. So for a project like ARA which is mostly feature-complete, is not super busy and would at this point most likely attract drive-by contributions fixing corner case bugs and adding corner case use cases, I certainly understand your choice. -- Thierry From marios at redhat.com Mon Mar 1 09:46:44 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 1 Mar 2021 11:46:44 +0200 Subject: [TripleO] Moving stable/rocky for *-tripleo-* repos to End of Life OK? In-Reply-To: References: Message-ID: On Fri, Feb 12, 2021 at 5:38 PM Marios Andreou wrote: > > > On Fri, Feb 5, 2021 at 4:40 PM Marios Andreou wrote: > >> Hello all, >> >> it's been ~ 2 months now since my initial mail about $subject [1] and >> just under a month since my last bump on the thread [2] and I haven't heard >> any objections so far. >> >> So I think it's now appropriate to move forward with [3] which tags the >> latest commits to the stable/rocky branch of all tripleo-repos [4] as >> 'rock-eol' (except archived things like instack/tripleo-ui). >> >> Once it merges we will no longer be able to land anything into >> stable/rocky for all tripleo repos and the stable/rocky branch will be >> deleted. >> >> So, last chance! If you object please go and -1 the patch at [3] and/or >> reply here >> >> (explicitly added some folks into cc for attention please) Thanks to Elod, I just updated/posted v2 of the proposal at https://review.opendev.org/c/openstack/releases/+/774244 Harald, Slawek, Tom, o/ please check the comments at https://review.opendev.org/c/openstack/releases/+/774244/1/deliverables/rocky/tripleo-heat-templates.yaml#62 . The question is - do you really need this on stable/rocky in and of itself, or, rather because that branch is still active, and we needed it on the previous, i.e. queens? Basically sanity check again so there is no confusion later ;) do you folks object to us declaring stable/rocky as EOL? thank you! regards, marios > > > bump - there has been some discussion on the proposal at > https://review.opendev.org/c/openstack/releases/+/774244 which is now > resolved. 
> > I just removed my blocking workflow -1 at releases/+/774244 so really > really last chance now ;) > > regards, marios > > > > >> thanks, marios >> >> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019338.html >> [2] >> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019860.html >> [3] https://review.opendev.org/c/openstack/releases/+/774244 >> [4] https://releases.openstack.org/teams/tripleo.html >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.arbet at ultimum.io Mon Mar 1 10:03:13 2021 From: michal.arbet at ultimum.io (Michal Arbet) Date: Mon, 1 Mar 2021 11:03:13 +0100 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: Hi, I really like the tool, thank you. also confirms my idea as this kernel option is also used by other projects where they explicitly mention case with keepalived/VIP switch. Now I definitively think this should be handled also in kolla-ansible. Thanks, Michal Arbet ( kevko ) Dne po 1. 3. 2021 9:53 uživatel Mark Goddard napsal: > On Fri, 26 Feb 2021 at 17:08, Michal Arbet > wrote: > > > > Hi, > > > > I found that keepalived VIP switch cause TCP connections that refuse > to die on host where VIP was before it was switched. > > > > I filled a bug here -> > https://bugs.launchpad.net/kolla-ansible/+bug/1917068 > > Fixed here -> > https://review.opendev.org/c/openstack/kolla-ansible/+/777772 > > Video presentation of bug here -> > https://download.kevko.ultimum.cloud/video_debug.mp4 > > > > I was just curious and wanted to ask here in openstack-discuss : > > > > If someone already seen this issue in past ? > > If yes, do you tweak net.ipv4.tcp_retries2 kernel parameter ? > > If no, how did you solve this ? > > Hi Michal, thanks for the investigation here. There is a nice tool, > that I found far too late, that helps to help answer questions like > this: https://codesearch.opendev.org/ > > > > > Thank you, > > Michal Arbet ( kevko ) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Mon Mar 1 10:39:15 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 1 Mar 2021 10:39:15 +0000 Subject: [openstack-community] Ovn chassis error possibly networking config error. In-Reply-To: References: <952196218.884082.1614451027288.ref@mail.yahoo.com> <952196218.884082.1614451027288@mail.yahoo.com> Message-ID: Hi, Few questions inline On Mon, Mar 1, 2021 at 12:09 AM Amy Marrich wrote: > > Deepesh, > > Forwarding to the OpenStack Discuss mailing list. > > Thanks, > > Amy (spotz) > > On Sat, Feb 27, 2021 at 12:42 PM Dees wrote: >> >> Hi All, >> >> Deployed openstack all service yesterday and seemed to be running since power cycling the openstack today. >> What did you use to deploy OpenStack ? Based on the command below I don't recognize that output. >> Then deployed and configured VM instance with networking but config onv-chassis is reporting error, as I am new to openstack (having used it number of years back) mind guiding where I should be looking any help will be greatly appreciated. >> >> >> >> nova-compute/0* active idle 1 10.141.14.62 Unit is ready >> ntp/0* active idle 10.141.14.62 123/udp chrony: Ready >> ovn-chassis/0* error idle 10.141.14.62 hook failed: "config-changed" >> I do not recognize this output or that error in particular, but I assume that ovn-chassis is the name of the process of the ovn-controller running on the node ? 
If so, the configuration of ovn-controller should be present in the local OVSDB instance, can you please paste the output of the following command: $ sudo ovs-vsctl list Open_VSwitch . Also, do you have any logs related to that process ? >> >> Deployed VM instance using the following command line. >> >> >> curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img | \ >> openstack image create --public --container-format bare --disk-format qcow2 \ >> --property architecture=x86_64 --property hw_disk_bus=virtio \ >> --property hw_vif_model=virtio "focal_x86_64" >> >> openstack flavor create --ram 512 --disk 4 m1.micro >> >> openstack network create Pub_Net --external --share --default \ >> --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1 >> >> openstack subnet create Pub_Subnet --allocation-pool start=10.141.40.40,end=10.141.40.62 \ >> --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \ >> --network Pub_Net >> >> openstack network create Network1 --internal >> openstack subnet create Subnet1 \ >> --allocation-pool start=192.168.0.10,end=192.168.0.199 \ >> --subnet-range 192.168.0.0/24 \ >> --gateway 192.168.0.1 --dns-nameserver 10.0.0.3 \ >> --network Network1 >> >> Kind regards, >> Deepesh >> _______________________________________________ >> Community mailing list >> Community at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/community From bcafarel at redhat.com Mon Mar 1 11:58:18 2021 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 1 Mar 2021 12:58:18 +0100 Subject: [neutron] Bug deputy report (week starting on 2021-02-22) Message-ID: Hello neutrinos, this is our first new bug deputy rotation for 2021, overall a quiet week with most bugs already fixed or good progress. 
DVR folks may be interested in the discussion ongoing on the single High bug, also I left one bug in Opinion on OVS agent main loop

Critical
* neutron.tests.unit.common.test_utils.TestThrottler test_throttler is failing - https://bugs.launchpad.net/neutron/+bug/1916572
  Patch was quickly merged: https://review.opendev.org/c/openstack/neutron/+/777072

High
* [dvr] bound port permanent arp entries never deleted - https://bugs.launchpad.net/neutron/+bug/1916761
  Introduced with https://review.opendev.org/q/I538aa6d68fbb5ff8431f82ba76601ee34c1bb181
  Lengthy discussion in the bug itself and suggested fix https://review.opendev.org/c/openstack/neutron/+/777616

Medium
* [OVN][QOS] OVN DB QoS rule is not removed when a FIP is dissasociated - https://bugs.launchpad.net/neutron/+bug/1916470
  Patch by ralonsoh merged https://review.opendev.org/c/openstack/neutron/+/776916
* A privsep daemon spawned by neutron-openvswitch-agent hangs when debug logging is enabled - https://bugs.launchpad.net/neutron/+bug/1896734
  Actually reported earlier on charm and oslo, ralonsoh looking into it from neutron side
* StaleDataError: DELETE statement on table 'standardattributes' expected to delete 2 row(s); 1 were matched - https://bugs.launchpad.net/neutron/+bug/1916889
  Spotted by Liu while looking at dvr bug, patch sent to neutron-lib https://review.opendev.org/c/openstack/neutron-lib/+/777581

Low
* ovn-octavia-provider can attempt to write protocol=None to OVSDB - https://bugs.launchpad.net/neutron/+bug/1916646
  This appears in some functional test results, otherwiseguy sent https://review.opendev.org/c/openstack/ovn-octavia-provider/+/777201

Opinion
* use deepcopy in function rpc_loop of ovs-agent - https://bugs.launchpad.net/neutron/+bug/1916761
  My hunch is that we are fine here, but please chime in other opinions

Passing the baton to our PTL for next week!

-- 
Bernard Cafarelli
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ildiko.vancsa at gmail.com Mon Mar 1 12:16:14 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Mon, 1 Mar 2021 13:16:14 +0100
Subject: Input for ETSI Hardware Platform Capability Registry
Message-ID: <9A1BC726-FD91-4A37-B23E-8F146E8A7DE7@gmail.com>

Hi OpenStack Community,

I'm reaching out to you about a call for action that I received from the ETSI NFV group[1]. The group is currently working on a Hardware Platform Capability Registry[2] which is designed to contain information about hardware capabilities that a cloud platform offers to Virtual Network Functions (VNFs). They will use the entries of the registry as part of VNF Descriptors (VNFD).

They are looking for information about hardware capabilities exposed through OpenStack in the following categories: CPU, memory, storage, network and logical node. They are looking for basic functions as well as information about accelerators. You can find further information about the registry on their wiki page[2].

I'm looking for input to submit to the registry or feedback on the approach to help this work item at ETSI. Please reach out to me or reply to this thread if your project exposes any information about the underlying hardware, which should be added to the above registry.

Please let me know if you have any questions.
Thanks and Best Regards, Ildikó [1] https://www.etsi.org/technologies/nfv [2] https://nfvwiki.etsi.org/index.php?title=Hardware_Platform_Capability_Registry ——— Ildikó Váncsa Ecosystem Technical Lead Open Infrastructure Foundation From tpb at dyncloud.net Mon Mar 1 12:21:51 2021 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 1 Mar 2021 07:21:51 -0500 Subject: [TripleO] Moving stable/rocky for *-tripleo-* repos to End of Life OK? In-Reply-To: References: Message-ID: <20210301122151.3o5cu4nloststneb@barron.net> On 01/03/21 11:46 +0200, Marios Andreou wrote: >On Fri, Feb 12, 2021 at 5:38 PM Marios Andreou wrote: > >> >> >> On Fri, Feb 5, 2021 at 4:40 PM Marios Andreou wrote: >> >>> Hello all, >>> >>> it's been ~ 2 months now since my initial mail about $subject [1] and >>> just under a month since my last bump on the thread [2] and I haven't heard >>> any objections so far. >>> >>> So I think it's now appropriate to move forward with [3] which tags the >>> latest commits to the stable/rocky branch of all tripleo-repos [4] as >>> 'rock-eol' (except archived things like instack/tripleo-ui). >>> >>> Once it merges we will no longer be able to land anything into >>> stable/rocky for all tripleo repos and the stable/rocky branch will be >>> deleted. >>> >>> So, last chance! If you object please go and -1 the patch at [3] and/or >>> reply here >>> >>> > >(explicitly added some folks into cc for attention please) > >Thanks to Elod, I just updated/posted v2 of the proposal at >https://review.opendev.org/c/openstack/releases/+/774244 > >Harald, Slawek, Tom, o/ please check the comments at >https://review.opendev.org/c/openstack/releases/+/774244/1/deliverables/rocky/tripleo-heat-templates.yaml#62 >. > >The question is - do you really need this on stable/rocky in and of itself, >or, rather because that branch is still active, and we needed it on the >previous, i.e. queens? > >Basically sanity check again so there is no confusion later ;) do you folks >object to us declaring stable/rocky as EOL? I have no objection to declaring stable/rocky as EOL - the change I proposed there was part of a series of backports from master back to stable/queens. I posted it to stable/rocky along the way since that branch was not EOL and I didn't want to leave a gap in the series of backports. > >thank you! > >regards, marios > > > > >> >> >> bump - there has been some discussion on the proposal at >> https://review.opendev.org/c/openstack/releases/+/774244 which is now >> resolved. >> >> I just removed my blocking workflow -1 at releases/+/774244 so really >> really last chance now ;) >> >> regards, marios >> >> >> >> >>> thanks, marios >>> >>> >>> [1] >>> http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019338.html >>> [2] >>> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019860.html >>> [3] https://review.opendev.org/c/openstack/releases/+/774244 >>> [4] https://releases.openstack.org/teams/tripleo.html >>> >> From marios at redhat.com Mon Mar 1 12:35:32 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 1 Mar 2021 14:35:32 +0200 Subject: [TripleO] Moving stable/rocky for *-tripleo-* repos to End of Life OK? 
In-Reply-To: <20210301122151.3o5cu4nloststneb@barron.net> References: <20210301122151.3o5cu4nloststneb@barron.net> Message-ID: On Mon, Mar 1, 2021 at 2:22 PM Tom Barron wrote: > On 01/03/21 11:46 +0200, Marios Andreou wrote: > >On Fri, Feb 12, 2021 at 5:38 PM Marios Andreou wrote: > > > >> > >> > >> On Fri, Feb 5, 2021 at 4:40 PM Marios Andreou > wrote: > >> > >>> Hello all, > >>> > >>> it's been ~ 2 months now since my initial mail about $subject [1] and > >>> just under a month since my last bump on the thread [2] and I haven't > heard > >>> any objections so far. > >>> > >>> So I think it's now appropriate to move forward with [3] which tags the > >>> latest commits to the stable/rocky branch of all tripleo-repos [4] as > >>> 'rock-eol' (except archived things like instack/tripleo-ui). > >>> > >>> Once it merges we will no longer be able to land anything into > >>> stable/rocky for all tripleo repos and the stable/rocky branch will be > >>> deleted. > >>> > >>> So, last chance! If you object please go and -1 the patch at [3] and/or > >>> reply here > >>> > >>> > > > >(explicitly added some folks into cc for attention please) > > > >Thanks to Elod, I just updated/posted v2 of the proposal at > >https://review.opendev.org/c/openstack/releases/+/774244 > > > >Harald, Slawek, Tom, o/ please check the comments at > > > https://review.opendev.org/c/openstack/releases/+/774244/1/deliverables/rocky/tripleo-heat-templates.yaml#62 > >. > > > >The question is - do you really need this on stable/rocky in and of > itself, > >or, rather because that branch is still active, and we needed it on the > >previous, i.e. queens? > > > >Basically sanity check again so there is no confusion later ;) do you > folks > >object to us declaring stable/rocky as EOL? > > I have no objection to declaring stable/rocky as EOL - the change I > proposed there was part of a series of backports from master back to > stable/queens. I posted it to stable/rocky along the way since that > branch was not EOL and I didn't want to leave a gap in the series of > backports. > > ACK thanks for confirming Tom > > > >thank you! > > > >regards, marios > > > > > > > > > >> > >> > >> bump - there has been some discussion on the proposal at > >> https://review.opendev.org/c/openstack/releases/+/774244 which is now > >> resolved. > >> > >> I just removed my blocking workflow -1 at releases/+/774244 so really > >> really last chance now ;) > >> > >> regards, marios > >> > >> > >> > >> > >>> thanks, marios > >>> > >>> > >>> [1] > >>> > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019338.html > >>> [2] > >>> > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019860.html > >>> [3] https://review.opendev.org/c/openstack/releases/+/774244 > >>> [4] https://releases.openstack.org/teams/tripleo.html > >>> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Mar 1 13:08:51 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 1 Mar 2021 14:08:51 +0100 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: On Mon, Mar 1, 2021 at 11:06 AM Michal Arbet wrote: > I really like the tool, thank you. also confirms my idea as this kernel option is also used by other projects where they explicitly mention case with keepalived/VIP switch. FWIW, I can see only StarlingX and Airship, none of general-purpose tools seem to mention customising this variable. 
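For anyone who wants to experiment with it, the tuning under discussion boils down to a single sysctl. A minimal sketch, assuming a sysctl.d drop-in is acceptable on the affected hosts (the file name and the value 5 are purely illustrative, the kernel default is 15, so pick a value deliberately):

  # /etc/sysctl.d/90-tcp-retries2.conf (hypothetical file name)
  # Lower the number of unacknowledged retransmissions TCP attempts before
  # giving up, so connections left half-open after the VIP moves away from
  # this host die sooner.
  net.ipv4.tcp_retries2 = 5

  # apply without a reboot
  sysctl --system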
-yoctozepto

From balazs.gibizer at est.tech Mon Mar 1 13:31:20 2021
From: balazs.gibizer at est.tech (Balazs Gibizer)
Date: Mon, 01 Mar 2021 14:31:20 +0100
Subject: [nova][placement] Wallaby release
Message-ID: <8KLAPQ.YM5K5P44TRVX2@est.tech>

Hi,

We are getting close to the Wallaby release, so I created a tracking etherpad[1] with the schedule and TODOs.

One thing that I want to highlight is that we will hit Feature Freeze on 11th of March. As the timeframe between FF and RC1 is short, I'm not planning on FFEs. Patches that are approved before 11 March EOB can be rechecked or rebased if needed and then re-approved. If you have a patch that is really close but not approved before the deadline, and you think there are two cores willing to review it before RC1, then please send a mail to the ML with the [nova][FFE] subject prefix no later than 16th of March EOB.

Cheers,
gibi

[1] https://etherpad.opendev.org/p/nova-wallaby-rc-potential

From arne.wiebalck at cern.ch Mon Mar 1 13:41:17 2021
From: arne.wiebalck at cern.ch (Arne Wiebalck)
Date: Mon, 1 Mar 2021 14:41:17 +0100
Subject: [baremetal-sig][ironic] Tue Mar 9, 2021, 2pm UTC: PTG input and Ironic Prometheus Exporter
Message-ID: <9784dcfd-5cf3-5d0c-757b-858ba0481382@cern.ch>

Dear all,

The Bare Metal SIG will meet next week Tue Mar 9, 2021, at 2pm UTC on zoom. There will be two main points on the agenda:

- A "topic-of-the-day" presentation/demo by Iury Gregory (iurygregory) on 'The Ironic Prometheus Exporter', and
- Gathering (operator) input for the Ironic upstream team for the upcoming PTG: what are the pain points, what features should be added?

The second item is also why this mail comes a week earlier than usual. Think about it, bring your items and/or add them to https://etherpad.opendev.org/p/bare-metal-sig

Everyone is welcome!

Cheers,
Arne

From iurygregory at gmail.com Mon Mar 1 13:43:23 2021
From: iurygregory at gmail.com (Iury Gregory)
Date: Mon, 1 Mar 2021 14:43:23 +0100
Subject: [ironic] replacing the review priority list on the etherpad with hashtag
Message-ID:

Hello Ironicers!

If you were present in the last weekly meeting, you are probably aware that we are using hashtags to track the patches that are in the priority list. If you are part of the ironic core group you can edit hashtags on any patch; if you are not core you can only add hashtags to the patches where you are the Owner.

In the gerrit UI you should be able to see the Hashtags field and an ADD HASHTAG button. The hashtag we are using is *ironic-week-prio*, so if you have a patch that is in good shape and ready for review you can add the hashtag (just click on the ADD HASHTAG button, write the hashtag, then click SAVE).

I will push a patch adding this information to our docs. I'm willing to do a session (or maybe two, since we have people in different timezones) to explain. I will create a doodle and add it to this thread.

Thank you!

-- 
*Att[]'s Iury Gregory Melo Ferreira*
*MSc in Computer Science at UFCG*
*Part of the puppet-manager-core team in OpenStack*
*Software Engineer at Red Hat Czech*
*Social*: https://www.linkedin.com/in/iurygregory
*E-mail: iurygregory at gmail.com *
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From aschultz at redhat.com Mon Mar 1 15:49:53 2021 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 1 Mar 2021 08:49:53 -0700 Subject: [puppet-openstack] stop using the ${pyvers} variable In-Reply-To: <77a37088-9f7e-92b6-e196-784ff3168a0e@debian.org> References: <77a37088-9f7e-92b6-e196-784ff3168a0e@debian.org> Message-ID: On Sat, Feb 27, 2021 at 2:29 AM Thomas Goirand wrote: > Hi, > > Using the ${pyvers} so we can switch be between Python versions made > sense 1 or 2 years ago. However, I'm in the opinion that we should stop > using that, and switch to using python3 everywhere directly whenever > possible. Though Tobias seems to not agree (see [1]), so I'm raising the > topic in the list so we can discuss this together. > > It would only make sense to remove it going forward. I would not support dropping it for any stable branches. It's something we could work on in X unless someone wants to drive it for W. > Your thoughts? > Cheers, > > Thomas Goirand (zigo) > > [1] https://review.opendev.org/c/openstack/puppet-swift/+/777564 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.arbet at ultimum.io Mon Mar 1 16:02:19 2021 From: michal.arbet at ultimum.io (Michal Arbet) Date: Mon, 1 Mar 2021 17:02:19 +0100 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: Well, but that doesn't mean it's right that they don't have it configured. If you google "net.ipv4.tcp_retries2 keepalive" and will read results, you will see that this option is widely used I think we have to discuss option value (not fix itself)...to find some golden middle .. https://www.programmersought.com/article/724162740/ https://knowledge.broadcom.com/external/article/142410/tuning-tcp-keepalive-for-inprogress-task.html https://www.ibm.com/support/knowledgecenter/ko/SSEPGG_9.7.0/com.ibm.db2.luw.admin.ha.doc/doc/t0058764.html?view=embed https://www.suse.com/support/kb/doc/?id=000019293 https://programmer.group/kubeadm-build-highly-available-kubernetes-1.15.1.html po 1. 3. 2021 v 14:09 odesílatel Radosław Piliszek < radoslaw.piliszek at gmail.com> napsal: > On Mon, Mar 1, 2021 at 11:06 AM Michal Arbet > wrote: > > I really like the tool, thank you. also confirms my idea as this kernel > option is also used by other projects where they explicitly mention case > with keepalived/VIP switch. > > FWIW, I can see only StarlingX and Airship, none of general-purpose > tools seem to mention customising this variable. > > -yoctozepto > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Mon Mar 1 16:07:26 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 1 Mar 2021 16:07:26 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack Message-ID: Hi all, As part of the Victoria PTG [0] the Neutron community agreed upon switching the default backend in Devstack to OVN. A lot of work has been done since, from porting the OVN devstack module to the DevStack tree, refactoring the DevStack module to install OVN from distro packages, implementing features to close the parity gap with ML2/OVS, fixing issues with tests and distros, etc... We are now very close to being able to make the switch and we've thought about sending this email to the broader community to raise awareness about this change as well as bring more attention to the patches that are current on review. 
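For anyone who wants to try the new default before it flips, or to keep comparing it against ML2/OVS locally, a minimal local.conf sketch is below. OVN_BUILD_FROM_SOURCE is the knob mentioned further down in this mail; the other variable and service names are an assumption based on the in-tree OVN module, so please double-check them against the DevStack documentation:

  [[local|localrc]]
  # assumed in-tree OVN settings, verify against the DevStack OVN guide
  Q_AGENT=ovn
  Q_ML2_PLUGIN_MECHANISM_DRIVERS=ovn,logger
  Q_ML2_TENANT_NETWORK_TYPE=geneve
  enable_service ovn-northd ovn-controller q-ovn-metadata-agent
  # optional: build OVN from source instead of using distro packages
  OVN_BUILD_FROM_SOURCE=True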
Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is discontinued and/or not supported anymore. The ML2/OVS driver is still going to be developed and maintained by the upstream Neutron community.

Below is a per-project explanation with relevant links and issues of where we stand with this work right now:

* Keystone:

Everything should be good for Keystone, the gate is happy with the changes. Here is the test patch:
https://review.opendev.org/c/openstack/keystone/+/777963

* Glance:

Everything should be good for Glance, the gate is happy with the changes. Here is the test patch:
https://review.opendev.org/c/openstack/glance/+/748390

* Swift:

Everything should be good for Swift, the gate is happy with the changes. Here is the test patch:
https://review.opendev.org/c/openstack/swift/+/748403

* Ironic:

Since chainloading iPXE by the OVN built-in DHCP server is work in progress, we've changed most of the Ironic jobs to explicitly enable ML2/OVS and everything is merged, so we should be good for Ironic too. Here is the test patch:
https://review.opendev.org/c/openstack/ironic/+/748405

* Cinder:

Cinder is almost complete. There's one test failure in the "tempest-slow-py3" job run on the "test_port_security_macspoofing_port" test.

This failure is due to a bug in core OVN [1]. This bug has already been fixed upstream [2] and the fix has been backported down to the branch-20.03 [3] of the OVN project. However, since we install OVN from packages, we are currently waiting for this fix to be included in the packages for Ubuntu Focal (it's based on OVN 20.03). I already contacted the package maintainer, who has been very supportive of this work and will work on the package update, but he maintains a handful of backports in that package which are not yet included in OVN 20.03 upstream, and he's now working with the core OVN community [4] to include them in the branch first and then create a new package. Hopefully this will happen soon.

But for now we have a few options for moving on with this issue:

1- Wait for the new package version
2- Mark the test as unstable until we get the new package version
3- Compile OVN from source instead of installing it from packages (OVN_BUILD_FROM_SOURCE=True in local.conf)

What do you think about it?

Here is the test patch for Cinder:
https://review.opendev.org/c/openstack/cinder/+/748227

* Nova:

There are a few patches waiting for review for Nova, which are:

1- Adapting the live migration scripts to work with ML2/OVN: Basically the scripts were trying to stop the Neutron agent (q-agt) process, which is not part of an ML2/OVN deployment. The patch changes the code to check if that system unit exists before trying to stop it.

Patch: https://review.opendev.org/c/openstack/nova/+/776419

2- Explicitly set grenade job to ML2/OVS: This is a temporary change which can be removed one release cycle after we switch DevStack to ML2/OVN. Grenade will test updating from the release version to the master branch but, since the default of the released version is not ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job is not supported.

Patch: https://review.opendev.org/c/openstack/nova/+/776934

3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS minimum bandwidth feature, which is not yet supported by ML2/OVN [5][6], therefore we are temporarily enabling ML2/OVS for this job until that feature lands in core OVN.
Patch: https://review.opendev.org/c/openstack/nova/+/776944 I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these changes and he suggested keeping all the Nova jobs on ML2/OVS for now because he feels like a change in the default network driver a few weeks prior to the upstream code freeze can be concerning. We do not know yet precisely when we are changing the default due to the current patches we need to get merged but, if this is a shared feeling among the Nova community I can work on enabling ML2/OVS on all jobs in Nova until we get a new release in OpenStack. Here's the test patch for Nova: https://review.opendev.org/c/openstack/nova/+/776945 * DevStack: And this is the final patch that will make this all happen: https://review.opendev.org/c/openstack/devstack/+/735097 It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been a long and bumpy road to get to this point and I would like to say thanks to everyone involved so far and everyone that read the whole email, please let me know your thoughts. [0] https://etherpad.opendev.org/p/neutron-victoria-ptg [1] https://bugs.launchpad.net/tempest/+bug/1728886 [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html Cheers, Lucas From kennelson11 at gmail.com Mon Mar 1 17:44:16 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 1 Mar 2021 09:44:16 -0800 Subject: [all] April 2021 PTG Dates & Registration Message-ID: Hello Everyone! I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. Can't wait to see you all there! -the Kendalls (diablo_rojo & wendallkaters) [1] https://www.openstack.org/ptg/ [2] https://april2021-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon Mar 1 18:07:35 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 1 Mar 2021 19:07:35 +0100 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: Hi Kendall & Kendall, While registering, I noticed the event date in Eventbrite is set in a way that the calendar invite is for a one-hour event on April 19. Could this please be corrected? Many thanks, Pierre Riteau (priteau) On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: > > Hello Everyone! > > I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! > > PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! 
> > Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. > > Can't wait to see you all there! > > -the Kendalls (diablo_rojo & wendallkaters) > > [1] https://www.openstack.org/ptg/ > [2] https://april2021-ptg.eventbrite.com From kendall at openstack.org Mon Mar 1 18:08:01 2021 From: kendall at openstack.org (Kendall Waters) Date: Mon, 1 Mar 2021 12:08:01 -0600 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: <592A8D48-531B-421F-9628-EA84B3581643@openstack.org> Oops! That could be my mistake. Thanks for flagging. I will take a look at it. Cheers, Kendall Kendall Waters Perez Marketing & Events Coordinator Open Infrastructure Foundation > On Mar 1, 2021, at 12:07 PM, Pierre Riteau wrote: > > Hi Kendall & Kendall, > > While registering, I noticed the event date in Eventbrite is set in a > way that the calendar invite is for a one-hour event on April 19. > Could this please be corrected? > > Many thanks, > Pierre Riteau (priteau) > > On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: >> >> Hello Everyone! >> >> I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! >> >> PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! >> >> Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. >> >> Can't wait to see you all there! >> >> -the Kendalls (diablo_rojo & wendallkaters) >> >> [1] https://www.openstack.org/ptg/ >> [2] https://april2021-ptg.eventbrite.com From smooney at redhat.com Mon Mar 1 18:10:54 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 01 Mar 2021 18:10:54 +0000 Subject: VIP switch causing connections that refuse to die In-Reply-To: References: Message-ID: <458239bc35bbc1471a683a2d503beb744d8a1a97.camel@redhat.com> On Mon, 2021-03-01 at 17:02 +0100, Michal Arbet wrote: > Well, but that doesn't mean it's right that they don't have it configured. > If you google "net.ipv4.tcp_retries2 keepalive" and will read results, you > will see that this option is widely used this is something that operators can fix themselve externally however. im not against haveign kolla or other tools be able to configur it automticlly persay but its not kolla-ansibles job to configure every possible tuneing. this one might make sense to do by default or optionally but host config is largely out of scope of kolla ansible. in ooo where that is ment to handel all host config as well as openstack configuration or other tools where that is expliclty in socpe may also want to wtwee it but likely it shoudl be configurable. > > I think we have to discuss option value (not fix itself)...to find some > golden middle .. > > https://www.programmersought.com/article/724162740/ > https://knowledge.broadcom.com/external/article/142410/tuning-tcp-keepalive-for-inprogress-task.html > https://www.ibm.com/support/knowledgecenter/ko/SSEPGG_9.7.0/com.ibm.db2.luw.admin.ha.doc/doc/t0058764.html?view=embed > https://www.suse.com/support/kb/doc/?id=000019293 > https://programmer.group/kubeadm-build-highly-available-kubernetes-1.15.1.html > > po 1. 3. 
2021 v 14:09 odesílatel Radosław Piliszek < > radoslaw.piliszek at gmail.com> napsal: > > > On Mon, Mar 1, 2021 at 11:06 AM Michal Arbet > > wrote: > > > I really like the tool, thank you. also confirms my idea as this kernel > > option is also used by other projects where they explicitly mention case > > with keepalived/VIP switch. > > > > FWIW, I can see only StarlingX and Airship, none of general-purpose > > tools seem to mention customising this variable. > > > > -yoctozepto > > From kendall at openstack.org Mon Mar 1 18:10:27 2021 From: kendall at openstack.org (Kendall Waters) Date: Mon, 1 Mar 2021 12:10:27 -0600 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: Hi Pierre, Do you mind registering again and seeing if the problem is fixed? And if not, do you mind sending a screenshot so I can see what is going on? Thanks! Kendall Kendall Waters Perez Marketing & Events Coordinator Open Infrastructure Foundation > On Mar 1, 2021, at 12:07 PM, Pierre Riteau wrote: > > Hi Kendall & Kendall, > > While registering, I noticed the event date in Eventbrite is set in a > way that the calendar invite is for a one-hour event on April 19. > Could this please be corrected? > > Many thanks, > Pierre Riteau (priteau) > > On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: >> >> Hello Everyone! >> >> I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! >> >> PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! >> >> Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. >> >> Can't wait to see you all there! >> >> -the Kendalls (diablo_rojo & wendallkaters) >> >> [1] https://www.openstack.org/ptg/ >> [2] https://april2021-ptg.eventbrite.com From smooney at redhat.com Mon Mar 1 18:24:21 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 01 Mar 2021 18:24:21 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > Hi all, > > As part of the Victoria PTG [0] the Neutron community agreed upon > switching the default backend in Devstack to OVN. A lot of work has > been done since, from porting the OVN devstack module to the DevStack > tree, refactoring the DevStack module to install OVN from distro > packages, implementing features to close the parity gap with ML2/OVS, > fixing issues with tests and distros, etc... > > We are now very close to being able to make the switch and we've > thought about sending this email to the broader community to raise > awareness about this change as well as bring more attention to the > patches that are current on review. > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > discontinued and/or not supported anymore. The ML2/OVS driver is still > going to be developed and maintained by the upstream Neutron > community. can we ensure that this does not happen until the xena release. in generall i think its ok to change the default but not this late in the cycle. 
i would also like to ensure we keep at least one non ovn based multi node job in nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. right now the event/neutorn interaction is not the same during move operations. > > Below is a e per project explanation with relevant links and issues of > where we stand with this work right now: > > * Keystone: > > Everything should be good for Keystone, the gate is happy with the > changes. Here is the test patch: > https://review.opendev.org/c/openstack/keystone/+/777963 > > * Glance: > > Everything should be good for Glace, the gate is happy with the > changes. Here is the test patch: > https://review.opendev.org/c/openstack/glance/+/748390 > > * Swift: > > Everything should be good for Swift, the gate is happy with the > changes. Here is the test patch: > https://review.opendev.org/c/openstack/swift/+/748403 > > * Ironic: > > Since chainloading iPXE by the OVN built-in DHCP server is work in > progress, we've changed most of the Ironic jobs to explicitly enable > ML2/OVS and everything is merged, so we should be good for Ironic too. > Here is the test patch: > https://review.opendev.org/c/openstack/ironic/+/748405 > > * Cinder: > > Cinder is almost complete. There's one test failure in the > "tempest-slow-py3" job run on the > "test_port_security_macspoofing_port" test. > > This failure is due to a bug in core OVN [1]. This bug has already > been fixed upstream [2] and the fix has been backported down to the > branch-20.03 [3] of the OVN project. However, since we install OVN > from packages we are currently waiting for this fix to be included in > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > contacted the package maintainer which has been very supportive of > this work and will work on the package update, but he maintain a > handful of backports in that package which is not yet included in OVN > 20.03 upstream and he's now working with the core OVN community [4] to > include it first in the branch and then create a new package for it. > Hopefully this will happen soon. > > But for now we have a few options moving on with this issue: > > 1- Wait for the new package version > 2- Mark the test as unstable until we get the new package version > 3- Compile OVN from source instead of installing it from packages > (OVN_BUILD_FROM_SOURCE=True in local.conf) i dont think we should default to ovn untill a souce build is not required. compiling form souce while not supper expensice still adds time to the job execution and im not sure we should be paying that cost on every devstack job run. we could maybe compile it once and bake the package into the image or host it on a mirror but i think we should avoid this option if we have alternitives. > > What do you think about it ? > > Here is the test patch for Cinder: > https://review.opendev.org/c/openstack/cinder/+/748227 > > * Nova: > > There are a few patches waiting for review for Nova, which are: > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > the scripts were trying to stop the Neutron agent (q-agt) process > which is not part of an ML2/OVN deployment. The patch changes the code > to check if that system unit exists before trying to stop it. > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > which can be removed one release cycle after we switch DevStack to > ML2/OVN. 
Grenade will test updating from the release version to the > master branch but, since the default of the released version is not > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > is not supported. > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > therefore we are temporarily enabling ML2/OVS for this job until that > feature lands in core OVN. > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > because he feels like a change in the default network driver a few > weeks prior to the upstream code freeze can be concerning. We do not > know yet precisely when we are changing the default due to the current > patches we need to get merged but, if this is a shared feeling among > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > until we get a new release in OpenStack. yep this is still my view. i would suggest we do the work required in the repos but not merge it until the xena release is open. thats technically at RC1 so march 25th i think we can safely do the swich after that but i would not change the defualt in any project before then. > > Here's the test patch for Nova: > https://review.opendev.org/c/openstack/nova/+/776945 > > * DevStack: > > And this is the final patch that will make this all happen: > https://review.opendev.org/c/openstack/devstack/+/735097 > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > a long and bumpy road to get to this point and I would like to say > thanks to everyone involved so far and everyone that read the whole > email, please let me know your thoughts. thanks for working on this. > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > Cheers, > Lucas > From skaplons at redhat.com Mon Mar 1 19:25:42 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 1 Mar 2021 20:25:42 +0100 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: <20210301192542.emodvk55xnf4h447@p1.localdomain> Hi, On Mon, Mar 01, 2021 at 12:10:27PM -0600, Kendall Waters wrote: > Hi Pierre, > > Do you mind registering again and seeing if the problem is fixed? And if not, do you mind sending a screenshot so I can see what is going on? Yes, it's fine. I see Date as Mon, Apr 19, 2021 8:00 AM - Fri, Apr 23, 2021 4:00 PM CDT at the registration summary. > > Thanks! > Kendall > > Kendall Waters Perez > Marketing & Events Coordinator > Open Infrastructure Foundation > > > On Mar 1, 2021, at 12:07 PM, Pierre Riteau wrote: > > > > Hi Kendall & Kendall, > > > > While registering, I noticed the event date in Eventbrite is set in a > > way that the calendar invite is for a one-hour event on April 19. > > Could this please be corrected? 
> > > > Many thanks, > > Pierre Riteau (priteau) > > > > On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: > >> > >> Hello Everyone! > >> > >> I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! > >> > >> PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! > >> > >> Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. > >> > >> Can't wait to see you all there! > >> > >> -the Kendalls (diablo_rojo & wendallkaters) > >> > >> [1] https://www.openstack.org/ptg/ > >> [2] https://april2021-ptg.eventbrite.com > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From pierre at stackhpc.com Mon Mar 1 19:41:06 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 1 Mar 2021 20:41:06 +0100 Subject: [all] April 2021 PTG Dates & Registration In-Reply-To: References: Message-ID: No need to register again, I can see the updated dates online and it even works when I click on the calendar event creation link in my order confirmation. Thanks for fixing it so quickly! On Mon, 1 Mar 2021 at 19:11, Kendall Waters wrote: > > Hi Pierre, > > Do you mind registering again and seeing if the problem is fixed? And if not, do you mind sending a screenshot so I can see what is going on? > > Thanks! > Kendall > > Kendall Waters Perez > Marketing & Events Coordinator > Open Infrastructure Foundation > > > On Mar 1, 2021, at 12:07 PM, Pierre Riteau wrote: > > > > Hi Kendall & Kendall, > > > > While registering, I noticed the event date in Eventbrite is set in a > > way that the calendar invite is for a one-hour event on April 19. > > Could this please be corrected? > > > > Many thanks, > > Pierre Riteau (priteau) > > > > On Mon, 1 Mar 2021 at 18:46, Kendall Nelson wrote: > >> > >> Hello Everyone! > >> > >> I'm sure you all have been anxiously awaiting the announcement of the dates for the next virtual PTG[1]! The PTG will take place April 19-23, 2021! > >> > >> PTG registration is now open[2]. Like last time, it is free, but we will again be using it to communicate details about the event (schedules, passwords, etc), so please take two minutes to register! > >> > >> Early next week we will send out info to mailing lists about signing up teams. Also, the same as last time, we will have an ethercalc signup and a survey to gather some other data about your team. > >> > >> Can't wait to see you all there! > >> > >> -the Kendalls (diablo_rojo & wendallkaters) > >> > >> [1] https://www.openstack.org/ptg/ > >> [2] https://april2021-ptg.eventbrite.com > From eacosta at uesc.br Mon Mar 1 19:46:45 2021 From: eacosta at uesc.br (Eduardo Almeida Costa) Date: Mon, 1 Mar 2021 16:46:45 -0300 Subject: Question about Ubuntu Server Message-ID: Hello everybody. At my workplace, we will migrate from CentOS to Ubuntu Server. However, reading about it on forums and official documentation, it was not clear to me which version of Ubuntu Server is best for the production medium, 18.04 or 20.04. Can someone tell me what better version I can implement for production servers? 
My best regards. Eduardo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon Mar 1 19:52:29 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 1 Mar 2021 20:52:29 +0100 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: It will depend on the OpenStack version you will run in production. Train - https://governance.openstack.org/tc/reference/runtimes/train.html Ussuri - https://governance.openstack.org/tc/reference/runtimes/ussuri.html Victoria - https://governance.openstack.org/tc/reference/runtimes/victoria.html Em seg., 1 de mar. de 2021 às 20:48, Eduardo Almeida Costa escreveu: > Hello everybody. > > At my workplace, we will migrate from CentOS to Ubuntu Server. > > However, reading about it on forums and official documentation, it was not > clear to me which version of Ubuntu Server is best for the production > medium, 18.04 or 20.04. > > Can someone tell me what better version I can implement for production > servers? > > My best regards. > > Eduardo. > > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From raubvogel at gmail.com Mon Mar 1 19:54:41 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Mon, 1 Mar 2021 14:54:41 -0500 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: So openstack does run in debin/ubuntu? On Mon, Mar 1, 2021 at 2:54 PM Iury Gregory wrote: > It will depend on the OpenStack version you will run in production. > > Train - https://governance.openstack.org/tc/reference/runtimes/train.html > Ussuri - > https://governance.openstack.org/tc/reference/runtimes/ussuri.html > Victoria - > https://governance.openstack.org/tc/reference/runtimes/victoria.html > > Em seg., 1 de mar. de 2021 às 20:48, Eduardo Almeida Costa < > eacosta at uesc.br> escreveu: > >> Hello everybody. >> >> At my workplace, we will migrate from CentOS to Ubuntu Server. >> >> However, reading about it on forums and official documentation, it was >> not clear to me which version of Ubuntu Server is best for the production >> medium, 18.04 or 20.04. >> >> Can someone tell me what better version I can implement for production >> servers? >> >> My best regards. >> >> Eduardo. >> >> > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Part of the puppet-manager-core team in OpenStack* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon Mar 1 19:57:07 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 1 Mar 2021 20:57:07 +0100 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: Yes, it does. Em seg., 1 de mar. de 2021 às 20:54, Mauricio Tavares escreveu: > So openstack does run in debin/ubuntu? > > On Mon, Mar 1, 2021 at 2:54 PM Iury Gregory wrote: > >> It will depend on the OpenStack version you will run in production. 
>> >> Train - https://governance.openstack.org/tc/reference/runtimes/train.html >> >> Ussuri - >> https://governance.openstack.org/tc/reference/runtimes/ussuri.html >> Victoria - >> https://governance.openstack.org/tc/reference/runtimes/victoria.html >> >> Em seg., 1 de mar. de 2021 às 20:48, Eduardo Almeida Costa < >> eacosta at uesc.br> escreveu: >> >>> Hello everybody. >>> >>> At my workplace, we will migrate from CentOS to Ubuntu Server. >>> >>> However, reading about it on forums and official documentation, it was >>> not clear to me which version of Ubuntu Server is best for the production >>> medium, 18.04 or 20.04. >>> >>> Can someone tell me what better version I can implement for production >>> servers? >>> >>> My best regards. >>> >>> Eduardo. >>> >>> >> >> -- >> >> >> *Att[]'sIury Gregory Melo Ferreira * >> *MSc in Computer Science at UFCG* >> *Part of the puppet-manager-core team in OpenStack* >> *Software Engineer at Red Hat Czech* >> *Social*: https://www.linkedin.com/in/iurygregory >> *E-mail: iurygregory at gmail.com * >> > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Mar 1 20:00:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Mar 2021 20:00:35 +0000 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: <20210301200035.u5pwq5akocoto24w@yuggoth.org> On 2021-03-01 16:46:45 -0300 (-0300), Eduardo Almeida Costa wrote: > At my workplace, we will migrate from CentOS to Ubuntu Server. > > However, reading about it on forums and official documentation, it > was not clear to me which version of Ubuntu Server is best for the > production medium, 18.04 or 20.04. > > Can someone tell me what better version I can implement for > production servers? Both 18.04 and 20.04 are "long-term support" releases, meaning Canonical will continue to provide security updates and other bug fixes in them for longer than the intermediate versions. 20.04 is, as its number would seem to indicate, newer than 18.04 (by roughly two years), so I wouldn't choose the older version unless you specifically need to run older software on it which won't work on the newer one for some reason. Since you posted this to the OpenStack discussion mailing list, I assume you're planning to install some version of OpenStack on Ubuntu. If so, you should look at our tested runtimes to see which platform was used to test the version of OpenStack you want to run: https://governance.openstack.org/tc/reference/project-testing-interface.html#tested-runtimes -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Mon Mar 1 20:04:20 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Mar 2021 20:04:20 +0000 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: <20210301200420.2ukdtyedumn4gkpa@yuggoth.org> On 2021-03-01 14:54:41 -0500 (-0500), Mauricio Tavares wrote: > So openstack does run in debin/ubuntu? [...] 
Canonical (the company which produces Ubuntu) even has a commercially supported product built around running OpenStack on Ubuntu: https://ubuntu.com/openstack The Debian community maintains a comprehensive distribution of OpenStack as well: https://wiki.debian.org/OpenStack Hope that helps. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From motosingh at yahoo.co.uk Mon Mar 1 20:48:29 2021 From: motosingh at yahoo.co.uk (Dees) Date: Mon, 1 Mar 2021 20:48:29 +0000 (UTC) Subject: [openstack-community] Ovn chassis error possibly networking config error. In-Reply-To: References: <952196218.884082.1614451027288.ref@mail.yahoo.com> <952196218.884082.1614451027288@mail.yahoo.com> Message-ID: <1580253120.2031001.1614631709883@mail.yahoo.com> Thanks for your reply much appreciated, deployed OpenStack through JUJU. I managed to get around that issue by restarting services but I have fallen into another issue now, looks like I need to define network time somewhere on the config I am right? When I create a network trying to get out a second interface I am getting errors. Any suggestions, please? labuser at maas:~/openstack-base$ openstack network create Pub_Net1 --external --share --default --provider-network-type vlan --provider-segment 201 --provider-physical-network externalError while executing the command: BadRequestException: 400, Invalid input for operation: physical_network 'external' unknown for VLAN provider network. The first network segment  is ok labuser at maas:~/openstack-base$ openstack network create Pub_Net --external --share --default \   --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1 Config on juju   ovn-chassis:    annotations:      gui-x: '120'      gui-y: '1030'    charm: cs:ovn-chassis-10    # *** Please update the `bridge-interface-mappings` to values suitable ***    # *** for thehardware used in your deployment.  See the referenced     ***    # *** documentation at the top of this file.                           ***    options:      ovn-bridge-mappings: physnet1:br-data external:br-ex      bridge-interface-mappings: br-data:eno2 br-ex:eno3 Status ubuntu at 1:~$ sudo ovs-vsctl showe3b50030-b436-4d26-95af-4abeea1097d5    Manager "ptcp:6640:127.0.0.1"        is_connected: true    Bridge br-data        fail_mode: standalone        datapath_type: system        Port eno2            Interface eno2                type: system        Port br-data            Interface br-data                type: internal    Bridge br-int        fail_mode: secure        datapath_type: system        Port br-int            Interface br-int                type: internal        Port ovn-comput-0            Interface ovn-comput-0                type: geneve                options: {csum="true", key=flow, remote_ip="10.141.14.36"}    Bridge br-ex        fail_mode: standalone        datapath_type: system        Port eno3            Interface eno3                type: system        Port br-ex            Interface br-ex                type: internal    ovs_version: "2.13.1" Kind regards,Deepesh On Monday, 1 March 2021, 10:39:51 GMT, Lucas Alvares Gomes wrote: Hi, Few questions inline On Mon, Mar 1, 2021 at 12:09 AM Amy Marrich wrote: > > Deepesh, > > Forwarding to the OpenStack Discuss mailing list. 
> > Thanks, > > Amy (spotz) > > On Sat, Feb 27, 2021 at 12:42 PM Dees wrote: >> >> Hi All, >> >> Deployed openstack all service yesterday and seemed to be running since power cycling the openstack today. >> What did you use to deploy OpenStack ? Based on the command below I don't recognize that output. >> Then deployed and configured VM instance with networking but config onv-chassis is reporting error, as I am new to openstack (having used it number of years back) mind guiding where I should be looking any help will be greatly appreciated. >> >> >> >> nova-compute/0*              active    idle      1        10.141.14.62                      Unit is ready >>  ntp/0*                    active    idle                10.141.14.62    123/udp            chrony: Ready >>  ovn-chassis/0*            error    idle                10.141.14.62                      hook failed: "config-changed" >> I do not recognize this output or that error in particular, but I assume that ovn-chassis is the name of the process of the ovn-controller running on the node ? If so, the configuration of ovn-controller should be present in the local OVSDB instance, can you please paste the output of the following command: $ sudo ovs-vsctl list Open_VSwitch . Also, do you have any logs related to that process ? >> >> Deployed VM instance using the following command line. >> >> >> curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img | \ >>    openstack image create --public --container-format bare --disk-format qcow2 \ >>    --property architecture=x86_64 --property hw_disk_bus=virtio \ >>    --property hw_vif_model=virtio "focal_x86_64" >> >> openstack flavor create --ram 512 --disk 4 m1.micro >> >> openstack network create Pub_Net --external --share --default \ >>    --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1 >> >> openstack subnet create Pub_Subnet --allocation-pool start=10.141.40.40,end=10.141.40.62 \ >>    --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \ >>    --network Pub_Net >> >> openstack network create Network1 --internal >> openstack subnet create Subnet1 \ >>    --allocation-pool start=192.168.0.10,end=192.168.0.199 \ >>    --subnet-range 192.168.0.0/24 \ >>    --gateway 192.168.0.1 --dns-nameserver 10.0.0.3 \ >>    --network Network1 >> >> Kind regards, >> Deepesh >> _______________________________________________ >> Community mailing list >> Community at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/community -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Mar 1 20:54:00 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Mar 2021 20:54:00 +0000 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: <20210301205400.74opdnu26wdqfher@yuggoth.org> On 2021-03-01 14:54:41 -0500 (-0500), Mauricio Tavares wrote: > So openstack does run in debin/ubuntu? [...] This is also a useful related resource: https://www.openstack.org/marketplace/distros/ -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Mon Mar 1 20:59:18 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 1 Mar 2021 12:59:18 -0800 Subject: [SIG][Containers][k8s] Merge? Retire? In-Reply-To: References: Message-ID: Excellent! 
I will put up a patch to update the chair to being you then? I will make sure to add you as a reviewer when I post it. I think since there is no current interest in the containers SIG, perhaps we just remove it and have only the k8s SIG? -Kendall On Tue, Feb 23, 2021 at 4:34 PM feilong wrote: > Hi Kendall, > > Thanks for driving this firstly. I was the Magnum PTL for several cycles > and I can see there are still very strong interest from the community > members about running k8s on OpenStack and/or the reverse. I'm happy to > contribute my time for the SIG to bridge the two communities if it's > needed. Cheers. > > > On 24/02/21 1:02 pm, Kendall Nelson wrote: > > Hello! > > As you might have noticed, we've been working on getting the sig > governance site updated with current chairs, sig statuses. etc. Two SIGs > that are in need of updates still. > > The k8s SIG's listed chairs have all moved on. The container SIG is still > listed as forming. While not the exact same goal/topic, perhaps these can > be merged? And if so, do we have any volunteers for chairs? > > The other option is to simply remove the Container SIG as its not > completely formed at this point and retire the k8s SIG. > > Thoughts? Volunteers? > > -Kendall (diablo_rojo) > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Mon Mar 1 23:17:11 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 1 Mar 2021 17:17:11 -0600 Subject: [openstack-helm] IRC Meeting Cancelled March 2nd Message-ID: Hello, Since there are no agenda items [0] for the IRC meeting tomorrow, March 2nd, we will cancel it. Our next IRC meeting will be March 9th. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Tue Mar 2 01:02:57 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Mon, 1 Mar 2021 22:02:57 -0300 Subject: [cinder] bug deputy report for week of 2021-02-22 Message-ID: This is a bug report from 2021-02-22 to 2021-02-26. Most of these bugs were discussed at the Cinder meeting last Wednesday 2021-02-24. Critical: - High: - https://bugs.launchpad.net/os-brick/+bug/1915678: "iSCSI+Multipath: Volume attachment hangs if session scanning fails''. Assigned to Takashi Kajinami (kajinamit). Medium: - https://bugs.launchpad.net/os-brick/+bug/1916264: "NVMeOFConnector can't connect volume ". Assigned to Zohar Mamedov (zoharm). - https://bugs.launchpad.net/python-cinderclient/+bug/1915996: "Fetching server version fails to support passing client certificates". Assigned to Sri Harsha mekala (harshayahoo). - https://bugs.launchpad.net/cinder/+bug/1915800: " XtremIO does not support ports filtering". Assigned to Vladislav Belogrudov (vlad-belogrudov). Low: - https://bugs.launchpad.net/cinder/+bug/1916258: "[docs] Install and configure a storage node in cinder ". Assigned to Sofia Enriquez (enriquetaso). Incomplete:- Undecided/Unconfirmed: - https://bugs.launchpad.net/cinder/+bug/1916980: "cinder sends old db object when delete an attachment". Assigned to wu.chunyang (wuchunyang). 
- https://bugs.launchpad.net/cinder/+bug/1916843: "Backup create failed: RBD volume flatten too long causing mq to timed out". Unassigned. Not a bug:- Feel free to reply/reach me if I missed something. Regards Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Mar 2 04:59:35 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 1 Mar 2021 20:59:35 -0800 Subject: [all][elections][ptl][tc] Combined PTL/TC March 2021 Election Season Message-ID: Election details: https://governance.openstack.org/election/ The nomination period officially begins Mar 02, 2021 23:45 UTC. Please read the stipulations and timelines for candidates and electorate contained in this governance documentation. Due to circumstances of timing, PTL and TC elections for the coming cycle will run concurrently; deadlines for their nomination and voting activities are synchronized but will still use separate ballots. Please note, if only one candidate is nominated as PTL for a project team during the PTL nomination period, that candidate will win by acclaim, and there will be no poll. There will only be a poll if there is more than one candidate stepping forward for a project team's PTL position. There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only. If you have any questions which you feel affect others please reply to this email thread. If you have any questions that you which to discuss in private please email any of the election officials[1] so that we may address your concerns. Thank you, -Kendall Nelson (diablo_rojo) [1] https://governance.openstack.org/election/#election-officials -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Mar 2 07:41:54 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 02 Mar 2021 08:41:54 +0100 Subject: [neutron] Bug deputy report (week starting on 2021-02-22) In-Reply-To: References: Message-ID: <20622341.81JxD3pCW9@p1> Hi, Dnia poniedziałek, 1 marca 2021 12:58:18 CET Bernard Cafarelli pisze: > Hello neutrinos, > > this is our first new bug deputy rotation for 2021, overall a quiet week > with most bugs already fixed or good progress. 
> > DVR folks may be interested in the discussion ongoing on the single High > bug, also I left one bug in Opinion on OVS agent main loop > > Critical > * neutron.tests.unit.common.test_utils.TestThrottler test_throttler is > failing - https://bugs.launchpad.net/neutron/+bug/1916572 > Patch was quickly merged: > https://review.opendev.org/c/openstack/neutron/+/777072 > > High > * [dvr] bound port permanent arp entries never deleted - > https://bugs.launchpad.net/neutron/+bug/1916761 > Introduced with > https://review.opendev.org/q/I538aa6d68fbb5ff8431f82ba76601ee34c1bb181 > Lengthy discussion in the bug itself and suggested fix > https://review.opendev.org/c/openstack/neutron/+/777616 > > Medium > * [OVN][QOS] OVN DB QoS rule is not removed when a FIP is dissasociated - > https://bugs.launchpad.net/neutron/+bug/1916470 > Patch by ralonsoh merged > https://review.opendev.org/c/openstack/neutron/+/776916 > * A privsep daemon spawned by neutron-openvswitch-agent hangs when debug > logging is enabled - https://bugs.launchpad.net/neutron/+bug/1896734 > Actually reported earlier on charm and oslo, ralonsoh looking into it > from neutron side > * StaleDataError: DELETE statement on table 'standardattributes' expected > to delete 2 row(s); 1 were matched - > https://bugs.launchpad.net/neutron/+bug/1916889 > Spotted by Liu while looking at dvr bug, patch sent to neutron-lib > https://review.opendev.org/c/openstack/neutron-lib/+/777581 > > Low > * ovn-octavia-provider can attempt to write protocol=None to OVSDB - > https://bugs.launchpad.net/neutron/+bug/1916646 > This appears in some functional test results, > otherwiseguy sent > https://review.opendev.org/c/openstack/ovn-octavia-provider/+/777201 > > Opinion > * use deepcopy in function rpc_loop of ovs-agent - > https://bugs.launchpad.net/neutron/+bug/1916761 I think You pasted wrong link here. Probably it should be https:// bugs.launchpad.net/neutron/+bug/1916918, right? > My hunch is that we are fine here, but please chime in other opinions > > Passing the baton to our PTL for next week! > -- > Bernard Cafarelli -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From bdobreli at redhat.com Tue Mar 2 09:05:32 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 2 Mar 2021 10:05:32 +0100 Subject: Puppet openstack modules In-Reply-To: References: Message-ID: On 2/27/21 4:06 PM, Takashi Kajinami wrote: > Ruby codes in puppet-openstack repos are used for the following three > purposes. >  1. unit tests and acceptance tests using serverspec framework (files > placed under spec) >  2. implementation of custom type, provider, and function And some modules do really heavy use of those customizations written in ruby, like puppet-pacemaker [0], which is 57% in Ruby and only 41% in Puppet. [0] https://github.com/openstack/puppet-pacemaker >  3. template files (We use ERB instead of pure Ruby about this, though) > > 1 is supposed to be used only for testing during deployment but 2 and 3 > can be used > in any production use case in combination with puppet manifest files to > manage > OpenStack deployments. > > > On Sat, Feb 27, 2021 at 5:01 AM Bessghaier, Narjes > > wrote: > > > Dear OpenStack team, > > My name is Narjes and I'm a PhD student at the University of > Montréal, Canada. 
> >  My current work consists of analyzing code reviews on the puppet > modules. I would like to precisely know what the ruby files are used > for in the puppet modules. As mentioned in the official website, > most of unit test are written in ruby. Are ruby files destined to > carry out units tests or destined for production code. > > I appreciate your help, > Thank you > -- Best regards, Bogdan Dobrelya, Irc #bogdando From admin at gsic.uva.es Tue Mar 2 09:13:49 2021 From: admin at gsic.uva.es (Cristina Mayo) Date: Tue, 2 Mar 2021 10:13:49 +0100 Subject: [swift] Unable to get or create objects Message-ID: Hello, I don't have a lot of knowledge about Openstack. I have installed Swift in my Openstack Ussuri cloud with one storage node but I am not be able to list or create objects. In the controller node I saw these errors: proxy-server: ERROR with Account server 10.20.20.1:6202/sdb re: Trying to HEAD /v1/AUTH_c2b7d6242f3140f09d283e8fbb88732a: Connection refused (txn: tx370fb5929a6c4cf397384-00603dfe35) proxy-server: Account HEAD returning 503 for [] (txn:tx370fb5929a6c4cf397384-00603dfe35) Any idea? Thanks in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From alistairncoles at gmail.com Tue Mar 2 10:15:53 2021 From: alistairncoles at gmail.com (Alistair Coles) Date: Tue, 2 Mar 2021 10:15:53 +0000 Subject: [swift] Unable to get or create objects In-Reply-To: References: Message-ID: Cristina You might like to join the #openstack-swift irc channel to get help ( https://wiki.openstack.org/wiki/IRC ) It sounds like you don't have an account server running, or it is running but listening on a different port than configured in your account ring. You can verify which swift services are running on your storage node using: swift-init status main and/or check which ports they are listening on using: lsof -i tcp +c 15 (you're looking for lines similar to 'swift-account-s 4309 vagrant 17u IPv4 3466818 0t0 TCP localhost:6012 (LISTEN)') The host:port should match what is in the account ring, on your controller node, which is shown using: swift-ring-builder /etc/swift/account.builder You can also try to make an http request from your controller node to the account server using something like: curl -I http://localhost:6012/recon/version HTTP/1.1 200 OK Content-Length: 28 Content-Type: application/json Date: Tue, 02 Mar 2021 10:03:47 GMT ( in your case curl -I http://10.20.20.1:6202/recon/version ) If necessary start services using: swift-init restart account A similar diagnosis can be used if you have problems with connections to container or object services. Alistair On Tue, Mar 2, 2021 at 9:18 AM Cristina Mayo wrote: > Hello, > > I don't have a lot of knowledge about Openstack. I have installed Swift > in my Openstack Ussuri cloud with one storage node but I am not be able to > list or create objects. In the controller node I saw these errors: > > proxy-server: ERROR with Account server 10.20.20.1:6202/sdb re: Trying to > HEAD /v1/AUTH_c2b7d6242f3140f09d283e8fbb88732a: Connection refused (txn: > tx370fb5929a6c4cf397384-00603dfe35) > proxy-server: Account HEAD returning 503 for [] > (txn:tx370fb5929a6c4cf397384-00603dfe35) > > Any idea? > > Thanks in advance! > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zigo at debian.org Tue Mar 2 10:25:34 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 2 Mar 2021 11:25:34 +0100 Subject: Question about Ubuntu Server In-Reply-To: References: Message-ID: <121f78af-dcca-8139-dc78-cf8d7909335b@debian.org> On 3/1/21 8:54 PM, Mauricio Tavares wrote: > So openstack does run in debin/ubuntu? Yes, and if you're an IRC person, I enjoy helping people. :) Join us on #debian-openstack on the OFTC network. If you didn't know about it, have a look here: https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer This is fully in Debian Bullseye, which at this point, I would be advising to run OpenStack (even if it's not released yet, the fact that it is frozen is good enough for production, IMO). With this solution, you do not need *anything* outside of the Debian repositories (even the puppet modules are packaged). I like to say also that it's very flexible, and will install all the OpenStack components it can depending on what node type you install. I hope this helps, Cheers, Thomas Goirand (zigo) From bcafarel at redhat.com Tue Mar 2 14:10:39 2021 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Tue, 2 Mar 2021 15:10:39 +0100 Subject: [neutron] Bug deputy report (week starting on 2021-02-22) In-Reply-To: <20622341.81JxD3pCW9@p1> References: <20622341.81JxD3pCW9@p1> Message-ID: On Tue, 2 Mar 2021 at 08:42, Slawek Kaplonski wrote: > Hi, > > Dnia poniedziałek, 1 marca 2021 12:58:18 CET Bernard Cafarelli pisze: > > Hello neutrinos, > > > > this is our first new bug deputy rotation for 2021, overall a quiet week > > with most bugs already fixed or good progress. > > > > DVR folks may be interested in the discussion ongoing on the single High > > bug, also I left one bug in Opinion on OVS agent main loop > > > > Critical > > * neutron.tests.unit.common.test_utils.TestThrottler test_throttler is > > failing - https://bugs.launchpad.net/neutron/+bug/1916572 > > Patch was quickly merged: > > https://review.opendev.org/c/openstack/neutron/+/777072 > > > > High > > * [dvr] bound port permanent arp entries never deleted - > > https://bugs.launchpad.net/neutron/+bug/1916761 > > Introduced with > > https://review.opendev.org/q/I538aa6d68fbb5ff8431f82ba76601ee34c1bb181 > > Lengthy discussion in the bug itself and suggested fix > > https://review.opendev.org/c/openstack/neutron/+/777616 > > > > Medium > > * [OVN][QOS] OVN DB QoS rule is not removed when a FIP is dissasociated - > > https://bugs.launchpad.net/neutron/+bug/1916470 > > Patch by ralonsoh merged > > https://review.opendev.org/c/openstack/neutron/+/776916 > > * A privsep daemon spawned by neutron-openvswitch-agent hangs when debug > > logging is enabled - https://bugs.launchpad.net/neutron/+bug/1896734 > > Actually reported earlier on charm and oslo, ralonsoh looking into it > > from neutron side > > * StaleDataError: DELETE statement on table 'standardattributes' expected > > to delete 2 row(s); 1 were matched - > > https://bugs.launchpad.net/neutron/+bug/1916889 > > Spotted by Liu while looking at dvr bug, patch sent to neutron-lib > > https://review.opendev.org/c/openstack/neutron-lib/+/777581 > > > > Low > > * ovn-octavia-provider can attempt to write protocol=None to OVSDB - > > https://bugs.launchpad.net/neutron/+bug/1916646 > > This appears in some functional test results, > > otherwiseguy sent > > https://review.opendev.org/c/openstack/ovn-octavia-provider/+/777201 > > > > Opinion > > * use deepcopy in function rpc_loop of ovs-agent - > > 
https://bugs.launchpad.net/neutron/+bug/1916761 > > I think You pasted wrong link here. Probably it should be https:// > bugs.launchpad.net/neutron/+bug/1916918, right? > Exactly, I just realized it when checking the links for the IRC meeting! 1916918 is correct one here > > > My hunch is that we are fine here, but please chime in other opinions > > > > Passing the baton to our PTL for next week! > > -- > > Bernard Cafarelli > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Tue Mar 2 14:28:19 2021 From: ykarel at redhat.com (Yatin Karel) Date: Tue, 2 Mar 2021 19:58:19 +0530 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> Message-ID: Hi, On Thu, Sep 12, 2019 at 12:30 AM Jeremy Stanley wrote: > > On 2019-09-09 12:53:26 +0530 (+0530), Yatin Karel wrote: > [...] > > Can someone from Release or Infra Team can do the needful of > > removing stable/ocata and stable/pike branch for TripleO projects > > being EOLed for pike/ocata in > > https://review.opendev.org/#/c/677478/ and > > https://review.opendev.org/#/c/678154/. > > I've attempted to extract the lists of projects from the changes you > linked. I believe you're asking to have the stable/ocata branch > deleted from these projects: > > openstack/instack-undercloud > openstack/instack > openstack/os-apply-config > openstack/os-cloud-config > openstack/os-collect-config > openstack/os-net-config > openstack/os-refresh-config > openstack/puppet-tripleo > openstack/python-tripleoclient > openstack/tripleo-common > openstack/tripleo-heat-templates > openstack/tripleo-image-elements > openstack/tripleo-puppet-elements > openstack/tripleo-ui > openstack/tripleo-validations > > And the stable/pike branch deleted from these projects: > > openstack/instack-undercloud > openstack/instack > openstack/os-apply-config > openstack/os-collect-config > openstack/os-net-config > openstack/os-refresh-config > openstack/paunch > openstack/puppet-tripleo > openstack/python-tripleoclient > openstack/tripleo-common > openstack/tripleo-heat-templates > openstack/tripleo-image-elements > openstack/tripleo-puppet-elements > openstack/tripleo-ui > openstack/tripleo-validations > > Can you confirm? Also, have you checked for and abandoned all open > changes on the affected branches? I totally missed this mail, in today's Tripleo meeting it was raised so get back to this again. @Jeremy yes the list is correct. These branches were EOLed long ago is it still necessary to abandon all open reviews? Anyway, I will get those cleaned. > -- > Jeremy Stanley Thanks and Regards Yatin Karel From fungi at yuggoth.org Tue Mar 2 15:04:51 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 2 Mar 2021 15:04:51 +0000 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> Message-ID: <20210302150451.ksnug53wzup747t4@yuggoth.org> On 2021-03-02 19:58:19 +0530 (+0530), Yatin Karel wrote: [...] 
> is it still necessary to abandon all open reviews? [...] Gerrit will not allow deletion of a branch if there are any changes still open for it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Tue Mar 2 15:53:26 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 2 Mar 2021 17:53:26 +0200 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20210302150451.ksnug53wzup747t4@yuggoth.org> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> <20210302150451.ksnug53wzup747t4@yuggoth.org> Message-ID: On Tue, Mar 2, 2021 at 5:06 PM Jeremy Stanley wrote: > On 2021-03-02 19:58:19 +0530 (+0530), Yatin Karel wrote: > [...] > > is it still necessary to abandon all open reviews? > [...] > > thanks ykarel for bringing this up in today's tripleo irc meeting - I completely missed this thread > Gerrit will not allow deletion of a branch if there are any changes > still open for it. > thank you and ack Jeremy - I just abandoned a couple of changes I had there (eg https://review.opendev.org/c/openstack/instack-undercloud/+/777368) to remove deprecated things from zuul layouts, but instead removing the branch is a better solution no more zuul layout to worry about ;) - one of those is currently waiting in the gate ( https://review.opendev.org/c/openstack/os-apply-config/+/777533) so i didn't hit abandon ... if it fails for whatever reason then I can abandon that one too. marios > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken at jots.org Tue Mar 2 20:31:07 2021 From: ken at jots.org (Ken D'Ambrosio) Date: Tue, 02 Mar 2021 15:31:07 -0500 Subject: anti-affinity: what're the mechanics? Message-ID: <371d077a5d2e99be4df344eceb25c43f@jots.org> Hey, all. Turns out we really need anti-affinity running on our (very freaking old -- Juno) clouds. I'm trying to find docs that describe its functionality, and am failing. If I enable it, and (say) have 10 hypervisors, and 12 VMs to fire off, what happens when VM #11 goes to fire? Does it fail, or does the scheduler just continue to at least *try* to maintain as few possible on each hypervisor? Thanks! -Ken From motosingh at yahoo.co.uk Tue Mar 2 22:00:17 2021 From: motosingh at yahoo.co.uk (Dees) Date: Tue, 2 Mar 2021 22:00:17 +0000 (UTC) Subject: [openstack-community] Ovn chassis error possibly networking config error. In-Reply-To: <1580253120.2031001.1614631709883@mail.yahoo.com> References: <952196218.884082.1614451027288.ref@mail.yahoo.com> <952196218.884082.1614451027288@mail.yahoo.com> <1580253120.2031001.1614631709883@mail.yahoo.com> Message-ID: <1178477318.2869706.1614722417156@mail.yahoo.com> Hi All, I got around that problem too. I have a new problem now as I need to assign two external networks to VM with provider segment as vlan. Any guidance please? So I tried this it doesn't work, not sure why. 
openstack network create Pub_Net --external --share --default \
  --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1

openstack subnet create Pub_Subnet --allocation-pool start=10.141.40.10,end=10.141.40.20 \
  --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \
  --network Pub_Net

NET_ID1=$(openstack network list | grep -w Pub_Net | awk '{ print $2 }')

openstack server create --image 'focal_x86_64' --flavor m1.micro \
  --key-name User1-key --security-group Allow_SSHPing --nic net-id=$NET_ID1 \
  focal-1

With a router and a floating IP it does work, but since I need two public networks and only one floating IP is allowed, that is a problem.
-----------------------------------------------------------------------------------------------------------------------------------------------
openstack network create Pub_Net --external --share --default \
  --provider-network-type vlan --provider-segment 200 --provider-physical-network physnet1

openstack subnet create Pub_Subnet --allocation-pool start=10.141.40.10,end=10.141.40.20 \
  --subnet-range 10.141.40.0/26 --no-dhcp --gateway 10.141.40.1 \
  --network Pub_Net

openstack network create Network1 --internal
openstack subnet create Subnet1 \
  --allocation-pool start=192.168.0.10,end=192.168.0.199 \
  --subnet-range 192.168.0.0/24 \
  --gateway 192.168.0.1 --dns-nameserver 10.0.0.3 \
  --network Network1

openstack router create Router1
openstack router add subnet Router1 Subnet1
openstack router set Router1 --external-gateway Pub_Net

NET_ID1=$(openstack network list | grep Network1 | awk '{ print $2 }')

openstack server create --image 'focal_x86_64' --flavor m1.micro \
  --key-name User1-key --security-group Allow_SSHPing --nic net-id=$NET_ID1 \
  focal-1

FLOATING_IP=$(openstack floating ip create -f value -c floating_ip_address Pub_Net)
openstack server add floating ip focal-1 $FLOATING_IP

Kind regards,
Deepesh
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ankelezhang at gmail.com Tue Mar 2 07:33:04 2021
From: ankelezhang at gmail.com (Ankele zhang)
Date: Tue, 2 Mar 2021 15:33:04 +0800
Subject: Some questions about Ironic bare metal
Message-ID: 

Hi,

I have included the Ironic service in my Rocky OpenStack platform, using the IPMI driver. The cleaning network and the provisioning network are my provider network. I have some questions about Ironic deleting and inspecting.

1、Every time I delete my bare metal nodes, I need to delete the associated servers first. The servers delete successfully, but the associated server field on the nodes still exists, so I need to set the nodes to maintenance mode before I can delete the bare metal nodes. What's more, the ports in 'openstack port list' which belong to the nodes are not deleted automatically. How can I delete nodes correctly?

2、If a newly created bare metal node's system disk already contains OS data, I cannot inspect it. I need to clean it first, but the cleaning step needs the MAC addresses of the nodes, and the MAC addresses are obtained by inspection. So what should I do? I don't want to fill in the MAC addresses manually. The nodes start to PXE boot but then immediately boot into the existing system, as shown:

[image: image.png]
0.1s later:
[image: image.png]

Looking forward to your help.

Ankele.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png Type: image/png Size: 190407 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 106333 bytes Desc: not available URL: From fungi at yuggoth.org Tue Mar 2 23:12:33 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 2 Mar 2021 23:12:33 +0000 Subject: [all][elections][ptl][tc] IMPORTANT Note About March 2021 Elections Message-ID: <20210302231232.akpm65nswdpmygug@yuggoth.org> Please note that to be eligible to vote in upcoming Technical Committee and Project Team Lead elections, you must make sure that your Preferred Email Address in Gerrit appears somewhere in your Open Infrastructure Foundation Individual Member profile. To check this, visit https://review.opendev.org/settings/#EmailAddresses while authenticated to Gerrit and make a note of which address has the Preferred button selected. Next visit https://openstackid.org/accounts/user/profile and (after authenticating) make sure that address appears in at least one of the Email, Second Email, or Third Email fields. If not, add it to one of the available Email fields there and click the Save button. This requirement has changed slightly from before (when any address known to Gerrit was sufficient), because of slightly stricter API handling in the newer Gerrit release we're now running. If you have any questions, please feel free to reply or reach out to the technical election officials in the #openstack-election IRC channel on Freenode. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Tue Mar 2 23:46:18 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 2 Mar 2021 15:46:18 -0800 Subject: [all][elections][ptl][tc] Combined PTL/TC April 2021 Nominations Kickoff Message-ID: Nominations for OpenStack PTLs (Project Team Leads) and TC (Technical Committee) positions (5 positions) are now open and will remain open until Mar 09, 2021 23:45 UTC. All nominations must be submitted as a text file to the openstack/election repository as explained at https://governance.openstack.org/election/#how-to-submit-a-candidacy Please make sure to follow the candidacy file naming convention: candidates/xena// (for example, "candidates/xena/TC/stacker at example.org"). The name of the file should match an email address for your current OpenStack Foundation Individual Membership. Take this opportunity to ensure that your OSF member profile contains current information: https://www.openstack.org/profile/ Any OpenStack Foundation Individual Member can propose their candidacy for an available, directly-elected seat on the Technical Committee. In order to be an eligible candidate for PTL you must be an OpenStack Foundation Individual Member. PTL candidates must also have contributed to the corresponding team during the Victoria to Wallaby timeframe, Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC. Your Gerrit account must also have a verified email address matching the one used in your candidacy filename. Both PTL and TC elections will be held from Mar 12, 2021 23:45 UTC through to Mar 19, 2021 23:45 UTC. 
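For anyone new to the submission workflow, a candidacy is just an ordinary Gerrit change against the election repository. A minimal sketch, assuming git-review is already configured; the project directory and email address below are placeholders for illustration only:

git clone https://opendev.org/openstack/election
cd election
mkdir -p candidates/xena/Nova
$EDITOR candidates/xena/Nova/stacker@example.org    # file name must match an email on your OSF member profile
git add candidates/xena/Nova/stacker@example.org
git commit -m "Adding Nova PTL candidacy for Xena"
git review    # submits the change for the election officials to review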
The electorate for the TC election are the OpenStack Foundation Individual Members who have a code contribution to one of the official teams over the Victoria to Wallaby timeframe, Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC, as well as any Extra ATCs who are acknowledged by the TC.

The electorate for a PTL election are the OpenStack Foundation Individual Members who have a code contribution over the Victoria to Wallaby timeframe, Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC, in a deliverable repository maintained by the team which the PTL would lead, as well as the Extra ATCs who are acknowledged by the TC for that specific team.

The list of project teams can be found at https://governance.openstack.org/tc/reference/projects/ and their individual team pages include lists of corresponding Extra ATCs.

Please find below the timeline:

nomination starts @ Mar 02, 2021 23:45 UTC
nomination ends @ Mar 09, 2021 23:45 UTC
campaigning starts @ Mar 09, 2021 23:45 UTC
campaigning ends @ Mar 11, 2021 23:45 UTC
elections start @ Mar 12, 2021 23:45 UTC
elections end @ Mar 19, 2021 23:45 UTC

Shortly after election officials approve candidates, they will be listed on the https://governance.openstack.org/election/ page.

The electorate is requested to confirm their email addresses in Gerrit prior to 2021-03-08 00:00:00+00:00, so that the emailed ballots are sent to the correct email address. This email address should match one which was provided in your foundation member profile as well.

Gerrit account information and OSF member profiles can be updated at https://review.openstack.org/#/settings/contact and https://www.openstack.org/profile/ accordingly.

If you have any questions please be sure to either ask them on the mailing list or to the elections officials: https://governance.openstack.org/election/#election-officials

-Kendall Nelson (diablo_rojo)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com Wed Mar 3 00:00:10 2021
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 03 Mar 2021 00:00:10 +0000
Subject: anti-affinity: what're the mechanics?
In-Reply-To: <371d077a5d2e99be4df344eceb25c43f@jots.org>
References: <371d077a5d2e99be4df344eceb25c43f@jots.org>
Message-ID: 

On Tue, 2021-03-02 at 15:31 -0500, Ken D'Ambrosio wrote:
> Hey, all. Turns out we really need anti-affinity running on our (very
> freaking old -- Juno) clouds. I'm trying to find docs that describe its
> functionality, and am failing. If I enable it, and (say) have 10
> hypervisors, and 12 VMs to fire off, what happens when VM #11 goes to
> fire? Does it fail, or does the scheduler just continue to at least
> *try* to maintain as few possible on each hypervisor?

In Juno I believe we only have hard anti-affinity via the filter; I believe it predates the soft affinity/anti-affinity weigher, so it will error out. The behaviour, I believe, will depend on whether you did a multi-create or booted the VMs serially. If you boot them serially then you should be able to boot 10 VMs. If you do a multi-create then it depends on whether you set the min value or not. If you don't set the min value, I think only 10 will boot and the last two will error. If you set --min 12 --max 12, I think they will all go to error or be deleted; I have not checked that, but I believe we are meant to try to roll back in that case. The soft affinity weigher was added by https://github.com/openstack/nova/commit/72ba18468e62370522e07df796f5ff74ae13e8c9 in Mitaka. If you want to be able to boot all 12 then you need a weigher like that.
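As a rough illustration of the hard anti-affinity setup described above, a minimal sketch for a Juno-era cloud; the filter list and the names used here are illustrative, so check them against your own nova.conf and client before relying on them:

# /etc/nova/nova.conf on the scheduler node: make sure the server group filters are enabled
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

# create a hard anti-affinity group and boot instances into it
nova server-group-create aa-group anti-affinity
nova boot --image <image> --flavor m1.small --hint group=<group-uuid> vm-01

With only the filter in place (no soft weigher), each boot either lands on a host that has no other member of the group or fails scheduling, which matches the 10-hypervisor/12-VM case described above.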
for the most part you can proably backport that directly from master and use it in juno as i dont think we have matirally altered the way the filters work that much bu the weigher like the filters are also plugabl so you can backport it externally and load it if you wanted too. that is proably your best bet. > > Thanks! > > -Ken > From juliaashleykreger at gmail.com Wed Mar 3 00:04:47 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 2 Mar 2021 16:04:47 -0800 Subject: Some questions about Ironic bare metal In-Reply-To: References: Message-ID: Greetintgs, replies inline! On Tue, Mar 2, 2021 at 2:48 PM Ankele zhang wrote: > Hi, > I have included Ironic service into my rocky OpenStack platform. Using the > IPMI driver. The cleaning network and the provisioning network are my > provider network. > I have some questions about Ironic deleting and inspecting. > > 1、every time I delete my baremetal nodes, I need to delete the associated > servers first. the servers delete successfully, but the associated servers > filed in nodes are still exist. So I need to set nodes to maintenance mode > before I can delete bare metal nodes. what's more, the port list in > 'openstack port list' which belong to the nodes can not be deleted > automatically. How can I delete nodes correctly? > > Okay, are you trying `openstack baremetal node delete` before unprovisioning the instances? Basically, if you're integrated with nova, the instance has to be unprovisioned. Even with ironic on it's own, `openstack baremetal node unprovision` is what you're looking for most likely. Since we're managing physical machines, we never really want people to delete baremetal nodes from ironic except as some sort of permanent last resort or removal from ironic. Any need to do so we consider to be a bug, if that makes sense. The nodes in ironic will change states upon unprovision to "available" instead of "active" which indicates that it is deployed. A little different, but it all comes down to management and tracking of distinct long living physical machines. So Ironic does *not* delete the port if it is pre-created, because the port can realistically be moved elsewhere and the MAC address can be reset. In some cases nova doesn't delete a port upon un-provision, so it kind of depends how you reached that point if the services involved will remove the port. If you're doing it manually to provision a server, it will still need to be removed. > 2、If the new created baremetal nodes' system disks have exist OS data.I > cannot inspect them, I need to clean them first, but the cleaning step need > MAC address of the nodes and the MAC addresses are obtained by inspecting. > So what should I do? I don't want to fill in the MAC addresses manually. > I have got the PXE boot but was immediately plugged into the existing > system as: > Inspection is optional, but a MAC address is functionally required to identify the machine since BMC identification by address is not reliable on all hardware vendors. This is even more so with the case that you have an existing operating system on the machine. Granted, you may want to check your inspection PXE configuration and PXE templates, since I guess your default falls back to the disk where instead you can have the configuration fall to inspection. Generally, most people tend to use iPXE because it is a bit more powerful for things such as this. > [image: image.png] > 0.1s later: > [image: image.png] > > Looking forward to your help. > > Ankele. 
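As a rough sketch of the unprovision-then-delete flow described above (node and port identifiers are placeholders; double-check the exact commands against a Rocky-era client):

openstack server delete <server>                   # if the instance was created through nova
openstack baremetal node undeploy <node>           # or unprovision a standalone deployment directly
openstack baremetal node show <node> -f value -c provision_state   # wait for "available"
openstack baremetal port list --node <node>
openstack baremetal port delete <port-uuid>        # only if you really want the pre-created port gone
openstack baremetal node delete <node>             # last resort: removes the node from ironic entirely

The point is that "openstack baremetal node delete" removes a machine from ironic's inventory rather than tearing down a deployment, which is why it resists being run while the node is still active.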
> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 190407 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 106333 bytes Desc: not available URL: From gouthampravi at gmail.com Wed Mar 3 00:56:28 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 2 Mar 2021 16:56:28 -0800 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > Hi all, > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > switching the default backend in Devstack to OVN. A lot of work has > > been done since, from porting the OVN devstack module to the DevStack > > tree, refactoring the DevStack module to install OVN from distro > > packages, implementing features to close the parity gap with ML2/OVS, > > fixing issues with tests and distros, etc... > > > > We are now very close to being able to make the switch and we've > > thought about sending this email to the broader community to raise > > awareness about this change as well as bring more attention to the > > patches that are current on review. > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > going to be developed and maintained by the upstream Neutron > > community. > can we ensure that this does not happen until the xena release. > in generall i think its ok to change the default but not this late in the cycle. > i would also like to ensure we keep at least one non ovn based multi node job in > nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. > right now the event/neutorn interaction is not the same during move operations. > > > > Below is a e per project explanation with relevant links and issues of > > where we stand with this work right now: > > > > * Keystone: > > > > Everything should be good for Keystone, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > * Glance: > > > > Everything should be good for Glace, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > * Swift: > > > > Everything should be good for Swift, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > * Ironic: > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > progress, we've changed most of the Ironic jobs to explicitly enable > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > Here is the test patch: > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > * Cinder: > > > > Cinder is almost complete. There's one test failure in the > > "tempest-slow-py3" job run on the > > "test_port_security_macspoofing_port" test. > > > > This failure is due to a bug in core OVN [1]. This bug has already > > been fixed upstream [2] and the fix has been backported down to the > > branch-20.03 [3] of the OVN project. 
However, since we install OVN > > from packages we are currently waiting for this fix to be included in > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > contacted the package maintainer which has been very supportive of > > this work and will work on the package update, but he maintain a > > handful of backports in that package which is not yet included in OVN > > 20.03 upstream and he's now working with the core OVN community [4] to > > include it first in the branch and then create a new package for it. > > Hopefully this will happen soon. > > > > But for now we have a few options moving on with this issue: > > > > 1- Wait for the new package version > > 2- Mark the test as unstable until we get the new package version > > 3- Compile OVN from source instead of installing it from packages > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > i dont think we should default to ovn untill a souce build is not required. > compiling form souce while not supper expensice still adds time to the job > execution and im not sure we should be paying that cost on every devstack job run. > > we could maybe compile it once and bake the package into the image or host it on a mirror > but i think we should avoid this option if we have alternitives. > > > > What do you think about it ? > > > > Here is the test patch for Cinder: > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > * Nova: > > > > There are a few patches waiting for review for Nova, which are: > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > the scripts were trying to stop the Neutron agent (q-agt) process > > which is not part of an ML2/OVN deployment. The patch changes the code > > to check if that system unit exists before trying to stop it. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > which can be removed one release cycle after we switch DevStack to > > ML2/OVN. Grenade will test updating from the release version to the > > master branch but, since the default of the released version is not > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > is not supported. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > therefore we are temporarily enabling ML2/OVS for this job until that > > feature lands in core OVN. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > because he feels like a change in the default network driver a few > > weeks prior to the upstream code freeze can be concerning. We do not > > know yet precisely when we are changing the default due to the current > > patches we need to get merged but, if this is a shared feeling among > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > until we get a new release in OpenStack. > yep this is still my view. > i would suggest we do the work required in the repos but not merge it until the xena release > is open. thats technically at RC1 so march 25th > i think we can safely do the swich after that but i would not change the defualt in any project > before then. 
> > > > Here's the test patch for Nova: > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > * DevStack: > > > > And this is the final patch that will make this all happen: > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > a long and bumpy road to get to this point and I would like to say > > thanks to everyone involved so far and everyone that read the whole > > email, please let me know your thoughts. > thanks for working on this. > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > Cheers, > > Lucas ++ Thank you indeed for working diligently on this important change. Please do note that devstack, and the base job that you're modifying is used by many other projects besides the ones that you have enumerated in the subject line. I suggest using [all] as a better subject line indicator to get the attention of folks like me who have filters based on the subject line. Also, the network substrate is important for the project I help maintain: Manila, which provides shared file systems over a network - so I followed your lead and submitted a dependent patch. I hope to reach out to you in case we see some breakages: https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 > > > > > From anlin.kong at gmail.com Wed Mar 3 02:59:32 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 3 Mar 2021 15:59:32 +1300 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: Hi, Thanks for all your hard work on this. I'm wondering is there any doc proposed for devstack to tell people who are not interested in OVN to keep the current devstack behaviour? I have a feeling that using OVN as default Neutron driver would break the CI jobs for some projects like Octavia, Trove, etc. which rely on ovs port for the set up. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi wrote: > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > Hi all, > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > switching the default backend in Devstack to OVN. A lot of work has > > > been done since, from porting the OVN devstack module to the DevStack > > > tree, refactoring the DevStack module to install OVN from distro > > > packages, implementing features to close the parity gap with ML2/OVS, > > > fixing issues with tests and distros, etc... > > > > > > We are now very close to being able to make the switch and we've > > > thought about sending this email to the broader community to raise > > > awareness about this change as well as bring more attention to the > > > patches that are current on review. 
> > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > > going to be developed and maintained by the upstream Neutron > > > community. > > can we ensure that this does not happen until the xena release. > > in generall i think its ok to change the default but not this late in > the cycle. > > i would also like to ensure we keep at least one non ovn based multi > node job in > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is > merged and possible after. > > right now the event/neutorn interaction is not the same during move > operations. > > > > > > Below is a e per project explanation with relevant links and issues of > > > where we stand with this work right now: > > > > > > * Keystone: > > > > > > Everything should be good for Keystone, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > * Glance: > > > > > > Everything should be good for Glace, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > * Swift: > > > > > > Everything should be good for Swift, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > * Ironic: > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > > Here is the test patch: > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > * Cinder: > > > > > > Cinder is almost complete. There's one test failure in the > > > "tempest-slow-py3" job run on the > > > "test_port_security_macspoofing_port" test. > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > been fixed upstream [2] and the fix has been backported down to the > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > from packages we are currently waiting for this fix to be included in > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > contacted the package maintainer which has been very supportive of > > > this work and will work on the package update, but he maintain a > > > handful of backports in that package which is not yet included in OVN > > > 20.03 upstream and he's now working with the core OVN community [4] to > > > include it first in the branch and then create a new package for it. > > > Hopefully this will happen soon. > > > > > > But for now we have a few options moving on with this issue: > > > > > > 1- Wait for the new package version > > > 2- Mark the test as unstable until we get the new package version > > > 3- Compile OVN from source instead of installing it from packages > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > i dont think we should default to ovn untill a souce build is not > required. > > compiling form souce while not supper expensice still adds time to the > job > > execution and im not sure we should be paying that cost on every > devstack job run. > > > > we could maybe compile it once and bake the package into the image or > host it on a mirror > > but i think we should avoid this option if we have alternitives. > > > > > > What do you think about it ? 
> > > > > > Here is the test patch for Cinder: > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > * Nova: > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > which is not part of an ML2/OVN deployment. The patch changes the code > > > to check if that system unit exists before trying to stop it. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > which can be removed one release cycle after we switch DevStack to > > > ML2/OVN. Grenade will test updating from the release version to the > > > master branch but, since the default of the released version is not > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > > is not supported. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > > therefore we are temporarily enabling ML2/OVS for this job until that > > > feature lands in core OVN. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > > because he feels like a change in the default network driver a few > > > weeks prior to the upstream code freeze can be concerning. We do not > > > know yet precisely when we are changing the default due to the current > > > patches we need to get merged but, if this is a shared feeling among > > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > > until we get a new release in OpenStack. > > yep this is still my view. > > i would suggest we do the work required in the repos but not merge it > until the xena release > > is open. thats technically at RC1 so march 25th > > i think we can safely do the swich after that but i would not change the > defualt in any project > > before then. > > > > > > Here's the test patch for Nova: > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > * DevStack: > > > > > > And this is the final patch that will make this all happen: > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > > a long and bumpy road to get to this point and I would like to say > > > thanks to everyone involved so far and everyone that read the whole > > > email, please let me know your thoughts. > > thanks for working on this. > > > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > [2] > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > > > [3] > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > > > [4] > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > > > [5] > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > Cheers, > > > Lucas > > ++ Thank you indeed for working diligently on this important change. 
> > Please do note that devstack, and the base job that you're modifying > is used by many other projects besides the ones that you have > enumerated in the subject line. > I suggest using [all] as a better subject line indicator to get the > attention of folks like me who have filters based on the subject line. > Also, the network substrate is important for the project I help > maintain: Manila, which provides shared file systems over a network - > so I followed your lead and submitted a dependent patch. I hope to > reach out to you in case we see some breakages: > https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From smooney at redhat.com Wed Mar 3 04:01:24 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 03 Mar 2021 04:01:24 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID:
On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote:
> Hi, > > Thanks for all your hard work on this. > > I'm wondering is there any doc proposed for devstack to tell people who are > not interested in OVN to keep the current devstack behaviour? I have a > feeling that using OVN as default Neutron driver would break the CI jobs > for some projects like Octavia, Trove, etc. which rely on ovs port for the > set up.
Well, OVN is just an alternative controller for OVS: it replaces the Neutron L2 agent, it does not replace OVS itself. Projects like Octavia or Trove that deploy load balancers or DBs in VMs should not be able to observe a difference. They may still want to deploy ML2/OVS, but unless they are doing something directly on the host, like adding ports directly to OVS because they are not using VMs, they should not be aware of this change.
My reticence to change at this point in the cycle for Nova is motivated mainly by gate stability. The active contributors to Nova don't really have experience with OVN and how to debug it in the gate. We are also getting close to feature freeze, when we tend to get a lot of patches and gate stability becomes even more important, so adding a new variable to that mix by swapping out the network backend between now and the Wallaby release seems problematic.
In any case, swapping back should ideally be as simple as setting Q_AGENT=openvswitch. I have not looked at the patches, but to swap between OVS and Linux bridge you just define Q_AGENT=linuxbridge for the most part, so I'm expecting that we would just enable Q_AGENT=ovn or something similar for OVN.
I know OVN used to have its own devstack plugin, but if we are making it the default that means it needs to be supported natively in devstack, not as a plugin, so using Q_AGENT=ovn to enable it and making that the new default would seem to be the simplest way to manage that.
But yes, documenting how to enable the old behavior is still important. The example Nova patch shows how to hardcode the old behavior: https://review.opendev.org/c/openstack/nova/+/776944/5/.zuul.yaml#234 It seems to be doing this a little more explicitly than I would like, but it's not that hard. I would suggest adding a second sample local.conf in devstack for standard OVS deployments.
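A minimal local.conf for keeping the classic ML2/OVS backend would be something along these lines (just a sketch, not tested; the Q_* variables and service names are the ones current DevStack appears to use and may need adjusting):

[[local|localrc]]
# keep the ML2/OVS backend instead of the ML2/OVN default
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_TENANT_NETWORK_TYPE=vxlan
# turn off the OVN services and bring back the classic Neutron agents
disable_service ovn-northd ovn-controller q-ovn-metadata-agent
enable_service q-agt q-dhcp q-l3 q-meta

CI jobs can express the same thing through the devstack_localrc and devstack_services job variables, which is roughly what the Nova patch above does.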
> > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi > wrote: > > > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > > Hi all, > > > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > > switching the default backend in Devstack to OVN. A lot of work has > > > > been done since, from porting the OVN devstack module to the DevStack > > > > tree, refactoring the DevStack module to install OVN from distro > > > > packages, implementing features to close the parity gap with ML2/OVS, > > > > fixing issues with tests and distros, etc... > > > > > > > > We are now very close to being able to make the switch and we've > > > > thought about sending this email to the broader community to raise > > > > awareness about this change as well as bring more attention to the > > > > patches that are current on review. > > > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > > > going to be developed and maintained by the upstream Neutron > > > > community. > > > can we ensure that this does not happen until the xena release. > > > in generall i think its ok to change the default but not this late in > > the cycle. > > > i would also like to ensure we keep at least one non ovn based multi > > node job in > > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is > > merged and possible after. > > > right now the event/neutorn interaction is not the same during move > > operations. > > > > > > > > Below is a e per project explanation with relevant links and issues of > > > > where we stand with this work right now: > > > > > > > > * Keystone: > > > > > > > > Everything should be good for Keystone, the gate is happy with the > > > > changes. Here is the test patch: > > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > > > * Glance: > > > > > > > > Everything should be good for Glace, the gate is happy with the > > > > changes. Here is the test patch: > > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > > > * Swift: > > > > > > > > Everything should be good for Swift, the gate is happy with the > > > > changes. Here is the test patch: > > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > > > * Ironic: > > > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > > > Here is the test patch: > > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > > > * Cinder: > > > > > > > > Cinder is almost complete. There's one test failure in the > > > > "tempest-slow-py3" job run on the > > > > "test_port_security_macspoofing_port" test. > > > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > > been fixed upstream [2] and the fix has been backported down to the > > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > > from packages we are currently waiting for this fix to be included in > > > > the packages for Ubuntu Focal (it's based on OVN 20.03). 
I already > > > > contacted the package maintainer which has been very supportive of > > > > this work and will work on the package update, but he maintain a > > > > handful of backports in that package which is not yet included in OVN > > > > 20.03 upstream and he's now working with the core OVN community [4] to > > > > include it first in the branch and then create a new package for it. > > > > Hopefully this will happen soon. > > > > > > > > But for now we have a few options moving on with this issue: > > > > > > > > 1- Wait for the new package version > > > > 2- Mark the test as unstable until we get the new package version > > > > 3- Compile OVN from source instead of installing it from packages > > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > > i dont think we should default to ovn untill a souce build is not > > required. > > > compiling form souce while not supper expensice still adds time to the > > job > > > execution and im not sure we should be paying that cost on every > > devstack job run. > > > > > > we could maybe compile it once and bake the package into the image or > > host it on a mirror > > > but i think we should avoid this option if we have alternitives. > > > > > > > > What do you think about it ? > > > > > > > > Here is the test patch for Cinder: > > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > > > * Nova: > > > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > > which is not part of an ML2/OVN deployment. The patch changes the code > > > > to check if that system unit exists before trying to stop it. > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > > which can be removed one release cycle after we switch DevStack to > > > > ML2/OVN. Grenade will test updating from the release version to the > > > > master branch but, since the default of the released version is not > > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > > > is not supported. > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > > > therefore we are temporarily enabling ML2/OVS for this job until that > > > > feature lands in core OVN. > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > > > because he feels like a change in the default network driver a few > > > > weeks prior to the upstream code freeze can be concerning. We do not > > > > know yet precisely when we are changing the default due to the current > > > > patches we need to get merged but, if this is a shared feeling among > > > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > > > until we get a new release in OpenStack. > > > yep this is still my view. > > > i would suggest we do the work required in the repos but not merge it > > until the xena release > > > is open. 
thats technically at RC1 so march 25th > > > i think we can safely do the swich after that but i would not change the > > defualt in any project > > > before then. > > > > > > > > Here's the test patch for Nova: > > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > > > * DevStack: > > > > > > > > And this is the final patch that will make this all happen: > > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > > > a long and bumpy road to get to this point and I would like to say > > > > thanks to everyone involved so far and everyone that read the whole > > > > email, please let me know your thoughts. > > > thanks for working on this. > > > > > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > > [2] > > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > > > > [3] > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > > > > [4] > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > > > > [5] > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > > > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > > > Cheers, > > > > Lucas > > > > ++ Thank you indeed for working diligently on this important change. > > > > Please do note that devstack, and the base job that you're modifying > > is used by many other projects besides the ones that you have > > enumerated in the subject line. > > I suggest using [all] as a better subject line indicator to get the > > attention of folks like me who have filters based on the subject line. > > Also, the network substrate is important for the project I help > > maintain: Manila, which provides shared file systems over a network - > > so I followed your lead and submitted a dependent patch. I hope to > > reach out to you in case we see some breakages: > > https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 > > > > > > > > > > > > > > > > > > > From skaplons at redhat.com Wed Mar 3 07:32:31 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 03 Mar 2021 08:32:31 +0100 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <3222930.vunY0J1ypg@p1> Hi, Dnia środa, 3 marca 2021 05:01:24 CET Sean Mooney pisze: > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > Hi, > > > > Thanks for all your hard work on this. > > > > I'm wondering is there any doc proposed for devstack to tell people who > > are > > not interested in OVN to keep the current devstack behaviour? I have a > > feeling that using OVN as default Neutron driver would break the CI jobs > > for some projects like Octavia, Trove, etc. which rely on ovs port for the > > set up. You can look on how we set some of our jobs to use ML2/OVS: https:// review.opendev.org/c/openstack/neutron-tempest-plugin/+/749503/26/zuul.d/ master_jobs.yaml#121 > > well ovn is just an alternivie contoler for ovs. > so ovn replace the neutron l2 agent it does not replace ovs. > project like octavia or trove that deploy loadblances or dbs in vms should > not be able to observe a difference. 
they may still want to deploy ml2/ovs > but unless they are doing something directly on the host like adding port > directly to ovs because they are not using vms they should not be aware of > this change. > > my reticense to cahnge at this point in the cyle for nova is motiveated > maily by gate stablity. the active contibutes to nova dont really have > experince with ovn and how to debug it in the gate. we also are getting > close to FF when we tend to get a lot of patches and the gate stablity > become even more imporant so adding a new vaiabrly to that mix but swaping > out the networkbacked between now and the wallaby release seams > problematic. > > in any cases swapping back should ideallly be as simple as setting > Q_AGENT=openvswitch. > > i have not looked at the patches but to swap betwween ovs and linuxbidge you > just define Q_AGENT=linuxbridge for the most part so im expecting that we > would just enable Q_AGENT=ovn or somthing simialr for ovn. > > i know ovn used to have its own devstack plugin but if we are makeing it the > default that means it need to be support nativly in devstack not as a > plugin so useing Q_AGENT=ovn to enable it and make that the new default > would seam to be the simplest way to manage that. > > but yes documenting how to enabel the old behavior is still important > the example nova patch shows how to hardcode the old behavior > https://review.opendev.org/c/openstack/nova/+/776944/5/.zuul.yaml#234 > it seams to be doing this a little more explictly then i would like but its > not that hard. i would suggest adding a second sample loca.conf in devstack > for standard ovs deployments. > > --- > > Lingxian Kong > > Senior Cloud Engineer (Catalyst Cloud) > > Trove PTL (OpenStack) > > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi > > > > wrote: > > > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > > > Hi all, > > > > > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > > > switching the default backend in Devstack to OVN. A lot of work has > > > > > been done since, from porting the OVN devstack module to the > > > > > DevStack > > > > > tree, refactoring the DevStack module to install OVN from distro > > > > > packages, implementing features to close the parity gap with > > > > > ML2/OVS, > > > > > fixing issues with tests and distros, etc... > > > > > > > > > > We are now very close to being able to make the switch and we've > > > > > thought about sending this email to the broader community to raise > > > > > awareness about this change as well as bring more attention to the > > > > > patches that are current on review. > > > > > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > > > discontinued and/or not supported anymore. The ML2/OVS driver is > > > > > still > > > > > going to be developed and maintained by the upstream Neutron > > > > > community. > > > > > > > > can we ensure that this does not happen until the xena release. > > > > in generall i think its ok to change the default but not this late in > > > > > > the cycle. > > > > > > > i would also like to ensure we keep at least one non ovn based multi > > > > > > node job in > > > > > > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is > > > > > > merged and possible after. 
> > > > > > > right now the event/neutorn interaction is not the same during move > > > > > > operations. > > > > > > > > Below is a e per project explanation with relevant links and issues > > > > > of > > > > > where we stand with this work right now: > > > > > > > > > > * Keystone: > > > > > > > > > > Everything should be good for Keystone, the gate is happy with the > > > > > changes. Here is the test patch: > > > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > > > > > * Glance: > > > > > > > > > > Everything should be good for Glace, the gate is happy with the > > > > > changes. Here is the test patch: > > > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > > > > > * Swift: > > > > > > > > > > Everything should be good for Swift, the gate is happy with the > > > > > changes. Here is the test patch: > > > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > > > > > * Ironic: > > > > > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > > > ML2/OVS and everything is merged, so we should be good for Ironic > > > > > too. > > > > > Here is the test patch: > > > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > > > > > * Cinder: > > > > > > > > > > Cinder is almost complete. There's one test failure in the > > > > > "tempest-slow-py3" job run on the > > > > > "test_port_security_macspoofing_port" test. > > > > > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > > > been fixed upstream [2] and the fix has been backported down to the > > > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > > > from packages we are currently waiting for this fix to be included > > > > > in > > > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > > > contacted the package maintainer which has been very supportive of > > > > > this work and will work on the package update, but he maintain a > > > > > handful of backports in that package which is not yet included in > > > > > OVN > > > > > 20.03 upstream and he's now working with the core OVN community [4] > > > > > to > > > > > include it first in the branch and then create a new package for it. > > > > > Hopefully this will happen soon. > > > > > > > > > > But for now we have a few options moving on with this issue: > > > > > > > > > > 1- Wait for the new package version > > > > > 2- Mark the test as unstable until we get the new package version > > > > > 3- Compile OVN from source instead of installing it from packages > > > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > > > > > > > i dont think we should default to ovn untill a souce build is not > > > > > > required. > > > > > > > compiling form souce while not supper expensice still adds time to the > > > > > > job > > > > > > > execution and im not sure we should be paying that cost on every > > > > > > devstack job run. > > > > > > > we could maybe compile it once and bake the package into the image or > > > > > > host it on a mirror > > > > > > > but i think we should avoid this option if we have alternitives. > > > > > > > > > What do you think about it ? 
> > > > > > > > > > Here is the test patch for Cinder: > > > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > > > > > * Nova: > > > > > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: > > > > > Basically > > > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > > > which is not part of an ML2/OVN deployment. The patch changes the > > > > > code > > > > > to check if that system unit exists before trying to stop it. > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > > > which can be removed one release cycle after we switch DevStack to > > > > > ML2/OVN. Grenade will test updating from the release version to the > > > > > master branch but, since the default of the released version is not > > > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade > > > > > job > > > > > is not supported. > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > > > minimum bandwidth feature which is not yet supported by ML2/OVN > > > > > [5][6] > > > > > therefore we are temporarily enabling ML2/OVS for this job until > > > > > that > > > > > feature lands in core OVN. > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about > > > > > these > > > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for > > > > > now > > > > > because he feels like a change in the default network driver a few > > > > > weeks prior to the upstream code freeze can be concerning. We do not > > > > > know yet precisely when we are changing the default due to the > > > > > current > > > > > patches we need to get merged but, if this is a shared feeling among > > > > > the Nova community I can work on enabling ML2/OVS on all jobs in > > > > > Nova > > > > > until we get a new release in OpenStack. > > > > > > > > yep this is still my view. > > > > i would suggest we do the work required in the repos but not merge it > > > > > > until the xena release > > > > > > > is open. thats technically at RC1 so march 25th > > > > i think we can safely do the swich after that but i would not change > > > > the > > > > > > defualt in any project > > > > > > > before then. > > > > > > > > > Here's the test patch for Nova: > > > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > > > > > * DevStack: > > > > > > > > > > And this is the final patch that will make this all happen: > > > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's > > > > > been > > > > > a long and bumpy road to get to this point and I would like to say > > > > > thanks to everyone involved so far and everyone that read the whole > > > > > email, please let me know your thoughts. > > > > > > > > thanks for working on this. 
> > > > > > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > > > [2] > > > > > > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.47 > > > 3776-1-numans at ovn.org/> > > > > > > [3] > > > > > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab3 > > > 9380cb> > > > > > > [4] > > > > > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961. > > > html> > > > > > > [5] > > > > > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe > > > 76a9ee1/gate/post_test_hook.sh#L129-L143> > > > > > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > > > > > Cheers, > > > > > Lucas > > > > > > ++ Thank you indeed for working diligently on this important change. > > > > > > Please do note that devstack, and the base job that you're modifying > > > is used by many other projects besides the ones that you have > > > enumerated in the subject line. > > > I suggest using [all] as a better subject line indicator to get the > > > attention of folks like me who have filters based on the subject line. > > > Also, the network substrate is important for the project I help > > > maintain: Manila, which provides shared file systems over a network - > > > so I followed your lead and submitted a dependent patch. I hope to > > > reach out to you in case we see some breakages: > > > https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From syedammad83 at gmail.com Wed Mar 3 07:31:07 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Wed, 3 Mar 2021 12:31:07 +0500 Subject: [victoria][nova] Instance Live Migration Message-ID: Hi, I am trying to configure live migration of instance backed by lvm iscsi storage. I have done below mentioned config in nova.conf. [libvirt] virt_type = kvm volume_use_multipath = True live_migration_uri = qemu+ssh://nova@%s/system The live migration is being failed with below logs. 2021-03-03 07:15:51.917 706290 INFO nova.compute.manager [-] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Took 6.02 seconds for pre_live_migration on destination host kvm12-a1-khi01. 
2021-03-03 07:15:52.029 706290 ERROR nova.virt.libvirt.driver [-] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+ssh://nova at kvm12-a1-khi01/system: Cannot recv data: Host key verification failed.: Connection reset by peer: libvirt.libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+ssh://nova at kvm12-a1-khi01/system: Cannot recv data: Host key verification failed.: Connection reset by peer
2021-03-03 07:15:52.444 706290 ERROR nova.virt.libvirt.driver [-] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Migration operation has aborted
2021-03-03 07:15:52.489 706290 INFO nova.compute.manager [-] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Swapping old allocation on dict_keys(['3e5679c8-165e-4c35-9de5-ef0d3af0f9d8']) held by migration f519f768-5a62-4f17-9ac0-1cc2c814ec3a for instance
2021-03-03 07:15:55.927 706290 WARNING nova.compute.manager [req-116cf2ef-4720-421e-9aad-75451e3deafe b95c0ad0fb9848598988b5bff99ca55b abf41db7e6dc429086713f4b46cd3655 - default default] [instance: 7de891d3-c712-434c-b39f-08e512e46e83] Received unexpected event network-vif-unplugged-df7666d6-add3-4638-98e1-14db30122c3f for instance with vm_state active and task_state None.

I am able to log in from host1 to host2 via ssh without a password. Also, I have disabled host key checking in the ssh config of the nova user (~/.ssh/config):

Host *
 StrictHostKeyChecking no
 UserKnownHostsFile=/dev/null

I am able to connect to host2 from host1 and vice versa via the virsh URI.

nova at kvm10-a1-khi01:~$ virsh -c qemu+ssh://nova at kvm12-a1-khi01/system
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
 'quit' to quit
virsh # exit

Not sure what I am missing here.

- Ammad
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From sbauza at redhat.com Wed Mar 3 08:44:50 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 3 Mar 2021 09:44:50 +0100 Subject: anti-affinity: what're the mechanics? In-Reply-To: <371d077a5d2e99be4df344eceb25c43f@jots.org> References: <371d077a5d2e99be4df344eceb25c43f@jots.org> Message-ID:
On Tue, Mar 2, 2021 at 9:44 PM Ken D'Ambrosio wrote:
> Hey, all. Turns out we really need anti-affinity running on our (very > freaking old -- Juno) clouds. I'm trying to find docs that describe its > functionality, and am failing. If I enable it, and (say) have 10 > hypervisors, and 12 VMs to fire off, what happens when VM #11 goes to > fire? Does it fail, or does the scheduler just continue to at least > *try* to maintain as few possible on each hypervisor?

No, it's a hard stop. To be clear, that means that the instances *from the same instance group* (with an anti-affinity policy) are never placed on the same compute node. With your example, this means that if you have 10 compute nodes, an anti-affinity instance group can support up to 10 instances, because an eleventh instance created in the same group would get a NoValidHost. This being said, you can of course create more than 10 instances, provided they don't share the same group.

As for what Sean said, soft-anti-affinity is a newer feature that was provided in the Liberty timeframe. The difference from hard anti-affinity is that it is no longer a hard stop: you can have more than 10 instances within your group, it's just that the scheduler will try to spread them correctly (using weighers) between all your computes.
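To make that concrete, the usual workflow is to create a server group with the anti-affinity policy and then pass its UUID as a scheduler hint when booting each member. A rough example with the current openstack CLI is below (on a Juno cloud the older nova server-group-create / nova boot --hint group=<uuid> equivalents apply, and the ServerGroupAntiAffinityFilter needs to be in the scheduler's filter list):

openstack server group create --policy anti-affinity db-group
openstack server group show db-group
# boot each member with the group UUID as a scheduler hint
openstack server create --image <image> --flavor m1.small \
  --hint group=<db-group-uuid> db-vm-1
# newer releases also accept --policy soft-anti-affinity for the
# weigher-based spreading behaviour described above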
FWIW, I wouldn't address backporting the feature as it's not only providing soft-affiinity and soft-anti-affinity weighers but it also adds those policies into the os-servergroup API. You should rather think about upgrading to Liberty if you do really care about soft-affinity or not use the filter. There are other possibilities for spreading instances between computes (for example using aggregates) that don't give you NoValidHosts exceptions on boots. -Sylvain Thanks! > > -Ken > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Wed Mar 3 10:32:11 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 3 Mar 2021 10:32:11 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Mon, Mar 1, 2021 at 6:25 PM Sean Mooney wrote: > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > Hi all, > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > switching the default backend in Devstack to OVN. A lot of work has > > been done since, from porting the OVN devstack module to the DevStack > > tree, refactoring the DevStack module to install OVN from distro > > packages, implementing features to close the parity gap with ML2/OVS, > > fixing issues with tests and distros, etc... > > > > We are now very close to being able to make the switch and we've > > thought about sending this email to the broader community to raise > > awareness about this change as well as bring more attention to the > > patches that are current on review. > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > going to be developed and maintained by the upstream Neutron > > community. > can we ensure that this does not happen until the xena release. > in generall i think its ok to change the default but not this late in the cycle. > i would also like to ensure we keep at least one non ovn based multi node job in > nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. > right now the event/neutorn interaction is not the same during move operations. I think it's fair to wait for the new release cycle to start since we are just a few weeks away and then we can flip the default in DevStack. I will state this in the last DevStack patch and set the workflow -1 until then. That said, I also think that other patches could be merged before that, those are just adapting a few scripts to work with ML2/OVN and enabling ML2/OVS explicitly where it makes sense. That way, when time comes, we will just need to merge the DevStack patch. > > > > Below is a e per project explanation with relevant links and issues of > > where we stand with this work right now: > > > > * Keystone: > > > > Everything should be good for Keystone, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > * Glance: > > > > Everything should be good for Glace, the gate is happy with the > > changes. Here is the test patch: > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > * Swift: > > > > Everything should be good for Swift, the gate is happy with the > > changes. 
Here is the test patch: > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > * Ironic: > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > progress, we've changed most of the Ironic jobs to explicitly enable > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > Here is the test patch: > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > * Cinder: > > > > Cinder is almost complete. There's one test failure in the > > "tempest-slow-py3" job run on the > > "test_port_security_macspoofing_port" test. > > > > This failure is due to a bug in core OVN [1]. This bug has already > > been fixed upstream [2] and the fix has been backported down to the > > branch-20.03 [3] of the OVN project. However, since we install OVN > > from packages we are currently waiting for this fix to be included in > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > contacted the package maintainer which has been very supportive of > > this work and will work on the package update, but he maintain a > > handful of backports in that package which is not yet included in OVN > > 20.03 upstream and he's now working with the core OVN community [4] to > > include it first in the branch and then create a new package for it. > > Hopefully this will happen soon. > > > > But for now we have a few options moving on with this issue: > > > > 1- Wait for the new package version > > 2- Mark the test as unstable until we get the new package version > > 3- Compile OVN from source instead of installing it from packages > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > i dont think we should default to ovn untill a souce build is not required. > compiling form souce while not supper expensice still adds time to the job > execution and im not sure we should be paying that cost on every devstack job run. > > we could maybe compile it once and bake the package into the image or host it on a mirror > but i think we should avoid this option if we have alternitives. Since this patch https://review.opendev.org/c/openstack/devstack/+/763402 we no longer default to compiling OVN from source anymore, it's installed using the distro packages now. Yeah the alternatives are not straight forward, I was talking to some core OVN folks yesterday regarding the backports proposed by Canonical to the 20.03 branch and they seem to be fine with it, it needs more reviews since there are around ~20 patches being backported there. But I hope they are going to be looking into it and we should get a new OVN package for Ubuntu Focal soon. > > > > What do you think about it ? > > > > Here is the test patch for Cinder: > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > * Nova: > > > > There are a few patches waiting for review for Nova, which are: > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > the scripts were trying to stop the Neutron agent (q-agt) process > > which is not part of an ML2/OVN deployment. The patch changes the code > > to check if that system unit exists before trying to stop it. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > which can be removed one release cycle after we switch DevStack to > > ML2/OVN. 
Grenade will test updating from the release version to the > > master branch but, since the default of the released version is not > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > is not supported. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > therefore we are temporarily enabling ML2/OVS for this job until that > > feature lands in core OVN. > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > because he feels like a change in the default network driver a few > > weeks prior to the upstream code freeze can be concerning. We do not > > know yet precisely when we are changing the default due to the current > > patches we need to get merged but, if this is a shared feeling among > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > until we get a new release in OpenStack. > yep this is still my view. > i would suggest we do the work required in the repos but not merge it until the xena release > is open. thats technically at RC1 so march 25th > i think we can safely do the swich after that but i would not change the defualt in any project > before then. Yeah, no problem waiting from my side, but hoping we can keep reviewing the rest of the patches until then. > > > > Here's the test patch for Nova: > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > * DevStack: > > > > And this is the final patch that will make this all happen: > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > a long and bumpy road to get to this point and I would like to say > > thanks to everyone involved so far and everyone that read the whole > > email, please let me know your thoughts. > thanks for working on this. Thanks for the inputs > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ > > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb > > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html > > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > Cheers, > > Lucas > > > > > From lucasagomes at gmail.com Wed Mar 3 10:35:09 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 3 Mar 2021 10:35:09 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Wed, Mar 3, 2021 at 12:57 AM Goutham Pacha Ravi wrote: > > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > Hi all, > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > switching the default backend in Devstack to OVN. 
A lot of work has > > > been done since, from porting the OVN devstack module to the DevStack > > > tree, refactoring the DevStack module to install OVN from distro > > > packages, implementing features to close the parity gap with ML2/OVS, > > > fixing issues with tests and distros, etc... > > > > > > We are now very close to being able to make the switch and we've > > > thought about sending this email to the broader community to raise > > > awareness about this change as well as bring more attention to the > > > patches that are current on review. > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > > going to be developed and maintained by the upstream Neutron > > > community. > > can we ensure that this does not happen until the xena release. > > in generall i think its ok to change the default but not this late in the cycle. > > i would also like to ensure we keep at least one non ovn based multi node job in > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. > > right now the event/neutorn interaction is not the same during move operations. > > > > > > Below is a e per project explanation with relevant links and issues of > > > where we stand with this work right now: > > > > > > * Keystone: > > > > > > Everything should be good for Keystone, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > * Glance: > > > > > > Everything should be good for Glace, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > * Swift: > > > > > > Everything should be good for Swift, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > * Ironic: > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > ML2/OVS and everything is merged, so we should be good for Ironic too. > > > Here is the test patch: > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > * Cinder: > > > > > > Cinder is almost complete. There's one test failure in the > > > "tempest-slow-py3" job run on the > > > "test_port_security_macspoofing_port" test. > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > been fixed upstream [2] and the fix has been backported down to the > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > from packages we are currently waiting for this fix to be included in > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > contacted the package maintainer which has been very supportive of > > > this work and will work on the package update, but he maintain a > > > handful of backports in that package which is not yet included in OVN > > > 20.03 upstream and he's now working with the core OVN community [4] to > > > include it first in the branch and then create a new package for it. > > > Hopefully this will happen soon. 
> > > > > > But for now we have a few options moving on with this issue: > > > > > > 1- Wait for the new package version > > > 2- Mark the test as unstable until we get the new package version > > > 3- Compile OVN from source instead of installing it from packages > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > i dont think we should default to ovn untill a souce build is not required. > > compiling form souce while not supper expensice still adds time to the job > > execution and im not sure we should be paying that cost on every devstack job run. > > > > we could maybe compile it once and bake the package into the image or host it on a mirror > > but i think we should avoid this option if we have alternitives. > > > > > > What do you think about it ? > > > > > > Here is the test patch for Cinder: > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > * Nova: > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > which is not part of an ML2/OVN deployment. The patch changes the code > > > to check if that system unit exists before trying to stop it. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > which can be removed one release cycle after we switch DevStack to > > > ML2/OVN. Grenade will test updating from the release version to the > > > master branch but, since the default of the released version is not > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > > is not supported. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > > therefore we are temporarily enabling ML2/OVS for this job until that > > > feature lands in core OVN. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > > because he feels like a change in the default network driver a few > > > weeks prior to the upstream code freeze can be concerning. We do not > > > know yet precisely when we are changing the default due to the current > > > patches we need to get merged but, if this is a shared feeling among > > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > > until we get a new release in OpenStack. > > yep this is still my view. > > i would suggest we do the work required in the repos but not merge it until the xena release > > is open. thats technically at RC1 so march 25th > > i think we can safely do the swich after that but i would not change the defualt in any project > > before then. > > > > > > Here's the test patch for Nova: > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > * DevStack: > > > > > > And this is the final patch that will make this all happen: > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. 
It's been a long and bumpy road to get to this point and I would like to say thanks to everyone involved so far and everyone that read the whole email, please let me know your thoughts.
> > thanks for working on this.
> > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg
> > > [1] https://bugs.launchpad.net/tempest/+bug/1728886
> > > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/
> > > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb
> > > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html
> > > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143
> > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html
> > > Cheers,
> > > Lucas
> ++ Thank you indeed for working diligently on this important change.
> Please do note that devstack, and the base job that you're modifying is used by many other projects besides the ones that you have enumerated in the subject line. I suggest using [all] as a better subject line indicator to get the attention of folks like me who have filters based on the subject line. Also, the network substrate is important for the project I help maintain: Manila, which provides shared file systems over a network - so I followed your lead and submitted a dependent patch. I hope to reach out to you in case we see some breakages: https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346
Thanks for the suggestion, you are right, the [all] tag would make sense. I will try to raise as much awareness as possible about this change in the default. Also thanks for proposing the test patch for Manila, I see the gate is happy there! But yeah, feel free to reach out to me if anything breaks and I will gladly look into it.
From lucasagomes at gmail.com Wed Mar 3 10:42:52 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 3 Mar 2021 10:42:52 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: <3222930.vunY0J1ypg@p1> References: <3222930.vunY0J1ypg@p1> Message-ID:
On Wed, Mar 3, 2021 at 7:35 AM Slawek Kaplonski wrote:
> Hi,
> On Wednesday, 3 March 2021 05:01:24 CET, Sean Mooney wrote:
> > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote:
> > > Hi,
> > > Thanks for all your hard work on this.
> > > I'm wondering is there any doc proposed for devstack to tell people who are not interested in OVN to keep the current devstack behaviour? I have a feeling that using OVN as default Neutron driver would break the CI jobs for some projects like Octavia, Trove, etc. which rely on ovs port for the set up.
> You can look at how we set some of our jobs to use ML2/OVS: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/749503/26/zuul.d/master_jobs.yaml#121
Thanks Lingxian and Slaweq for the suggestions/replies. Apart from what Slaweq pointed out, the DevStack documentation has a sample config file (https://docs.openstack.org/devstack/latest/#create-a-local-conf) as part of the steps to deploy it. I was thinking I could propose another sample file that would enable ML2/OVS for the deployment and add a note in that doc, if you think it's worth it.
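For CI jobs the same pinning can be done in the job definition, along the lines of the neutron-tempest-plugin jobs Slaweq linked; a rough sketch only (job name is made up, and the variable/service names are assumed from the existing DevStack-based jobs, adjust as needed):

- job:
    name: my-project-tempest-ovs
    parent: devstack-tempest
    vars:
      devstack_localrc:
        Q_AGENT: openvswitch
        Q_ML2_PLUGIN_MECHANISM_DRIVERS: openvswitch
        Q_ML2_TENANT_NETWORK_TYPE: vxlan
      devstack_services:
        ovn-northd: false
        ovn-controller: false
        q-ovn-metadata-agent: false
        q-agt: true
        q-dhcp: true
        q-l3: true
        q-meta: true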
Also what gives us more confidence regarding projects like Octavia is that TripleO already uses ML2/OVN as the default driver for their overcloud, so many of these projects are already being tested/used with OVN. > > > > > well ovn is just an alternivie contoler for ovs. > > so ovn replace the neutron l2 agent it does not replace ovs. > > project like octavia or trove that deploy loadblances or dbs in vms should > > not be able to observe a difference. they may still want to deploy ml2/ovs > > but unless they are doing something directly on the host like adding port > > directly to ovs because they are not using vms they should not be aware of > > this change. > > > > my reticense to cahnge at this point in the cyle for nova is motiveated > > maily by gate stablity. the active contibutes to nova dont really have > > experince with ovn and how to debug it in the gate. we also are getting > > close to FF when we tend to get a lot of patches and the gate stablity > > become even more imporant so adding a new vaiabrly to that mix but swaping > > out the networkbacked between now and the wallaby release seams > > problematic. > > > > in any cases swapping back should ideallly be as simple as setting > > Q_AGENT=openvswitch. > > > > i have not looked at the patches but to swap betwween ovs and linuxbidge you > > just define Q_AGENT=linuxbridge for the most part so im expecting that we > > would just enable Q_AGENT=ovn or somthing simialr for ovn. > > > > i know ovn used to have its own devstack plugin but if we are makeing it the > > default that means it need to be support nativly in devstack not as a > > plugin so useing Q_AGENT=ovn to enable it and make that the new default > > would seam to be the simplest way to manage that. > > > > but yes documenting how to enabel the old behavior is still important > > the example nova patch shows how to hardcode the old behavior > > https://review.opendev.org/c/openstack/nova/+/776944/5/.zuul.yaml#234 > > it seams to be doing this a little more explictly then i would like but its > > not that hard. i would suggest adding a second sample loca.conf in devstack > > for standard ovs deployments. > > > --- > > > Lingxian Kong > > > Senior Cloud Engineer (Catalyst Cloud) > > > Trove PTL (OpenStack) > > > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > > > > > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi > > > > > > wrote: > > > > On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > > > > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > > > > Hi all, > > > > > > > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > > > > switching the default backend in Devstack to OVN. A lot of work has > > > > > > been done since, from porting the OVN devstack module to the > > > > > > DevStack > > > > > > tree, refactoring the DevStack module to install OVN from distro > > > > > > packages, implementing features to close the parity gap with > > > > > > ML2/OVS, > > > > > > fixing issues with tests and distros, etc... > > > > > > > > > > > > We are now very close to being able to make the switch and we've > > > > > > thought about sending this email to the broader community to raise > > > > > > awareness about this change as well as bring more attention to the > > > > > > patches that are current on review. > > > > > > > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > > > > discontinued and/or not supported anymore. 
The ML2/OVS driver is > > > > > > still > > > > > > going to be developed and maintained by the upstream Neutron > > > > > > community. > > > > > > > > > > can we ensure that this does not happen until the xena release. > > > > > in generall i think its ok to change the default but not this late in > > > > > > > > the cycle. > > > > > > > > > i would also like to ensure we keep at least one non ovn based multi > > > > > > > > node job in > > > > > > > > > nova until https://review.opendev.org/c/openstack/nova/+/602432 is > > > > > > > > merged and possible after. > > > > > > > > > right now the event/neutorn interaction is not the same during move > > > > > > > > operations. > > > > > > > > > > Below is a e per project explanation with relevant links and issues > > > > > > of > > > > > > where we stand with this work right now: > > > > > > > > > > > > * Keystone: > > > > > > > > > > > > Everything should be good for Keystone, the gate is happy with the > > > > > > changes. Here is the test patch: > > > > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > > > > > > > * Glance: > > > > > > > > > > > > Everything should be good for Glace, the gate is happy with the > > > > > > changes. Here is the test patch: > > > > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > > > > > > > * Swift: > > > > > > > > > > > > Everything should be good for Swift, the gate is happy with the > > > > > > changes. Here is the test patch: > > > > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > > > > > > > * Ironic: > > > > > > > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > > > > ML2/OVS and everything is merged, so we should be good for Ironic > > > > > > too. > > > > > > Here is the test patch: > > > > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > > > > > > > * Cinder: > > > > > > > > > > > > Cinder is almost complete. There's one test failure in the > > > > > > "tempest-slow-py3" job run on the > > > > > > "test_port_security_macspoofing_port" test. > > > > > > > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > > > > been fixed upstream [2] and the fix has been backported down to the > > > > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > > > > from packages we are currently waiting for this fix to be included > > > > > > in > > > > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > > > > contacted the package maintainer which has been very supportive of > > > > > > this work and will work on the package update, but he maintain a > > > > > > handful of backports in that package which is not yet included in > > > > > > OVN > > > > > > 20.03 upstream and he's now working with the core OVN community [4] > > > > > > to > > > > > > include it first in the branch and then create a new package for it. > > > > > > Hopefully this will happen soon. > > > > > > > > > > > > But for now we have a few options moving on with this issue: > > > > > > > > > > > > 1- Wait for the new package version > > > > > > 2- Mark the test as unstable until we get the new package version > > > > > > 3- Compile OVN from source instead of installing it from packages > > > > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > > > > > > > > > i dont think we should default to ovn untill a souce build is not > > > > > > > > required. 
> > > > > > > > > compiling form souce while not supper expensice still adds time to the > > > > > > > > job > > > > > > > > > execution and im not sure we should be paying that cost on every > > > > > > > > devstack job run. > > > > > > > > > we could maybe compile it once and bake the package into the image or > > > > > > > > host it on a mirror > > > > > > > > > but i think we should avoid this option if we have alternitives. > > > > > > > > > > > What do you think about it ? > > > > > > > > > > > > Here is the test patch for Cinder: > > > > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > > > > > > > * Nova: > > > > > > > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: > > > > > > Basically > > > > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > > > > which is not part of an ML2/OVN deployment. The patch changes the > > > > > > code > > > > > > to check if that system unit exists before trying to stop it. > > > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > > > > which can be removed one release cycle after we switch DevStack to > > > > > > ML2/OVN. Grenade will test updating from the release version to the > > > > > > master branch but, since the default of the released version is not > > > > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade > > > > > > job > > > > > > is not supported. > > > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > > > > minimum bandwidth feature which is not yet supported by ML2/OVN > > > > > > [5][6] > > > > > > therefore we are temporarily enabling ML2/OVS for this job until > > > > > > that > > > > > > feature lands in core OVN. > > > > > > > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about > > > > > > these > > > > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for > > > > > > now > > > > > > because he feels like a change in the default network driver a few > > > > > > weeks prior to the upstream code freeze can be concerning. We do not > > > > > > know yet precisely when we are changing the default due to the > > > > > > current > > > > > > patches we need to get merged but, if this is a shared feeling among > > > > > > the Nova community I can work on enabling ML2/OVS on all jobs in > > > > > > Nova > > > > > > until we get a new release in OpenStack. > > > > > > > > > > yep this is still my view. > > > > > i would suggest we do the work required in the repos but not merge it > > > > > > > > until the xena release > > > > > > > > > is open. thats technically at RC1 so march 25th > > > > > i think we can safely do the swich after that but i would not change > > > > > the > > > > > > > > defualt in any project > > > > > > > > > before then. 
> > > > > > > > > > > Here's the test patch for Nova: > > > > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > > > > > > > * DevStack: > > > > > > > > > > > > And this is the final patch that will make this all happen: > > > > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's > > > > > > been > > > > > > a long and bumpy road to get to this point and I would like to say > > > > > > thanks to everyone involved so far and everyone that read the whole > > > > > > email, please let me know your thoughts. > > > > > > > > > > thanks for working on this. > > > > > > > > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > > > > [2] > > > > > > > > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.47 > > > > 3776-1-numans at ovn.org/> > > > > > > > [3] > > > > > > > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab3 > > > > 9380cb> > > > > > > > [4] > > > > > > > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961. > > > > html> > > > > > > > [5] > > > > > > > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe > > > > 76a9ee1/gate/post_test_hook.sh#L129-L143> > > > > > > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > > > > > > > Cheers, > > > > > > Lucas > > > > > > > > ++ Thank you indeed for working diligently on this important change. > > > > > > > > Please do note that devstack, and the base job that you're modifying > > > > is used by many other projects besides the ones that you have > > > > enumerated in the subject line. > > > > I suggest using [all] as a better subject line indicator to get the > > > > attention of folks like me who have filters based on the subject line. > > > > Also, the network substrate is important for the project I help > > > > maintain: Manila, which provides shared file systems over a network - > > > > so I followed your lead and submitted a dependent patch. I hope to > > > > reach out to you in case we see some breakages: > > > > https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From lyarwood at redhat.com Wed Mar 3 10:46:52 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 3 Mar 2021 10:46:52 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? Message-ID: Hello all, I recently landed a fix in Cirros [1] to allow it to be used by the nova-next job when it is configured to use the q35 machine type [2], something that we would like to make the default sometime in the future. With no release of Cirros on the horizon I was wondering if I could instead host a build of the image somewhere for use solely by the nova-next job? If this isn't possible I'll go back to the Cirros folks upstream and ask for a release but given my change is the only one to land there in almost a year I wanted to ask about this approach first. 
Many thanks in advance, Lee

[1] https://github.com/cirros-dev/cirros/pull/65
[2] https://review.opendev.org/c/openstack/nova/+/708701

From mbultel at redhat.com Wed Mar 3 10:49:50 2021
From: mbultel at redhat.com (Mathieu Bultel)
Date: Wed, 3 Mar 2021 11:49:50 +0100
Subject: [TripleO][Validation] Validation CLI simplification
Message-ID: 

Hi TripleO Folks,

I'm raising this topic to the ML because it appears we have some divergence regarding the design of the way the Validations should be used with and without TripleO, and I wanted to get a larger audience, in particular PTL and core thoughts, around this topic.

The current situation is:
We have an openstack tripleo validator set of sub commands to handle Validation (run, list ...).
The Validation CLI takes several parameters as an entry point, in particular the stack/plan, OpenStack authentication and a static inventory file.

By asking for the stack/plan name, the CLI tries to verify whether the plan or the stack is valid and whether the Overcloud exists somewhere in the cloud, before passing that to the tripleo-ansible-inventory script and trying to generate a static inventory file according to the --plan or stack that has been passed.

The code is mainly here: [1].

This behavior implies several constraints:
* The Validation CLI needs OpenStack authentication in order to do those checks
* It introduces some complexity in the Validation code: querying Heat to get the plan name to be sure the name provided is correct, getting the status of the stack... In the case of a Standalone deployment it adds even more complexity.
* This code is only valid for "standard" deployments and usage, meaning it doesn't work for Standalone or for some Upgrade and FFU stages, and it needs to be bypassed for pre-undercloud deployment.
* We hit several blockers around this part of the code.

My proposal is the following:

Since we are thinking about the future of Validation and we want something more robust, stronger, simpler, more usable and efficient, I propose to get rid of the plan/stack and authentication functionalities in the Validation code, and only ask for a valid inventory provided by the user.
I propose as well to create a new entry point in the TripleO CLI to generate a static inventory, such as:
openstack tripleo inventory generate --output-file my-inv.yaml
and then:
openstack tripleo validator run --validation my-validation --inventory my-inv.yaml

By doing that, I think we gain a lot in simplification, it's more robust, and Validation will only do what it aims for: wrap the Ansible execution to provide better logging information and history.

The main concern about this approach is that the user will have to provide a valid inventory to the Validation CLI.
I understand the point of view of getting something fully autonomous, and the appeal of just kicking off *one* command so that the Validation can be *magically* executed against your cloud, but I think the less complex the Validation code is, the more robust, stable and usable it will be.

Having a dedicated entry point for the inventory, which is a key part of post-deployment actions, seems clearer and more robust as well.
This part of the code could be shared and used for other purposes instead of calling the inventory script stored in tripleo-validations. It could then use the tripleo-common inventory library directly from tripleoclient, instead of calling from client -> tripleo-validations/scripts -> query tripleo-common inventory library.
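As a rough illustration of the intended workflow (the commands are the ones proposed above; the validation name is just a placeholder):

# Generate the static inventory once, or after any major change to the cloud
openstack tripleo inventory generate --output-file my-inv.yaml

# Reuse the same inventory for as many validation runs as needed,
# with no Heat stack lookup and no OpenStack authentication in the validation path
openstack tripleo validator run --validation my-validation --inventory my-inv.yaml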
I know it changes a little bit the usage (adding one command line in the execution process for getting a valid inventory) but it's going in a less buggy and racy direction. And the inventory should be generated only once, or at least at any big major cloud change. So, I'm glad to get your thoughts on that topic and your overall views around this topic. Thanks, Mathieu [1] https://github.com/openstack/tripleo-validations/blob/master/tripleo_validations/tripleo_validator.py#L338-L382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Wed Mar 3 10:50:59 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 3 Mar 2021 23:50:59 +1300 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney wrote: > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > Hi, > > > > Thanks for all your hard work on this. > > > > I'm wondering is there any doc proposed for devstack to tell people who > are > > not interested in OVN to keep the current devstack behaviour? I have a > > feeling that using OVN as default Neutron driver would break the CI jobs > > for some projects like Octavia, Trove, etc. which rely on ovs port for > the > > set up. > > well ovn is just an alternivie contoler for ovs. > so ovn replace the neutron l2 agent it does not replace ovs. > project like octavia or trove that deploy loadblances or dbs in vms should > not be able to observe a difference. > they may still want to deploy ml2/ovs but unless they are doing something > directly on the host like adding port > directly to ovs because they are not using vms they should not be aware of > this change. > Yes, they are. Please see https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 as an example for Octavia. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Wed Mar 3 11:21:03 2021 From: ykarel at redhat.com (Yatin Karel) Date: Wed, 3 Mar 2021 16:51:03 +0530 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20210302150451.ksnug53wzup747t4@yuggoth.org> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> <20210302150451.ksnug53wzup747t4@yuggoth.org> Message-ID: Hi, On Tue, Mar 2, 2021 at 8:40 PM Jeremy Stanley wrote: > > On 2021-03-02 19:58:19 +0530 (+0530), Yatin Karel wrote: > [...] > > is it still necessary to abandon all open reviews? > [...] > > Gerrit will not allow deletion of a branch if there are any changes > still open for it. > -- Ok wasn't aware of it, Thanks for sharing. We have cleaned up all open reviews, please proceed with branch deletion. > Jeremy Stanley Thanks and Regards Yatin Karel From goody698 at gmail.com Wed Mar 3 12:03:44 2021 From: goody698 at gmail.com (Heib, Mohammad (Nokia - IL/Kfar Sava)) Date: Wed, 3 Mar 2021 14:03:44 +0200 Subject: Train/SR-IOV direct port mode live-migration - question Message-ID: <78875ebc-6556-c63b-66a8-68850b748785@gmail.com> Hi All, we recently moved to openstack Train and according to the version release notes the SR-IOV direct port live-migration support was added to this version. 
So we tried a live migration of a VM with two SR-IOV direct ports attached to a bond interface in the VM (no vSwitch or indirect interface, only direct ports), but it seems we have an issue with maintaining network connectivity during and after the live migration. According to [1], I see that in order to maintain network connectivity during the migration we have to add one indirect or vSwitch port to the bond; this interface will carry the traffic during the migration, and once the migration is completed the direct port will go back to being the primary slave on the target VM bond.

My question is: if we don't add the indirect port we will lose connectivity during the migration, which makes sense to me, but why does the connectivity not come back after the migration completes successfully, and why does the bond master boot up with no slaves on the target VM?

According to some documents and blueprints that I found on Google (including [2],[3]), the guest OS should receive a virtual hotplug add event, the bond master should enslave those devices and connectivity should come back, which is not the case here. So I am wondering whether I need to add some scripts to handle those events (and if so, where to add them), or some network flags to the ifcfg-* files, or whether I need to use a specific guest OS.

[1] : https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/libvirt-neutron-sriov-livemigration.html
[2] : https://www.researchgate.net/publication/228722278_Live_migration_with_pass-through_device_for_Linux_VM
[3] : https://openstack.nimeyo.com/72653/openstack-nova-neutron-live-migration-with-direct-passthru

bond master ifcfg file:

DEVICE=bond1
BONDING_OPTS=mode=active-backup
HOTPLUG=yes
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
NAME=bond1
ONBOOT=yes

slaves ifcfg files:

TYPE=Ethernet
DEVICE=eth0
HOTPLUG=no
ONBOOT=no
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
NM_CONTROLLED=no

guest OS:

CentOS Linux release 7.7.1908 (Core)

Thanks in advance for any help :),
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marios at redhat.com Wed Mar 3 12:45:32 2021
From: marios at redhat.com (Marios Andreou)
Date: Wed, 3 Mar 2021 14:45:32 +0200
Subject: [TripleO][Validation] Validation CLI simplification
In-Reply-To: 
References: 
Message-ID: 

On Wed, Mar 3, 2021 at 12:51 PM Mathieu Bultel wrote:
> Hi TripleO Folks,
>
> I'm raising this topic to the ML because it appears we have some
> divergence regarding some design around the way the Validations should be
> used with and without TripleO and I wanted to have a larger audience, in
> particular PTL and core thoughts around this topic.
>
> The current situation is:
> We have an openstack tripleo validator set of sub commands to handle
> Validation (run, list ...).
> The CLI validation is taking several parameters as an entry point and in
> particular the stack/plan, Openstack authentication and static inventory
> file.
>
> By asking the stack/plan name, the CLI is trying to verify and understand
> if the plan or the stack is valid, if the Overcloud exists somewhere in the
> cloud before passing that to the tripleo-ansible-inventory script and
> trying to generate a static inventory file in regard to what --plan or
> stack has been passed.
>
Sorry if a silly question, but can't we just make 'validate the stack status' one of the validations? In fact you already have something like that here: https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/roles/stack_health/tasks/main.yml#L2
Then only this validation will require the stack name passed in instead of on every validation run. BTW as an aside we should probably remove 'plan' from that code altogether given the recent 'remove swift and overcloud plan' work from ramishra/cloudnull and co @ https://review.opendev.org/q/topic:%22env_merging%22+(status:open%20OR%20status:merged) > The code is mainly here: [1]. > > This behavior implies several constraints: > * Validation CLI needs Openstack authentication in order to do those checks > * It introduces some complexity in the Validation code part: querying Heat > to get the plan name to be sure the name provided is correct, get the > status of the stack... In case of Standalone deployment, it adds more > complexity then. > * This code is only valid for "standard" deployments and usage meaning it > doesn't work for Standalone, for some Upgrade and FFU stages and needs to > be bypassed for pre-undercloud deployment. > * We hit several blockers around this part of code. > > My proposal is the following: > > Since we are thinking of the future of Validation and we want something > more robust, stronger, simpler, usable and efficient, I propose to get rid > of the plan/stack and authentication functionalities in the Validation > code, and only ask for a valid inventory provided by the user. > I propose as well to create a new entry point in the TripleO CLI to > generate a static inventory such as: > openstack tripleo inventory generate --output-file my-inv.yaml > and then: > openstack tripleo validator run --validation my-validation --inventory > my-inv.yaml > > By doing that, I think we gain a lot in simplification, it's more robust, > and Validation will only do what it aims for: wrapp Ansible execution to > provide better logging information and history. > > The main concerns about this approach is that the user will have to > provide a valid inventory to the Validation CLI. > I understand the point of view of getting something fully autonomous, and > the way of just kicking *one* command and the Validation can be *magically* > executed against your cloud, but I think the less complex the Validation > code is, the more robust, stable and usable it will be. > > Deferring a specific entry point for the inventory, which is a key part of > post deployment action, seems something more clear and robust as well. > This part of code could be shared and used for any other usages instead of > calling the inventory script stored into tripleo-validations. It could then > use the tripleo-common inventory library directly with tripleoclient, > instead of calling from client -> tripleo-validations/scripts -> query > tripleo-common inventory library. > > I know it changes a little bit the usage (adding one command line in the > execution process for getting a valid inventory) but it's going in a less > buggy and racy direction. > And the inventory should be generated only once, or at least at any big > major cloud change. > > So, I'm glad to get your thoughts on that topic and your overall views > around this topic. > The proposal sounds sane to me, but just to be clear by "authentication functionalities" are you referring specifically to the '--ssh-user' argument ( https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/tripleo_validations/tripleo_validator.py#L243)? i.e. we will already have that in the generated static inventory so no need to have in on the CLI? 
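For illustration, the kind of static inventory entry I mean would look roughly like this (a sketch only; the group and variable names here are assumptions, not the exact tripleo-ansible-inventory output):

Controller:
  hosts:
    overcloud-controller-0:
      ansible_host: 192.168.24.10
  vars:
    ansible_ssh_user: heat-admin
Undercloud:
  hosts:
    undercloud:
      ansible_host: localhost
      ansible_connection: local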
If the only cost is that we have to have an extra step for generating the inventory then IMO it is worth doing. I would however be interested to hear from those that are objecting to the proposal about why it is a bad idea ;) since you said there has been a divergence in opinions over the design

regards, marios

> > Thanks,
> Mathieu
>
> [1]
> https://github.com/openstack/tripleo-validations/blob/master/tripleo_validations/tripleo_validator.py#L338-L382
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com Wed Mar 3 12:52:40 2021
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 03 Mar 2021 12:52:40 +0000
Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job?
In-Reply-To: 
References: 
Message-ID: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com>

On Wed, 2021-03-03 at 10:46 +0000, Lee Yarwood wrote:
> Hello all,
>
> I recently landed a fix in Cirros [1] to allow it to be used by the
> nova-next job when it is configured to use the q35 machine type [2],
> something that we would like to make the default sometime in the
> future.
>
> With no release of Cirros on the horizon I was wondering if I could
> instead host a build of the image somewhere for use solely by the
> nova-next job?
>
> If this isn't possible I'll go back to the Cirros folks upstream and
> ask for a release but given my change is the only one to land there in
> almost a year I wanted to ask about this approach first.

I am not sure where the best location would be, but when I have thought about it in the past I came up with a few options that may or may not work.

I know that the nodepool images are hosted publicly but I can never remember where. If we wrapped the Cirros build in a DIB element, that might be an approach we could take: build it with a nodepool build, then mirror that to the clouds and use it.

The other idea I had in the past was to put the image on tarballs.openstack.org; we have the Ironic IPA image there already, so if we did a one-off build we could maybe see if infra were OK with hosting it there and then mirroring it to our cloud providers to avoid the need to pull it.

The other option might be to embed it in the nodepool images. While not all jobs will need it, most will, so if we had a static copy of it we could inject it into the image in the devstack data dir or another well-known cache dir in the image and just have devstack use that instead of downloading it.

> 
> Many thanks in advance,
>
> Lee
>
> [1] https://github.com/cirros-dev/cirros/pull/65
> [2] https://review.opendev.org/c/openstack/nova/+/708701
>
>
From smooney at redhat.com Wed Mar 3 13:05:25 2021
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 03 Mar 2021 13:05:25 +0000
Subject: Train/SR-IOV direct port mode live-migration - question
In-Reply-To: <78875ebc-6556-c63b-66a8-68850b748785@gmail.com>
References: <78875ebc-6556-c63b-66a8-68850b748785@gmail.com>
Message-ID: 

On Wed, 2021-03-03 at 14:03 +0200, Heib, Mohammad (Nokia - IL/Kfar Sava) wrote:
> Hi All,
>
> we recently moved to openstack Train and according to the version
> release notes the SR-IOV direct port live-migration support was added to
> this version.
>
> so we tried to do some live-migration to vm with two SR-IOV direct ports
> attached to bond interface on the vm (no vSwitch interface or a indirect
> only direct ports) but seems that we have an issue with maintaining the
> network

Yes, that is expected.
Live migration with direct mode SR-IOV involves hot-unplugging the interface before the migration and hot-plugging it after. NIC vendors do not actually support SR-IOV migration, so we developed a workaround for that hardware limitation by using PCIe hotplug.

> connectivity during and after the live-migration, according to [1] i see
> that in order to maintain network connectivity during the migration we
> have to add one indirect or vswitch port to the bond and this interface
> will carry the traffic during the migration and once the migration
> completed the direct port will back to be the primary slave on the
> target VM bond.

Yes, this is correct.

> my question is if we don't add the indirect port we will lose the
> connectivity during the migration which makes sense to me,
>
> but why the connectivity does not back after the migration completed
> successfully, and why the bond master boot up with no slaves on the
> target VM?

That sounds like the networking setup in your guest is not correctly detecting the change, e.g. you are missing a udev rule, or your network manager or systemd-networkd configuration only runs on first boot but not when an interface is added/removed.

> according to some documents and blueprints that I found in google
> (including [2],[3]) the guest os will receive a virtual hotplug add even
> and the bond master will enslave those devices and connectivity will
> back which is not the case here.
> so I wondering maybe I need to add some scripts to handle those events
> (if so where to add them) or some network flags to the ifcfg-* files or
> i need to use a specific guest os?

You would need to, but I do not have files for this, unfortunately.

> [1] : https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/libvirt-neutron-sriov-livemigration.html
> [2] : https://www.researchgate.net/publication/228722278_Live_migration_with_pass-through_device_for_Linux_VM
> [3] : https://openstack.nimeyo.com/72653/openstack-nova-neutron-live-migration-with-direct-passthru
>
> *bond master ifcfg file:*
>
> DEVICE=bond1
> BONDING_OPTS=mode=active-backup
> HOTPLUG=yes
> TYPE=Bond
> BONDING_MASTER=yes
> BOOTPROTO=none
> NAME=bond1
> ONBOOT=yes
>
> *slaves ifcfg files:*
>
> TYPE=Ethernet
> DEVICE=eth0
> HOTPLUG=no

Try setting this to yes; I suspect you need to mark all the slave interfaces as hotpluggable since they are what will be hotplugged. You might also want to set ONBOOT, but HOTPLUG seems more relevant.

> ONBOOT=no
> MASTER=bond1
> SLAVE=yes
> BOOTPROTO=none
> NM_CONTROLLED=no
>
> *guest OS:*
>
> CentOS Linux release 7.7.1908 (Core)
>
> */Thanks in advance for any help :)/,*
>
>
From mbultel at redhat.com Wed Mar 3 13:13:38 2021
From: mbultel at redhat.com (Mathieu Bultel)
Date: Wed, 3 Mar 2021 14:13:38 +0100
Subject: [TripleO][Validation] Validation CLI simplification
In-Reply-To: 
References: 
Message-ID: 

Thank you Marios for the response.

On Wed, Mar 3, 2021 at 1:45 PM Marios Andreou wrote:
>
> On Wed, Mar 3, 2021 at 12:51 PM Mathieu Bultel wrote:
>
>> Hi TripleO Folks,
>>
>> I'm raising this topic to the ML because it appears we have some
>> divergence regarding some design around the way the Validations should be
>> used with and without TripleO and I wanted to have a larger audience, in
>> particular PTL and core thoughts around this topic.
>>
>> The current situation is:
>> We have an openstack tripleo validator set of sub commands to handle
>> Validation (run, list ...).
>> The CLI validation is taking several parameters as an entry point and in >> particular the stack/plan, Openstack authentication and static inventory >> file. >> >> By asking the stack/plan name, the CLI is trying to verify and understand >> if the plan or the stack is valid, if the Overcloud exists somewhere in the >> cloud before passing that to the tripleo-ansible-inventory script and >> trying to generate a static inventory file in regard to what --plan or >> stack has been passed. >> > > > Sorry if silly question but, can't we just make 'validate the stack > status' as one of the validations? In fact you already have something like > that there > https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/roles/stack_health/tasks/main.yml#L2 > . Then only this validation will require the stack name passed in instead > of on every validation run. > > BTW as an aside we should probably remove 'plan' from that code altogether > given the recent 'remove swift and overcloud plan' work from > ramishra/cloudnull and co @ > https://review.opendev.org/q/topic:%22env_merging%22+(status:open%20OR%20status:merged) > > Not exactly, the main goal of checking if the --stack/plan value is correct and if the stack provided exists and is right is for getting a valid Ansible inventory to execute the Validations. Meaning that all the extra checks in the code in [1] is made for generating the inventory, which imho should not belong to the Validation CLI but to something else, and Validation should consider that the --inventory file is correct (because of the reasons mentioned earlier). > > >> The code is mainly here: [1]. >> >> This behavior implies several constraints: >> * Validation CLI needs Openstack authentication in order to do those >> checks >> * It introduces some complexity in the Validation code part: querying >> Heat to get the plan name to be sure the name provided is correct, get the >> status of the stack... In case of Standalone deployment, it adds more >> complexity then. >> * This code is only valid for "standard" deployments and usage meaning it >> doesn't work for Standalone, for some Upgrade and FFU stages and needs to >> be bypassed for pre-undercloud deployment. >> * We hit several blockers around this part of code. >> >> My proposal is the following: >> >> Since we are thinking of the future of Validation and we want something >> more robust, stronger, simpler, usable and efficient, I propose to get rid >> of the plan/stack and authentication functionalities in the Validation >> code, and only ask for a valid inventory provided by the user. >> I propose as well to create a new entry point in the TripleO CLI to >> generate a static inventory such as: >> openstack tripleo inventory generate --output-file my-inv.yaml >> and then: >> openstack tripleo validator run --validation my-validation --inventory >> my-inv.yaml >> >> By doing that, I think we gain a lot in simplification, it's more robust, >> and Validation will only do what it aims for: wrapp Ansible execution to >> provide better logging information and history. >> >> The main concerns about this approach is that the user will have to >> provide a valid inventory to the Validation CLI. >> I understand the point of view of getting something fully autonomous, and >> the way of just kicking *one* command and the Validation can be *magically* >> executed against your cloud, but I think the less complex the Validation >> code is, the more robust, stable and usable it will be. 
>> >> Deferring a specific entry point for the inventory, which is a key part >> of post deployment action, seems something more clear and robust as well. >> This part of code could be shared and used for any other usages instead >> of calling the inventory script stored into tripleo-validations. It could >> then use the tripleo-common inventory library directly with tripleoclient, >> instead of calling from client -> tripleo-validations/scripts -> query >> tripleo-common inventory library. >> >> I know it changes a little bit the usage (adding one command line in the >> execution process for getting a valid inventory) but it's going in a less >> buggy and racy direction. >> And the inventory should be generated only once, or at least at any big >> major cloud change. >> >> So, I'm glad to get your thoughts on that topic and your overall views >> around this topic. >> > > > The proposal sounds sane to me, but just to be clear by "authentication > functionalities" are you referring specifically to the '--ssh-user' > argument ( > https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/tripleo_validations/tripleo_validator.py#L243)? > i.e. we will already have that in the generated static inventory so no need > to have in on the CLI? > The authentication in the current CLI is only needed to get the stack output in order to generate the inventory. If the user provides his inventory, no authentication, no heat stack check and then Validation can run everywhere on every stage of a TripleO deployment (early without any TripleO bits, or in the middle of LEAP Upgrade for example). > > If the only cost is that we have to have an extra step for generating the > inventory then IMO it is worth doing. I would however be interested to hear > from those that are objecting to the proposal about why it is a bad idea ;) > since you said there has been a divergence in opinions over the design > > regards, marios > > > >> >> Thanks, >> Mathieu >> >> [1] >> https://github.com/openstack/tripleo-validations/blob/master/tripleo_validations/tripleo_validator.py#L338-L382 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Mar 3 13:22:31 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 03 Mar 2021 13:22:31 +0000 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <5fc0ff7968480dd70d3dd14cfc4f9ebea0c2529d.camel@redhat.com> On Wed, 2021-03-03 at 23:50 +1300, Lingxian Kong wrote: > On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney wrote: > > > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > > Hi, > > > > > > Thanks for all your hard work on this. > > > > > > I'm wondering is there any doc proposed for devstack to tell people who > > are > > > not interested in OVN to keep the current devstack behaviour? I have a > > > feeling that using OVN as default Neutron driver would break the CI jobs > > > for some projects like Octavia, Trove, etc. which rely on ovs port for > > the > > > set up. > > > > well ovn is just an alternivie contoler for ovs. > > so ovn replace the neutron l2 agent it does not replace ovs. > > project like octavia or trove that deploy loadblances or dbs in vms should > > not be able to observe a difference. 
> > they may still want to deploy ml2/ovs but unless they are doing something
> > directly on the host like adding port
> > directly to ovs because they are not using vms they should not be aware of
> > this change.
>
> Yes, they are.
>
> Please see
> https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 as
> an example for Octavia.

Right, but Octavia really should not be doing that. I mean, it will still work in the sense that OVN should be able to correlate that port to this host if you have correctly bound the port to the host, but adding internal ports to br-int to provide network connectivity seems like a hack. ML2/OVN still calls the main OVS bridge br-int, so the port add will work as it did before. The br-int bridge is owned by Neutron and operators are normally not allowed to add flows or interfaces to it, so in a real deployment I would hope they would not be adding this port and assigning a MAC or IP rules like this. Connectivity should be provided via br-ex / the physical network using provider networking, right?

In any case, you can swap back to ML2/OVS for the Octavia jobs. However, if you have correctly bound the management port to the current host, then I think OVN should be able to install the correct OpenFlow rules to allow it to function, so it may be as simple as extending the elif to work with OVN. The port create at https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L458 does seem to set the host properly. It is still kind of troubling to me to see something like this being done outside of an OpenStack agent on the host; for example, I would have expected an ML2 Open vSwitch L2 agent extension to create the port via os-vif or similar in response to the API action instead of doing it manually.

> ---
> Lingxian Kong
> Senior Cloud Engineer (Catalyst Cloud)
> Trove PTL (OpenStack)
> OpenStack Cloud Provider Co-Lead (Kubernetes)

From fungi at yuggoth.org Wed Mar 3 13:41:35 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 3 Mar 2021 13:41:35 +0000
Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job?
In-Reply-To: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com>
References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com>
Message-ID: <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org>

On 2021-03-03 12:52:40 +0000 (+0000), Sean Mooney wrote:
[...]
> i know that the nodepool images are hosted publiclaly but i can
> never remberer where. if we wraped the cirror build in a dib
> element that might be an approch we coudl take. build it woith
> nodepool build then mirror that to the cloud and use it.

The images we build from Nodepool are created with diskimage-builder (does DIB support creating Cirros images?), and are uploaded to Glance in all our donor-providers (are we likely to actually boot it directly in them?).

> the other idea i had for this in the past was to put the image on
> tarballs.openstack.org we have the ironic ipa image there already
> so if we did a one of build we coud maybe see if infra were ok
> with hosting it there and the mirroing that to our cloud providres
> to avoid the need to pull it.

The files hosted on the tarballs site (including the IPA images) are built by Zuul jobs. If you can figure out what project that job should reside in and a cadence to run it, that might be an option.

> the other option might be to embed it in the nodepool images.
> while not all jobs will need it most will so if we had a static > copy of it we could inject it into the image in the devstack data > dir or anogher whle know cache dir in the image and just have > devstack use that instead of downloading it. We already do that for official Cirros images: https://opendev.org/openstack/project-config/src/branch/master/nodepool/elements/cache-devstack/source-repository-images If you look in the build logs of a DevStack-based job, you'll see it check, e.g., /opt/stack/devstack/files/cirros-0.4.0-x86_64-disk.img and then not wget that image over the network because it exists on the filesystem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From balazs.gibizer at est.tech Wed Mar 3 14:08:22 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Wed, 03 Mar 2021 15:08:22 +0100 Subject: [election][nova] PTL Candidacy for Wallaby Message-ID: Hi, I would like to continue serving as the Nova PTL in the Xena cycle. In Wallaby I hope I helped to keep Nova alive and kicking and I intend to continue what I have started. My employer is supportive to continue spending time on PTL and core duties. If I am elected then I will try to focus on the following areas: * Keeping the Nova bug backlog under control * Working on changes that help the maintainability and stability of Nova * Finishing up things that are already ongoing before starting new items Also, I'd like to note that if elected then Xena will be my 3rd consecutive term as Nova PTL so I will spend some time during the cycle to find my successor for the Y cycle. Cheers, gibi From marios at redhat.com Wed Mar 3 16:36:47 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 3 Mar 2021 18:36:47 +0200 Subject: [election][TripleO] PTL candidacy for Xena Message-ID: (nomination posted to https://review.opendev.org/c/openstack/election/+/778500) Hello I would like to continue serving as PTL for TripleO during the Xena cycle. Wallaby was my first PTL experience and it has been challenging and rewarding. I've learnt a lot about release processes and other repo related admin (like moving projects to the Independent release cycle). As I said in the W candidacy nomination [1], nowadays TripleO works in self-contained squads that drive technical decisions. We've had a lot of work across these squads (as always) during W with progress in a number of areas including moving port/network creation outside of Heat (ports v2), tripleo-ceph/tripleo-ceph-client and switching out ceph-ansible for cephadm, removing swift and the deployment plan, BGP support with frrouter and using ephemeral Heat for the overcloud deployment. I'd like to continue what i started in W, which is, to increase socialisation across tripleo squads and try to reign in the multitude of repos that we have created (and some now abandoned) over the years. After a well attended Wallaby PTG we have reinstated our traditional IRC meetings in #tripleo. I've started to "tidy up" our repos, moving some of the older and no longer used projects to 'independent' (os-refresh-config and friends, tripleo-ipsec) and started the process to mark older branches as EOL, starting with Rocky. 
There is a lot more to do here and I will be happy to do it if you give me the opportunity, thanks for your consideration, marios [1] https://opendev.org/openstack/election/raw/branch/master/candidates/wallaby/Tripleo/marios%40redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Wed Mar 3 17:33:59 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 3 Mar 2021 17:33:59 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> Message-ID: On Wed, 3 Mar 2021 at 13:45, Jeremy Stanley wrote: > > On 2021-03-03 12:52:40 +0000 (+0000), Sean Mooney wrote: > [...] > > i know that the nodepool images are hosted publiclaly but i can > > never remberer where. if we wraped the cirror build in a dib > > element that might be an approch we coudl take. build it woith > > nodepool build then mirror that to the cloud and use it. > > The images we build from Nodepool 1. are created with > diskimage-builder (does DIB support creating Cirros images?), and > are uploaded to Glance in all our donor-providers (are we likely to > actually boot it directly in them?). > > > the other idea i had for this in the past was to put the image on > > tarballs.openstack.org we have the ironic ipa image there already > > so if we did a one of build we coud maybe see if infra were ok > > with hosting it there and the mirroing that to our cloud providres > > to avoid the need to pull it. > > The files hosted on the tarballs site (including the IPA images) are > built by Zuul jobs. If you can figure out what project that job > should reside in and a cadence to run it, that might be an option. > > > the other option might be to embed it in the nodepool images. > > while not all jobs will need it most will so if we had a static > > copy of it we could inject it into the image in the devstack data > > dir or anogher whle know cache dir in the image and just have > > devstack use that instead of downloading it. > > We already do that for official Cirros images: > > https://opendev.org/openstack/project-config/src/branch/master/nodepool/elements/cache-devstack/source-repository-images > > If you look in the build logs of a DevStack-based job, you'll see it > check, e.g., /opt/stack/devstack/files/cirros-0.4.0-x86_64-disk.img > and then not wget that image over the network because it exists on > the filesystem. Thanks both, So this final option looks like the most promising. It would not however involve any independent build steps in nodepool or zuul. If infra are okay with this I can host the dev build I have with my change applied and post a change to cache it alongside the official images. 
Thanks again, Lee From skaplons at redhat.com Wed Mar 3 18:24:19 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 03 Mar 2021 19:24:19 +0100 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <5567179.ztC2WF7GR3@p1> Hi, Dnia środa, 3 marca 2021 11:32:11 CET Lucas Alvares Gomes pisze: > On Mon, Mar 1, 2021 at 6:25 PM Sean Mooney wrote: > > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > > > Hi all, > > > > > > As part of the Victoria PTG [0] the Neutron community agreed upon > > > switching the default backend in Devstack to OVN. A lot of work has > > > been done since, from porting the OVN devstack module to the DevStack > > > tree, refactoring the DevStack module to install OVN from distro > > > packages, implementing features to close the parity gap with ML2/OVS, > > > fixing issues with tests and distros, etc... > > > > > > We are now very close to being able to make the switch and we've > > > thought about sending this email to the broader community to raise > > > awareness about this change as well as bring more attention to the > > > patches that are current on review. > > > > > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > > > discontinued and/or not supported anymore. The ML2/OVS driver is still > > > going to be developed and maintained by the upstream Neutron > > > community. > > > > can we ensure that this does not happen until the xena release. > > in generall i think its ok to change the default but not this late in the > > cycle. i would also like to ensure we keep at least one non ovn based > > multi node job in nova until > > https://review.opendev.org/c/openstack/nova/+/602432 is merged and > > possible after. right now the event/neutorn interaction is not the same > > during move operations. > I think it's fair to wait for the new release cycle to start since we > are just a few weeks away and then we can flip the default in > DevStack. I will state this in the last DevStack patch and set the > workflow -1 until then. That said, I also think that other patches > could be merged before that, those are just adapting a few scripts to > work with ML2/OVN and enabling ML2/OVS explicitly where it makes > sense. That way, when time comes, we will just need to merge the > DevStack patch. +1. Let's do that very early in Xena cycle :) > > > > Below is a e per project explanation with relevant links and issues of > > > where we stand with this work right now: > > > > > > * Keystone: > > > > > > Everything should be good for Keystone, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/keystone/+/777963 > > > > > > * Glance: > > > > > > Everything should be good for Glace, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/glance/+/748390 > > > > > > * Swift: > > > > > > Everything should be good for Swift, the gate is happy with the > > > changes. Here is the test patch: > > > https://review.opendev.org/c/openstack/swift/+/748403 > > > > > > * Ironic: > > > > > > Since chainloading iPXE by the OVN built-in DHCP server is work in > > > progress, we've changed most of the Ironic jobs to explicitly enable > > > ML2/OVS and everything is merged, so we should be good for Ironic too. 
> > > Here is the test patch: > > > https://review.opendev.org/c/openstack/ironic/+/748405 > > > > > > * Cinder: > > > > > > Cinder is almost complete. There's one test failure in the > > > "tempest-slow-py3" job run on the > > > "test_port_security_macspoofing_port" test. > > > > > > This failure is due to a bug in core OVN [1]. This bug has already > > > been fixed upstream [2] and the fix has been backported down to the > > > branch-20.03 [3] of the OVN project. However, since we install OVN > > > from packages we are currently waiting for this fix to be included in > > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > > > contacted the package maintainer which has been very supportive of > > > this work and will work on the package update, but he maintain a > > > handful of backports in that package which is not yet included in OVN > > > 20.03 upstream and he's now working with the core OVN community [4] to > > > include it first in the branch and then create a new package for it. > > > Hopefully this will happen soon. > > > > > > But for now we have a few options moving on with this issue: > > > > > > 1- Wait for the new package version > > > 2- Mark the test as unstable until we get the new package version > > > 3- Compile OVN from source instead of installing it from packages > > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > > > > i dont think we should default to ovn untill a souce build is not > > required. > > compiling form souce while not supper expensice still adds time to the job > > execution and im not sure we should be paying that cost on every devstack > > job run. > > > > we could maybe compile it once and bake the package into the image or host > > it on a mirror but i think we should avoid this option if we have > > alternitives. > > Since this patch > https://review.opendev.org/c/openstack/devstack/+/763402 we no longer > default to compiling OVN from source anymore, it's installed using the > distro packages now. > > Yeah the alternatives are not straight forward, I was talking to some > core OVN folks yesterday regarding the backports proposed by Canonical > to the 20.03 branch and they seem to be fine with it, it needs more > reviews since there are around ~20 patches being backported there. But > I hope they are going to be looking into it and we should get a new > OVN package for Ubuntu Focal soon. > > > > What do you think about it ? > > > > > > Here is the test patch for Cinder: > > > https://review.opendev.org/c/openstack/cinder/+/748227 > > > > > > * Nova: > > > > > > There are a few patches waiting for review for Nova, which are: > > > > > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically > > > the scripts were trying to stop the Neutron agent (q-agt) process > > > which is not part of an ML2/OVN deployment. The patch changes the code > > > to check if that system unit exists before trying to stop it. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > > > > > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > > > which can be removed one release cycle after we switch DevStack to > > > ML2/OVN. Grenade will test updating from the release version to the > > > master branch but, since the default of the released version is not > > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > > > is not supported. 
> > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > > > > > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] > > > therefore we are temporarily enabling ML2/OVS for this job until that > > > feature lands in core OVN. > > > > > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > > > > > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these > > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > > > because he feels like a change in the default network driver a few > > > weeks prior to the upstream code freeze can be concerning. We do not > > > know yet precisely when we are changing the default due to the current > > > patches we need to get merged but, if this is a shared feeling among > > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > > > until we get a new release in OpenStack. > > > > yep this is still my view. > > i would suggest we do the work required in the repos but not merge it > > until the xena release is open. thats technically at RC1 so march 25th > > i think we can safely do the swich after that but i would not change the > > defualt in any project before then. > > Yeah, no problem waiting from my side, but hoping we can keep > reviewing the rest of the patches until then. > > > > Here's the test patch for Nova: > > > https://review.opendev.org/c/openstack/nova/+/776945 > > > > > > * DevStack: > > > > > > And this is the final patch that will make this all happen: > > > https://review.opendev.org/c/openstack/devstack/+/735097 > > > > > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > > > a long and bumpy road to get to this point and I would like to say > > > thanks to everyone involved so far and everyone that read the whole > > > email, please let me know your thoughts. > > > > thanks for working on this. > > Thanks for the inputs > > > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > > > [2] > > > https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.4 > > > 73776-1-numans at ovn.org/ [3] > > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab > > > 39380cb [4] > > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961 > > > .html [5] > > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99f > > > e76a9ee1/gate/post_test_hook.sh#L129-L143 [6] > > > https://docs.openstack.org/neutron/latest/ovn/gaps.html > > > > > > Cheers, > > > Lucas -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Wed Mar 3 18:25:56 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 03 Mar 2021 19:25:56 +0100 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <25317159.dUgV3lgUje@p1> Hi, Dnia środa, 3 marca 2021 11:50:59 CET Lingxian Kong pisze: > On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney wrote: > > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > > Hi, > > > > > > Thanks for all your hard work on this. 
> > > > > > I'm wondering is there any doc proposed for devstack to tell people who > > > > are > > > > > not interested in OVN to keep the current devstack behaviour? I have a > > > feeling that using OVN as default Neutron driver would break the CI jobs > > > for some projects like Octavia, Trove, etc. which rely on ovs port for > > > > the > > > > > set up. > > > > well ovn is just an alternivie contoler for ovs. > > so ovn replace the neutron l2 agent it does not replace ovs. > > project like octavia or trove that deploy loadblances or dbs in vms should > > not be able to observe a difference. > > they may still want to deploy ml2/ovs but unless they are doing something > > directly on the host like adding port > > directly to ovs because they are not using vms they should not be aware of > > this change. > > Yes, they are. > > Please see > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 as > an example for Octavia. I'm really not an Octavia expert but AFAIK we are testing Octavia with ML2/OVN in TripleO jobs already as we switched default neutron backend in TripleO to be ML2/OVN somewhere around Stein cycle IIRC. > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From fungi at yuggoth.org Wed Mar 3 19:27:46 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Mar 2021 19:27:46 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> Message-ID: <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> On 2021-03-03 17:33:59 +0000 (+0000), Lee Yarwood wrote: > On Wed, 3 Mar 2021 at 13:45, Jeremy Stanley wrote: [...] > > https://opendev.org/openstack/project-config/src/branch/master/nodepool/elements/cache-devstack/source-repository-images > > > > If you look in the build logs of a DevStack-based job, you'll see it > > check, e.g., /opt/stack/devstack/files/cirros-0.4.0-x86_64-disk.img > > and then not wget that image over the network because it exists on > > the filesystem. > > Thanks both, > > So this final option looks like the most promising. It would not > however involve any independent build steps in nodepool or zuul. If > infra are okay with this I can host the dev build I have with my > change applied and post a change to cache it alongside the official > images. Yeah, I don't see that being much different from hitting download.cirros-cloud.net during out image builds. If you're forking or otherwise making unofficial builds of Cirros though, you might want to name the files something which makes that clear so people seeing it in job build logs know it's not an official Cirros provided version. I believe our Nodepool builders also cache such downloads locally, so you'll probably see them downloaded once each from handful of builders and, not again until the next time you update the filename in that DIB element. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From narjes.bessghaier.1 at ens.etsmtl.ca Wed Mar 3 15:14:25 2021 From: narjes.bessghaier.1 at ens.etsmtl.ca (Bessghaier, Narjes) Date: Wed, 3 Mar 2021 15:14:25 +0000 Subject: Puppet openstack modules In-Reply-To: References: , Message-ID: Thank you for the clarification. Get Outlook for Android ________________________________ From: Takashi Kajinami Sent: Saturday, February 27, 2021 10:06:58 AM To: Bessghaier, Narjes Cc: openstack-discuss at lists.openstack.org Subject: Re: Puppet openstack modules > 1 is supposed to be used only for testing during deployment Sorry I meant to say 1 is supposed to be used only for testing during "development" On Sun, Feb 28, 2021 at 12:06 AM Takashi Kajinami > wrote: Ruby codes in puppet-openstack repos are used for the following three purposes. 1. unit tests and acceptance tests using serverspec framework (files placed under spec) 2. implementation of custom type, provider, and function 3. template files (We use ERB instead of pure Ruby about this, though) 1 is supposed to be used only for testing during deployment but 2 and 3 can be used in any production use case in combination with puppet manifest files to manage OpenStack deployments. On Sat, Feb 27, 2021 at 5:01 AM Bessghaier, Narjes > wrote: Dear OpenStack team, My name is Narjes and I'm a PhD student at the University of Montréal, Canada. My current work consists of analyzing code reviews on the puppet modules. I would like to precisely know what the ruby files are used for in the puppet modules. As mentioned in the official website, most of unit test are written in ruby. Are ruby files destined to carry out units tests or destined for production code. I appreciate your help, Thank you -- ---------- Takashi Kajinami Senior Software Maintenance Engineer Customer Experience and Engagement Red Hat email: tkajinam at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ankit at aptira.com Wed Mar 3 18:25:25 2021 From: ankit at aptira.com (Ankit Goel) Date: Wed, 3 Mar 2021 18:25:25 +0000 Subject: Need help to get rally test suite Message-ID: Hello Experts, Need some help on rally. I have installed Openstack rally on centos. Now I need to run some benchmarking test suite. I need to know is there any pre-existing test suite which covers majority of test cases for Openstack control plane. So that I can just run that test suite. I remember earlier we use to get the samples under rally/samples/tasks/scenarios for each component but now I am not seeing it but I am seeing some dummy files. So from where I can get those samples json/yaml files which is required to run the rally tests. Awaiting for your response. Thanks in Advance Regards, Ankit Goel -------------- next part -------------- An HTML attachment was scrubbed... URL: From haleyb.dev at gmail.com Wed Mar 3 19:29:27 2021 From: haleyb.dev at gmail.com (Brian Haley) Date: Wed, 3 Mar 2021 14:29:27 -0500 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <6572eb3b-3cd5-a3d7-10ac-eb8b8ef34bf3@gmail.com> On 3/3/21 5:50 AM, Lingxian Kong wrote: > On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney > wrote: > > On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: > > Hi, > > > > Thanks for all your hard work on this. 
> > > > I'm wondering is there any doc proposed for devstack to tell > people who are > > not interested in OVN to keep the current devstack behaviour? I > have a > > feeling that using OVN as default Neutron driver would break the > CI jobs > > for some projects like Octavia, Trove, etc. which rely on ovs > port for the > > set up. > > well ovn is just an alternivie contoler for ovs. > so ovn replace the neutron l2 agent it does not replace ovs. > project like octavia or trove that deploy loadblances or dbs in vms > should not be able to observe a difference. > they may still want to deploy ml2/ovs but unless they are doing > something directly on the host like adding port > directly to ovs because they are not using vms they should not be > aware of this change. > > > Yes, they are. > > Please see > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 > as an example for Octavia. The code to do this configuration was all migrated to the neutron repository, with the last bit of cleanup for the Octavia code you highlighted here: https://review.opendev.org/c/openstack/octavia/+/718192 It just needs a final push to get it merged. -Brian From peter.matulis at canonical.com Wed Mar 3 19:36:01 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Wed, 3 Mar 2021 14:36:01 -0500 Subject: [docs] Project guides in PDF format Message-ID: Hello, I understand that there was a push [1] to make PDFs available for download from the published pages on docs.openstack.org for the various guides (Sphinx projects). Is there any steam left in this initiative? The next best thing is to have something at the repository level that people can work with. Currently, there are a limited number of guides that can be converted to PDF from the openstack-manuals repository. What is the suggested implementation for individual projects? Thanks, Peter Matulis [1]: https://etherpad.opendev.org/p/train-pdf-support-goal -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Mar 3 19:45:26 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Mar 2021 19:45:26 +0000 Subject: [docs] Project guides in PDF format In-Reply-To: References: Message-ID: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> On 2021-03-03 14:36:01 -0500 (-0500), Peter Matulis wrote: > I understand that there was a push [1] to make PDFs available for download > from the published pages on docs.openstack.org for the various guides > (Sphinx projects). Is there any steam left in this initiative? > > The next best thing is to have something at the repository level that > people can work with. Currently, there are a limited number of guides that > can be converted to PDF from the openstack-manuals repository. What is the > suggested implementation for individual projects? > > [1]: https://etherpad.opendev.org/p/train-pdf-support-goal I may be misunderstanding, but can you elaborate on what's missing which you expected to find? There are, for example, PDFs like: https://docs.openstack.org/nova/latest/doc-nova.pdf Is it that we're not building those for some projects yet which you want added, or that there are other non-project-specific documents which need similar treatment, or simply that we're not doing enough to make the existence of the PDF versions discoverable? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From peter.matulis at canonical.com Wed Mar 3 20:05:10 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Wed, 3 Mar 2021 15:05:10 -0500 Subject: [docs] Project guides in PDF format In-Reply-To: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> Message-ID: On Wed, Mar 3, 2021 at 2:47 PM Jeremy Stanley wrote: > > I may be misunderstanding, but can you elaborate on what's missing > which you expected to find? There are, for example, PDFs like: > > https://docs.openstack.org/nova/latest/doc-nova.pdf > > Is it that we're not building those for some projects yet which you > want added, or that there are other non-project-specific documents > which need similar treatment, or simply that we're not doing enough > to make the existence of the PDF versions discoverable? > How do I get a download PDF link like what is available in the published pages of the Nova project? Where is that documented? In short, yes, I am interested in having downloadable PDFs for the projects that I maintain: https://opendev.org/openstack/charm-guide https://opendev.org/openstack/charm-deployment-guide -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Mar 3 20:30:27 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Mar 2021 20:30:27 +0000 Subject: [docs] Project guides in PDF format In-Reply-To: References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> Message-ID: <20210303203027.c3pgopms57zf4ehk@yuggoth.org> On 2021-03-03 15:05:10 -0500 (-0500), Peter Matulis wrote: [...] > How do I get a download PDF link like what is available in the > published pages of the Nova project? Where is that documented? > > In short, yes, I am interested in having downloadable PDFs for the > projects that I maintain: > > https://opendev.org/openstack/charm-guide > https://opendev.org/openstack/charm-deployment-guide The official goal document is still available here: https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html Some technical detail can also be found in the earlier docs spec: https://specs.openstack.org/openstack/docs-specs/specs/ocata/build-pdf-from-rst-guides.html A bit of spelunking in Git history turns up, for example, this change implementing PDF generation for openstack/ironic (you can find plenty more if you hunt): https://review.opendev.org/680585 I expect you would just do something similar to that. If memory serves (it's been a couple years now), each project hit slightly different challenges as no two bodies of documentation are every quite the same. You'll likely have to dig deep occasionally in Sphinx and LaTeX examples to iron things out. One thing which would have been nice as an output of that cycle goal was if the PTI section for documentation was updated with related technical guidance on building PDFs, but it's rather lacking in that department still: https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation If you can come up with a succinct summary for what's needed, I expect adding it there would be really useful to others too. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tpb at dyncloud.net Wed Mar 3 20:52:28 2021 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 3 Mar 2021 15:52:28 -0500 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: <6572eb3b-3cd5-a3d7-10ac-eb8b8ef34bf3@gmail.com> References: <6572eb3b-3cd5-a3d7-10ac-eb8b8ef34bf3@gmail.com> Message-ID: <20210303205228.c6c6dvraxdgoo6ba@barron.net> On 03/03/21 14:29 -0500, Brian Haley wrote: >On 3/3/21 5:50 AM, Lingxian Kong wrote: >>On Wed, Mar 3, 2021 at 5:15 PM Sean Mooney >> wrote: >> >> On Wed, 2021-03-03 at 15:59 +1300, Lingxian Kong wrote: >> > Hi, >> > >> > Thanks for all your hard work on this. >> > >> > I'm wondering is there any doc proposed for devstack to tell >> people who are >> > not interested in OVN to keep the current devstack behaviour? I >> have a >> > feeling that using OVN as default Neutron driver would break the >> CI jobs >> > for some projects like Octavia, Trove, etc. which rely on ovs >> port for the >> > set up. >> >> well ovn is just an alternivie contoler for ovs. >> so ovn replace the neutron l2 agent it does not replace ovs. >> project like octavia or trove that deploy loadblances or dbs in vms >> should not be able to observe a difference. >> they may still want to deploy ml2/ovs but unless they are doing >> something directly on the host like adding port >> directly to ovs because they are not using vms they should not be >> aware of this change. >> >> >>Yes, they are. >> >>Please see https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L466 >>as an example for Octavia. > >The code to do this configuration was all migrated to the neutron >repository, with the last bit of cleanup for the Octavia code you >highlighted here: > >https://review.opendev.org/c/openstack/octavia/+/718192 > >It just needs a final push to get it merged. > >-Brian > There's similar plug-into-ovs code used by Manila container and generic backend drivers [1], [2], in the manila codebase, not migrated into neutron as done for octavia. Putting my downstream hat on, I'll remark that we tested Manila with OVN and run it that way now in production but downstream these drivers are not supported so I don't think we know they will work in upstream CI. Incidentally, I have always thought and said that it seems to me to be a layer violation for Manila to manipulate the ovs switches directly. We should be calling a neutron API. This would also free us of the topology restriction of having to run the Manila share service on the node with the OVS switch in question. [1] https://github.com/openstack/manila/blob/master/manila/share/drivers/container/driver.py#L245 [2] https://github.com/openstack/manila/blob/master/manila/network/linux/ovs_lib.py#L51 -- Tom From iwienand at redhat.com Wed Mar 3 22:31:08 2021 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 4 Mar 2021 09:31:08 +1100 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> Message-ID: On Wed, Mar 03, 2021 at 07:27:46PM +0000, Jeremy Stanley wrote: > Yeah, I don't see that being much different from hitting > download.cirros-cloud.net during out image builds. 
If you're forking > or otherwise making unofficial builds of Cirros though, you might > want to name the files something which makes that clear so people > seeing it in job build logs know it's not an official Cirros > provided version. I'd personally prefer to build it in a job and publish it so that you can see what's been done. Feels like a similar situation to the RPMs we build for openafs (see ~[1]). If you put your patch somewhere, you can use a file-matcher to just rebuild/upload when something changes. I see the argument that one binary blob from upstream is about as trustworthy as any other, though. -i [1] https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/playbooks/openafs-rpm-package-build From lyarwood at redhat.com Wed Mar 3 22:31:22 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 3 Mar 2021 22:31:22 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> Message-ID: On Wed, 3 Mar 2021 at 19:33, Jeremy Stanley wrote: > > On 2021-03-03 17:33:59 +0000 (+0000), Lee Yarwood wrote: > > On Wed, 3 Mar 2021 at 13:45, Jeremy Stanley wrote: > [...] > > > https://opendev.org/openstack/project-config/src/branch/master/nodepool/elements/cache-devstack/source-repository-images > > > > > > If you look in the build logs of a DevStack-based job, you'll see it > > > check, e.g., /opt/stack/devstack/files/cirros-0.4.0-x86_64-disk.img > > > and then not wget that image over the network because it exists on > > > the filesystem. > > > > Thanks both, > > > > So this final option looks like the most promising. It would not > > however involve any independent build steps in nodepool or zuul. If > > infra are okay with this I can host the dev build I have with my > > change applied and post a change to cache it alongside the official > > images. > > Yeah, I don't see that being much different from hitting > download.cirros-cloud.net during out image builds. If you're forking > or otherwise making unofficial builds of Cirros though, you might > want to name the files something which makes that clear so people > seeing it in job build logs know it's not an official Cirros > provided version. I believe our Nodepool builders also cache such > downloads locally, so you'll probably see them downloaded once each > from handful of builders and, not again until the next time you > update the filename in that DIB element. Okay I've pushed the following for this: Add custom cirros image with ahci module enabled to cache https://review.opendev.org/c/openstack/project-config/+/778590 I've gone for cirros-0.5.1-dev-ahci-x86_64-disk.img as the image name but we can hash that out in the review. Many thanks again! Lee From emiller at genesishosting.com Wed Mar 3 22:37:30 2021 From: emiller at genesishosting.com (Eric K. Miller) Date: Wed, 3 Mar 2021 16:37:30 -0600 Subject: [kolla][kolla-ansible] distro mixing on hosts Message-ID: <046E9C0290DD9149B106B72FC9156BEA048DFF30@gmsxchsvr01.thecreation.com> Hi, We have been pretty consistent in deploying CentOS 7 on physical hardware where Kolla Ansible deploys its containers. However, we're going to be switching to Debian 10.8. 
It looks like Kolla Ansible likely won't care which distro is installed on the physical hardware, and so some nodes could have CentOS 7 and others have Debian 10.8. Long-term, we'll replace the CentOS 7 machines with Debian 10.8 and re-deploy. Kolla Container images will still be CentOS 7, but we will be recreating Kolla images with Debian and deploying those long term too. Is this supported? Any obvious issues we should be aware of? We will have the same interface names, so we will be pretty consistent in terms of the environment - but wanted to see if there was any gotchas that we didn't think of. Also, I guess I should ask if there are any specific Debian 10.8 issues we should be aware of? Has anybody else done this? :) Thanks! Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Mar 3 22:38:03 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Mar 2021 22:38:03 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> Message-ID: <20210303223802.rijk6d3xqg6aizjd@yuggoth.org> On 2021-03-04 09:31:08 +1100 (+1100), Ian Wienand wrote: [...] > I'd personally prefer to build it in a job and publish it so that you > can see what's been done. [...] Sure, all things being equal I'd also prefer a transparent automated build with a periodic refresh, but that seems like something we can make incremental improvement on. Of course, you're a more active reviewer on DevStack than I am, so I'm happy to follow your lead if it's something you feel strongly about. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From bdobreli at redhat.com Thu Mar 4 08:28:45 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 4 Mar 2021 09:28:45 +0100 Subject: [docs] Project guides in PDF format In-Reply-To: <20210303203027.c3pgopms57zf4ehk@yuggoth.org> References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> Message-ID: On 3/3/21 9:30 PM, Jeremy Stanley wrote: > On 2021-03-03 15:05:10 -0500 (-0500), Peter Matulis wrote: > [...] >> How do I get a download PDF link like what is available in the >> published pages of the Nova project? Where is that documented? On each project's documentation page, there is a "Download PDF" button, at the top, near to the "Report a Bug". >> >> In short, yes, I am interested in having downloadable PDFs for the >> projects that I maintain: >> >> https://opendev.org/openstack/charm-guide >> https://opendev.org/openstack/charm-deployment-guide > > The official goal document is still available here: > > https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html > > Some technical detail can also be found in the earlier docs spec: > > https://specs.openstack.org/openstack/docs-specs/specs/ocata/build-pdf-from-rst-guides.html > > A bit of spelunking in Git history turns up, for example, this > change implementing PDF generation for openstack/ironic (you can > find plenty more if you hunt): > > https://review.opendev.org/680585 > > I expect you would just do something similar to that. 
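In practice a change like that one usually boils down to a dedicated tox environment plus LaTeX settings in doc/source/conf.py. A rough sketch, with the caveat that the deps and builder options vary per project, so treat this as an assumption rather than a recipe:

  [testenv:pdf-docs]
  deps = {[testenv:docs]deps}
  whitelist_externals =
    make
  commands =
    sphinx-build -W -b latex doc/source doc/build/pdf
    make -C doc/build/pdf

The resulting PDF lands under doc/build/pdf/, where the standard docs job should be able to pick it up.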
If memory > serves (it's been a couple years now), each project hit slightly > different challenges as no two bodies of documentation are every > quite the same. You'll likely have to dig deep occasionally in > Sphinx and LaTeX examples to iron things out. > > One thing which would have been nice as an output of that cycle goal > was if the PTI section for documentation was updated with related > technical guidance on building PDFs, but it's rather lacking in that > department still: > > https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation > > If you can come up with a succinct summary for what's needed, I > expect adding it there would be really useful to others too. > -- Best regards, Bogdan Dobrelya, Irc #bogdando From jean-francois.taltavull at elca.ch Thu Mar 4 09:07:25 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Thu, 4 Mar 2021 09:07:25 +0000 Subject: Need help to get rally test suite In-Reply-To: References: Message-ID: Hello Ankit, Take a look here : https://opendev.org/openstack/rally-openstack/src/branch/master/samples/tasks/scenarios/ Regards, Jean-Francois From: Ankit Goel Sent: mercredi, 3 mars 2021 19:25 To: openstack-dev at lists.openstack.org Subject: Need help to get rally test suite Hello Experts, Need some help on rally. I have installed Openstack rally on centos. Now I need to run some benchmarking test suite. I need to know is there any pre-existing test suite which covers majority of test cases for Openstack control plane. So that I can just run that test suite. I remember earlier we use to get the samples under rally/samples/tasks/scenarios for each component but now I am not seeing it but I am seeing some dummy files. So from where I can get those samples json/yaml files which is required to run the rally tests. Awaiting for your response. Thanks in Advance Regards, Ankit Goel -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Mar 4 09:08:50 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 4 Mar 2021 10:08:50 +0100 Subject: =?UTF-8?Q?=5Belection=5D=5Brelease=5D_Herv=C3=A9_Beraud_candidacy_for_Rele?= =?UTF-8?Q?ase_Management_PTL_for_Xena?= Message-ID: (nomination posted to https://review.opendev.org/c/openstack/election/+/778623) Hello, I would like to continue serving as PTL for Release Management during the Xena cycle. Wallaby was my first PTL experience and it allowed me to deeply improve my knowledge about Openstack. I'd like to continue to work on growing the core team to constitute experienced team members to whom we could pass the torch. Anything we can do to help anyone to join the team and know what to do and how to do it will be better for us all. We have a lot of our processes mostly automated, but we also have few unfinished business from the Wallaby cycle. So as a PTL I have the intention to make the team focus on finishing these work. It has been an interesting, active and rewarding release cycle, and I am excited to continue learning and applying those lessons to keep things moving and looking for any ways I can add to the already great body of work we have in our release. I will be happy to do it if you give me the opportunity. 
Thanks for your consideration, Hervé -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu Mar 4 09:12:31 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 4 Mar 2021 09:12:31 +0000 Subject: [kolla][kolla-ansible] distro mixing on hosts In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA048DFF30@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA048DFF30@gmsxchsvr01.thecreation.com> Message-ID: On Wed, 3 Mar 2021 at 22:38, Eric K. Miller wrote: > > Hi, > > > > We have been pretty consistent in deploying CentOS 7 on physical hardware where Kolla Ansible deploys its containers. However, we're going to be switching to Debian 10.8. > > > > It looks like Kolla Ansible likely won't care which distro is installed on the physical hardware, and so some nodes could have CentOS 7 and others have Debian 10.8. Long-term, we'll replace the CentOS 7 machines with Debian 10.8 and re-deploy. > > > > Kolla Container images will still be CentOS 7, but we will be recreating Kolla images with Debian and deploying those long term too. > > > > Is this supported? Any obvious issues we should be aware of? We will have the same interface names, so we will be pretty consistent in terms of the environment - but wanted to see if there was any gotchas that we didn't think of. > Hi Eric, While this is possible, it is not something we "support" upstream. We already have many combinations of OS distro and binary/source. I know that people have tried it in the past, and while it should mostly just work, occasionally it causes issues. A classic issue would be using an ansible fact about the host OS to infer something about the container. While we'll likely accept reasonable fixes for these issues upstream, mixing host and container OS may cause people to pay less attention to bugs raised. If you want to migrate from CentOS 7 to Debian, I would suggest keeping the host and container OS the same, by basing the kolla_base_distro variable on groups. Of course this will also be an untested configuration, and I suggest that you test it thoroughly, in particular things like migration between different versions of libvirt/qemu. Mark > > > Also, I guess I should ask if there are any specific Debian 10.8 issues we should be aware of? > > > > Has anybody else done this? :) > > > > Thanks! > > > Eric > > From andr.kurilin at gmail.com Thu Mar 4 09:56:47 2021 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 4 Mar 2021 11:56:47 +0200 Subject: Need help to get rally test suite In-Reply-To: References: Message-ID: Hi! 
Rally plugins for OpenStack platform moved under separate project&repository. See https://github.com/openstack/rally/blob/master/CHANGELOG.rst#100---2018-06-20 for more details. чт, 4 мар. 2021 г. в 11:14, Taltavull Jean-Francois < jean-francois.taltavull at elca.ch>: > Hello Ankit, > > > > Take a look here : > https://opendev.org/openstack/rally-openstack/src/branch/master/samples/tasks/scenarios/ > > > > Regards, > > Jean-Francois > > > > > > *From:* Ankit Goel > *Sent:* mercredi, 3 mars 2021 19:25 > *To:* openstack-dev at lists.openstack.org > *Subject:* Need help to get rally test suite > > > > Hello Experts, > > > > Need some help on rally. > > > > I have installed Openstack rally on centos. Now I need to run some > benchmarking test suite. I need to know is there any pre-existing test > suite which covers majority of test cases for Openstack control plane. So > that I can just run that test suite. > > > > I remember earlier we use to get the samples under > rally/samples/tasks/scenarios for each component but now I am not seeing it > but I am seeing some dummy files. So from where I can get those samples > json/yaml files which is required to run the rally tests. > > > > Awaiting for your response. > > > > Thanks in Advance > > > > Regards, > > Ankit Goel > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Thu Mar 4 11:07:55 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 4 Mar 2021 11:07:55 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? In-Reply-To: <20210303223802.rijk6d3xqg6aizjd@yuggoth.org> References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> <20210303223802.rijk6d3xqg6aizjd@yuggoth.org> Message-ID: On Wed, 3 Mar 2021 at 22:41, Jeremy Stanley wrote: > > On 2021-03-04 09:31:08 +1100 (+1100), Ian Wienand wrote: > [...] > > I'd personally prefer to build it in a job and publish it so that you > > can see what's been done. > [...] > > Sure, all things being equal I'd also prefer a transparent automated > build with a periodic refresh, but that seems like something we can > make incremental improvement on. Of course, you're a more active > reviewer on DevStack than I am, so I'm happy to follow your lead if > it's something you feel strongly about. Agreed, if there's a need to build and cache another unreleased cirros fix then I would be happy to look into automating the build somewhere but for a one off I think this is just about acceptable. FWIW nova-next is passing with the image below: WIP: nova-next: Start testing the 'q35' machine type https://review.opendev.org/c/openstack/nova/+/708701 I'll clean that change up later today. Thanks again, Lee From admin at gsic.uva.es Thu Mar 4 13:04:13 2021 From: admin at gsic.uva.es (Cristina Mayo Sarmiento) Date: Thu, 04 Mar 2021 14:04:13 +0100 Subject: [glance] Doc about swift as backend Message-ID: <383939b49fad248bba6eee2d248e67c2@gsic.uva.es> Hi everyone, Are there any specific documentation about how to configure Swift as backend of Glance service? I've found this: https://docs.openstack.org/glance/latest/configuration/configuring.html but I'm not sure what are the options I need. Firstly, I follow the installation guides of glance: https://docs.openstack.org/glance/latest/install/install-ubuntu.html and I have now file as store but I'rather change it. 
Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Mar 4 13:29:59 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 04 Mar 2021 14:29:59 +0100 Subject: [neutron] Drivers meeting 05.03.2021 cancelled Message-ID: <11123192.FyhnmifhmU@p1> Hi, Due to lack of agenda let's cancel tomorrow's drivers meeting. Have a great weekend and see You on the meeting next week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From meberhardt at unl.edu.ar Thu Mar 4 15:09:37 2021 From: meberhardt at unl.edu.ar (meberhardt at unl.edu.ar) Date: Thu, 04 Mar 2021 15:09:37 +0000 Subject: [Keystone][Oslo] Policy problems Message-ID: <20210304150937.Horde.HUlWGgiGudMxr39pvoxGJzT@webmail.unl.edu.ar> Hi, my installation complains about deprecated policies and throw errors when I try to run spacific commands in cli (list projects or list users, for example). "You are not authorized to perform the requested action: identity:list_users." I tried to fix this by upgrading the keystone policies using oslopolicy-policy-generator and oslopolicy-policy-upgrade. Just found two places where I have those keystone policy files in my sistem: /etc/openstack_dashboard/keystone_policy.json and cd /lib/python3.6/site-packages/openstack_auth/tests/conf/keystone_policy.json I regenerated & upgraded it but Keystone still complains about the old polices. ¿Where are it placed? ¿How I shoud fix it? OS: CentOS 8 Openstack version: Ussuri Manual installation Thanks, Matias /* --------------------------------------------------------------- */ /* Matías A. Eberhardt */ /* */ /* Centro de Telemática */ /* Secretaría General */ /* UNIVERSIDAD NACIONAL DEL LITORAL */ /* Pje. Martínez 2652 - S3002AAB Santa Fe - Argentina */ /* tel +54(342)455-4245 - FAX +54(342)457-1240 */ /* --------------------------------------------------------------- */ From cjeanner at redhat.com Thu Mar 4 16:16:08 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 4 Mar 2021 17:16:08 +0100 Subject: [TripleO][Validation] Validation CLI simplification In-Reply-To: References: Message-ID: <01e89e71-11ec-2af2-0b66-e14d246fd737@redhat.com> Hello there, On 3/3/21 2:13 PM, Mathieu Bultel wrote: > Thank you Marios for the response. > > On Wed, Mar 3, 2021 at 1:45 PM Marios Andreou > wrote: > > > > On Wed, Mar 3, 2021 at 12:51 PM Mathieu Bultel > wrote: > > Hi TripleO Folks, > > I'm raising this topic to the ML because it appears we have some > divergence regarding some design around the way the Validations > should be used with and without TripleO and I wanted to have a > larger audience, in particular PTL and core thoughts around this > topic. > > The current situation is: > We have an openstack tripleo validator set of sub commands to > handle Validation (run, list ...). > The CLI validation is taking several parameters as an entry > point and in particular the stack/plan, Openstack authentication > and static inventory file. > > By asking the stack/plan name, the CLI is trying to verify and > understand if the plan or the stack is valid, if the Overcloud > exists somewhere in the cloud before passing that to the > tripleo-ansible-inventory script and trying to generate a static > inventory file in regard to what --plan or stack has been passed. 
> > > > Sorry if silly question but, can't we just make 'validate the stack > status' as one of the validations? In fact you already have > something like that > there https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/roles/stack_health/tasks/main.yml#L2 > > . Then only this validation will require the stack name passed in > instead of on every validation run. > > BTW as an aside we should probably remove 'plan' from that code > altogether given the recent 'remove swift and overcloud plan' work > from ramishra/cloudnull and > co @ https://review.opendev.org/q/topic:%22env_merging%22+(status:open%20OR%20status:merged) > > > Not exactly, the main goal of checking if the --stack/plan value is > correct and if the stack provided exists and is right is for getting a > valid Ansible inventory to execute the Validations. > Meaning that all the extra checks in the code in [1] is made for > generating the inventory, which imho should not belong to the Validation > CLI but to something else, and Validation should consider that the > --inventory file is correct (because of the reasons mentioned earlier). well... It actually isn't part of the validation CLI, it's part of the plugin for tripleoclient wrapping the actual validation library... Sooo... That's a usage we'd intend to get, isn't it? All the "bad things" are on the tripleo side, NOT the VF cli/code/lib/content. Cheers, C. > >   > > The code is mainly here: [1]. > > This behavior implies several constraints: > * Validation CLI needs Openstack authentication in order to do > those checks > * It introduces some complexity in the Validation code part: > querying Heat to get the plan name to be sure the name provided > is correct, get the status of the stack... In case of Standalone > deployment, it adds more complexity then. > * This code is only valid for "standard" deployments and usage > meaning it doesn't work for Standalone, for some Upgrade and FFU > stages and needs to be bypassed for pre-undercloud deployment.  > * We hit several blockers around this part of code. > > My proposal is the following: > > Since we are thinking of the future of Validation and we want > something more robust, stronger, simpler, usable and efficient, > I propose to get rid of the plan/stack and authentication > functionalities in the Validation code, and only ask for a valid > inventory provided by the user. > I propose as well to create a new entry point in the TripleO CLI > to generate a static inventory such as: > openstack tripleo inventory generate --output-file my-inv.yaml > and then: > openstack tripleo validator run --validation my-validation > --inventory my-inv.yaml > > By doing that, I think we gain a lot in simplification, it's > more robust, and Validation will only do what it aims for: wrapp > Ansible execution to provide better logging information and history. > > The main concerns about this approach is that the user will have > to provide a valid inventory to the Validation CLI. > I understand the point of view of getting something fully > autonomous, and the way of just kicking *one* command and the > Validation can be *magically* executed against your cloud, but I > think the less complex the Validation code is, the more robust, > stable and usable it will be. > > Deferring a specific entry point for the inventory, which is a > key part of post deployment action, seems something more clear > and robust as well. 
> This part of code could be shared and used for any other usages > instead of calling the inventory script stored into > tripleo-validations. It could then use the tripleo-common > inventory library directly with tripleoclient, instead of > calling from client -> tripleo-validations/scripts -> query > tripleo-common inventory library. > > I know it changes a little bit the usage (adding one command > line in the execution process for getting a valid inventory) but > it's going in a less buggy and racy direction. > And the inventory should be generated only once, or at least at > any big major cloud change. > > So, I'm glad to get your thoughts on that topic and your overall > views around this topic. > > > > The proposal sounds sane to me, but just to be clear by > "authentication functionalities" are you referring specifically to > the '--ssh-user' argument > (https://github.com/openstack/tripleo-validations/blob/1a9f1758d160cc2e543a1cf7cd4507dd3355945a/tripleo_validations/tripleo_validator.py#L243 > )?  > i.e. we will already have that in the generated static inventory so > no need to have in on the CLI?  > > The authentication in the current CLI is only needed to get the stack > output in order to generate the inventory. > If the user provides his inventory, no authentication, no heat stack > check and then Validation can run everywhere on every stage of a TripleO > deployment (early without any TripleO bits, or in the middle of LEAP > Upgrade for example). > > > If the only cost is that we have to have an extra step for > generating the inventory then IMO it is worth doing. I would however > be interested to hear from those that are objecting to the proposal > about why it is a bad idea ;) since you said there has been a > divergence in opinions over the design  > > regards, marios > >   > > > Thanks, > Mathieu > > [1] https://github.com/openstack/tripleo-validations/blob/master/tripleo_validations/tripleo_validator.py#L338-L382 > > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From E.Panter at mittwald.de Thu Mar 4 16:43:38 2021 From: E.Panter at mittwald.de (Erik Panter) Date: Thu, 4 Mar 2021 16:43:38 +0000 Subject: [kolla-ansible] partial upgrades / mixed releases in deployments Message-ID: Hi, We are currently preparing to upgrade a kolla-ansible deployed OpenStack cluster and were wondering if it is possible to upgrade individual services independently of each other, for example to upgrade one service at a time to Ussuri while still using kolla-ansible to deploy and reconfigure the Train versions of the other services. Our idea was that either the Ussuri release of kolla-ansible is used to deploy Train and Ussuri services (with properly migrated configuration), or that two different releases and configurations are used for the two sets of services in the same deployment. Does anyone have experience if this is practical or even possible? Thank you in advance, Erik _____ Erik Panter Systementwickler | Infrastruktur Mittwald CM Service GmbH & Co. 
KG Königsberger Straße 4-6 32339 Espelkamp Tel.: 05772 / 293-900 Fax: 05772 / 293-333 Mobil: 0151 / 12345678 e.panter at mittwald.de https://www.mittwald.de Geschäftsführer: Robert Meyer, Florian Jürgens St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. From ikatzir at infinidat.com Thu Mar 4 17:56:00 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Thu, 4 Mar 2021 19:56:00 +0200 Subject: [ironic] How to move node from active state to manageable Message-ID: <54186D58-DF4C-4E1C-BCEA-D19EF3963215@infinidat.com> Hello Forum, I have an overcloud that gone bad and I am trying to re-deploy it, Running rhos16.1 with one director and two overcloud nodes (compute and controller) I have re-installed undercloud and having both nodes in an active provisioning state. Do I need to run introspection again? Here is the outputted for baremetal node list: (undercloud) [stack at interop010 ~]$ openstack baremetal node list +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | c7bf16b7-eb3c-4022-88de-7c5a78cda174 | power on | active | False | | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | 99223f65-6985-4815-92ff-e19a28c2aab1 | power on | active | False | +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ When I want to move each node from active > manage I get an error: (undercloud) [stack at interop010 ~]$ openstack baremetal node manage 4b02703a-f765-4ebb-85ed-75e88b4cbea5 The requested action "manage" can not be performed on node "4b02703a-f765-4ebb-85ed-75e88b4cbea5" while it is in state "active". (HTTP 400) How do I get to a state which is ready for deployment (available) ? Thanks, Igal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay.faulkner at verizonmedia.com Thu Mar 4 18:12:33 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Thu, 4 Mar 2021 10:12:33 -0800 Subject: [E] [ironic] How to move node from active state to manageable In-Reply-To: <54186D58-DF4C-4E1C-BCEA-D19EF3963215@infinidat.com> References: <54186D58-DF4C-4E1C-BCEA-D19EF3963215@infinidat.com> Message-ID: When a node is active with an instance UUID set, that generally indicates a nova instance (with that UUID) is provisioned onto the node. Nodes that are provisioned (active) are not able to be moved to manageable state. If you want to reprovision these nodes, you'll want to delete the associated instances from Nova (openstack server delete instanceUUID), and after they complete a cleaning cycle they'll return to available. Good luck, Jay Faulkner On Thu, Mar 4, 2021 at 10:01 AM Igal Katzir wrote: > Hello Forum, > > I have an overcloud that gone bad and I am trying to re-deploy it, Running > rhos16.1 with one director and two overcloud nodes (compute and controller) > I have re-installed undercloud and having both nodes in an *active * > provisioning state. 
> Do I need to run introspection again? > Here is the outputted for baremetal node list: > (undercloud) [stack at interop010 ~]$ openstack baremetal node list > > +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ > | UUID | Name | > Instance UUID | Power State | Provisioning State | > Maintenance | > > +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ > | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | > c7bf16b7-eb3c-4022-88de-7c5a78cda174 | power on | active | > False | > | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | > 99223f65-6985-4815-92ff-e19a28c2aab1 | power on | active | > False | > > +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ > When I want to move each node from active > manage I get an error: > (undercloud) [stack at interop010 ~]$ openstack baremetal node manage > 4b02703a-f765-4ebb-85ed-75e88b4cbea5 > The requested action "manage" can not be performed on node > "4b02703a-f765-4ebb-85ed-75e88b4cbea5" while it is in state "active". (HTTP > 400) > > How do I get to a state which is ready for deployment (available) ? > > Thanks, > Igal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu Mar 4 19:05:01 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 4 Mar 2021 11:05:01 -0800 Subject: [docs] Project guides in PDF format In-Reply-To: References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> Message-ID: Peter, Feel free to message me on IRC (johnsom) if you run into questions about enabling the PDF docs for your projects. I did the work for Octavia so might have some answers. Michael On Thu, Mar 4, 2021 at 12:36 AM Bogdan Dobrelya wrote: > > On 3/3/21 9:30 PM, Jeremy Stanley wrote: > > On 2021-03-03 15:05:10 -0500 (-0500), Peter Matulis wrote: > > [...] > >> How do I get a download PDF link like what is available in the > >> published pages of the Nova project? Where is that documented? > > On each project's documentation page, there is a "Download PDF" button, > at the top, near to the "Report a Bug". > > >> > >> In short, yes, I am interested in having downloadable PDFs for the > >> projects that I maintain: > >> > >> https://opendev.org/openstack/charm-guide > >> https://opendev.org/openstack/charm-deployment-guide > > > > The official goal document is still available here: > > > > https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html > > > > Some technical detail can also be found in the earlier docs spec: > > > > https://specs.openstack.org/openstack/docs-specs/specs/ocata/build-pdf-from-rst-guides.html > > > > A bit of spelunking in Git history turns up, for example, this > > change implementing PDF generation for openstack/ironic (you can > > find plenty more if you hunt): > > > > https://review.opendev.org/680585 > > > > I expect you would just do something similar to that. If memory > > serves (it's been a couple years now), each project hit slightly > > different challenges as no two bodies of documentation are every > > quite the same. You'll likely have to dig deep occasionally in > > Sphinx and LaTeX examples to iron things out. 
> > > > One thing which would have been nice as an output of that cycle goal > > was if the PTI section for documentation was updated with related > > technical guidance on building PDFs, but it's rather lacking in that > > department still: > > > > https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation > > > > If you can come up with a succinct summary for what's needed, I > > expect adding it there would be really useful to others too. > > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > From tcr1br24 at gmail.com Thu Mar 4 09:26:52 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Thu, 4 Mar 2021 17:26:52 +0800 Subject: [Consultation] SFC issue Message-ID: Dear Sir, We are testing a simple SFC in OpenStack (Stein) + OpenDaylight (Neon) + Open vSwitch (v2.11.1). It's an all-in-one deployment. We have read the document: https://readthedocs.org/projects/odl-sfc/downloads/pdf/latest/ and https://github.com/opnfv/sfc/blob/master/docs/release/scenarios/os-odl-sfc-noha/scenario.description.rst [image: image.png] Our SFC topology: [image: enter image description here] All on the same compute node. Build SFC through API: 1. openstack sfc flow classifier create --source-ip-prefix 10.20.0.0/24 --logical-source-port p0 FC1 2. openstack sfc port pair create --description "Firewall SF instance 1" --ingress p1 --egress p1 --service-function-parameters correlation=None PP1 3. openstack sfc port pair group create --port-pair PP1 PPG1 4. openstack sfc port chain create --port-pair-group PPG1 --flow-classifier FC1 --chain-parameters correlation=nsh PC1 Ping from client to server, but packet did not pass through firewall,open vswitch log show: [image: enter image description here] Flow table: [image: enter image description here] trace flow: [image: enter image description here] Is there something wrong with the OpenStack instructions? It seems SFC proxy not work or there may be some bugs in "networking-sfc"? Thanks! Sincerely, Jhen-Hao >From NTU CSIE [image: Mailtrack] Sender notified by Mailtrack 03/04/21, 05:26:34 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 29563 bytes Desc: not available URL: From emiller at genesishosting.com Thu Mar 4 19:50:52 2021 From: emiller at genesishosting.com (Eric K. Miller) Date: Thu, 4 Mar 2021 13:50:52 -0600 Subject: [kolla][kolla-ansible] distro mixing on hosts In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA048DFF30@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA048DFF3D@gmsxchsvr01.thecreation.com> > While this is possible, it is not something we "support" upstream. We > already have many combinations of OS distro and binary/source. I know > that people have tried it in the past, and while it should mostly just > work, occasionally it causes issues. A classic issue would be using an > ansible fact about the host OS to infer something about the container. > While we'll likely accept reasonable fixes for these issues upstream, > mixing host and container OS may cause people to pay less attention to > bugs raised. Understood. If we run into issues, I'll report regardless of whether there is attention, just so someone knows what was tried, and what failed. > If you want to migrate from CentOS 7 to Debian, I would suggest > keeping the host and container OS the same, by basing the > kolla_base_distro variable on groups. 
Of course this will also be an > untested configuration, and I suggest that you test it thoroughly, in > particular things like migration between different versions of > libvirt/qemu. Ah, that's a good idea, setting kolla_base_distro in groups as opposed to globally. Thanks! If we can do that, then we'll definitely keep both host and container OS the same. We're testing in a VM environment first, so will try to identify problems before we have to do this on live systems. Thanks again for the quick responses as always Mark! Eric From masayuki.igawa at gmail.com Thu Mar 4 23:09:05 2021 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Fri, 05 Mar 2021 08:09:05 +0900 Subject: [qa][election][ptl] PTL non-candidacy Message-ID: Hello all, I will not be running again for the QA PTL for the Xena cycle because I believe rotating leadership for open source projects is a good thing basically, and I will also have very limited bandwidth for the cycle. Therefore it would be best if someone else runs for QA PTL for the cycle. But I'll be around and continue to involve in OpenStack though it'll be downstream mostly. So, feel free to ping me if you want. Best Regards, -- Masayuki Igawa From rosmaita.fossdev at gmail.com Fri Mar 5 02:51:46 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 4 Mar 2021 21:51:46 -0500 Subject: [cinder] final patches for wallaby os-brick release need reviews Message-ID: <48e2a89a-383f-3ee8-66ce-ab80a92352be@gmail.com> Hello members of the cinder community: Please direct your attention to the following patches at your earliest convenience (i.e., right now): https://review.opendev.org/c/openstack/os-brick/+/777086 "NVMeOF connector driver connection information compatibility fix" - this fixes the regression in the nvmeof connector - patch has passed CI: Zuul, Mellanox SPDK (to detect the regression), and Kioxia (which uses the mdraid feature) - looks pretty much ready to go https://review.opendev.org/c/openstack/os-brick/+/778810 "Add release note for nvmeof connector" - the title says it all https://review.opendev.org/c/openstack/os-brick/+/778807 "Update requirements for wallaby release" - raises the minimum versions in the various requirements files to reflect what we're actually testing with right now - nothing major because of the adjustments made back in January to deal with the upgraded pip dependency resolver Happy reviewing! brian From mark at stackhpc.com Fri Mar 5 08:49:17 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 5 Mar 2021 08:49:17 +0000 Subject: [kolla-ansible] partial upgrades / mixed releases in deployments In-Reply-To: References: Message-ID: On Thu, 4 Mar 2021 at 16:44, Erik Panter wrote: > > Hi, > > We are currently preparing to upgrade a kolla-ansible deployed > OpenStack cluster and were wondering if it is possible to upgrade > individual services independently of each other, for example to > upgrade one service at a time to Ussuri while still using > kolla-ansible to deploy and reconfigure the Train versions of the > other services. > > Our idea was that either the Ussuri release of kolla-ansible is used > to deploy Train and Ussuri services (with properly migrated > configuration), or that two different releases and configurations are > used for the two sets of services in the same deployment. > > Does anyone have experience if this is practical or even possible? Hi Erik, In general, this might work, however it's not something we test or "support", so it cannot be guaranteed. 
Generally there are not too many changes in how services are deployed from release to release, but there are times when procedures change, configuration changes, or we may introduce incompatibilities between the container images and the Ansible deployment tooling. At runtime, there is also the operation of services in a mixed environment to consider, although stable components and APIs help here. All of that is to say that such a configuration would need testing, and ideally not be in place for a long period of time. One similar case we often have is upgrading a single service, often Magnum, to a newer release than the rest of the cloud. To achieve this we set the magnum_tag variable. Mark > > Thank you in advance, > > Erik > > _____ > > Erik Panter > Systementwickler | Infrastruktur > > > Mittwald CM Service GmbH & Co. KG > Königsberger Straße 4-6 > 32339 Espelkamp > > Tel.: 05772 / 293-900 > Fax: 05772 / 293-333 > Mobil: 0151 / 12345678 > > e.panter at mittwald.de > https://www.mittwald.de > > Geschäftsführer: Robert Meyer, Florian Jürgens > > St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen > Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen > > Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit > gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. > From hberaud at redhat.com Fri Mar 5 09:56:48 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 5 Mar 2021 10:56:48 +0100 Subject: [release] Release countdown for week R-5 Mar 08 - Mar 12 Message-ID: Development Focus ----------------- We are getting close to the end of the Wallaby cycle! Next week on 11 March 2021 is the Wallaby-3 milestone, also known as feature freeze. It's time to wrap up feature work in the services and their client libraries, and defer features that won't make it to the Xena cycle. General Information ------------------- This coming week is the deadline for client libraries: their last feature release needs to happen before "Client library freeze" on 11 March, 2021. Only bugfix releases will be allowed beyond this point. When requesting those library releases, you can also include the stable/wallaby branching request with the review. As an example, see the "branches" section here: https://opendev.org/openstack/releases/src/branch/master/deliverables/pike/os-brick.yaml#n2 11 March, 2021 is also the deadline for feature work in all OpenStack deliverables following the cycle-with-rc model. To help those projects produce a first release candidate in time, only bugfixes should be allowed in the master branch beyond this point. Any feature work past that deadline has to be raised as a Feature Freeze Exception (FFE) and approved by the team PTL. Finally, feature freeze is also the deadline for submitting a first version of your cycle-highlights. Cycle highlights are the raw data that helps shape what is communicated in press releases and other release activity at the end of the cycle, avoiding direct contacts from marketing folks. See https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights for more details. 
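Going back to the branching requests mentioned above, the stanza added to the deliverable file is only a few lines. A sketch with a placeholder deliverable name and version (the real files live under deliverables/wallaby/ in openstack/releases):

cat >> deliverables/wallaby/python-exampleclient.yaml <<EOF
branches:
  - name: stable/wallaby
    location: 1.2.0
EOF

where "location" is the version being proposed as the last feature release of that deliverable in the same file.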
Upcoming Deadlines & Dates -------------------------- Cross-project events: - Wallaby-3 milestone (feature freeze): 11 March, 2021 (R-5 week) - RC1 deadline: 22 March, 2021 (R-3 week) - Final RC deadline: 5 April, 2021 (R-1 week) - Final Wallaby final release: 14 April, 2021 Project-specific events: - Cinder 3rd Party CI Compliance Checkpoint: 12 March, 2021 (R-5 week) -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Fri Mar 5 11:54:59 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 05 Mar 2021 12:54:59 +0100 Subject: [cinder] final patches for wallaby os-brick release need reviews In-Reply-To: <48e2a89a-383f-3ee8-66ce-ab80a92352be@gmail.com> References: <48e2a89a-383f-3ee8-66ce-ab80a92352be@gmail.com> Message-ID: <3632106.q0ZmV6gNhb@whitebase.usersys.redhat.com> On Friday, 5 March 2021 03:51:46 CET Brian Rosmaita wrote: > Hello members of the cinder community: > > Please direct your attention to the following patches at your earliest > convenience (i.e., right now): > > https://review.opendev.org/c/openstack/os-brick/+/777086 > "NVMeOF connector driver connection information compatibility fix" > - this fixes the regression in the nvmeof connector > - patch has passed CI: Zuul, Mellanox SPDK (to detect the regression), > and Kioxia (which uses the mdraid feature) > - looks pretty much ready to go > > https://review.opendev.org/c/openstack/os-brick/+/778810 > "Add release note for nvmeof connector" > - the title says it all > > https://review.opendev.org/c/openstack/os-brick/+/778807 > "Update requirements for wallaby release" > - raises the minimum versions in the various requirements files to > reflect what we're actually testing with right now > - nothing major because of the adjustments made back in January to deal > with the upgraded pip dependency resolver Maybe also: https://review.opendev.org/c/openstack/os-brick/+/775545 "Avoid unhandled exceptions during connecting to iSCSI portals" which may make the code more robust? Ciao -- Luigi From rosmaita.fossdev at gmail.com Fri Mar 5 14:31:58 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 5 Mar 2021 09:31:58 -0500 Subject: [glance] Doc about swift as backend In-Reply-To: <383939b49fad248bba6eee2d248e67c2@gsic.uva.es> References: <383939b49fad248bba6eee2d248e67c2@gsic.uva.es> Message-ID: On 3/4/21 8:04 AM, Cristina Mayo Sarmiento wrote: > Hi everyone, > > Are there any specific documentation about how to configure Swift as > backend of Glance service? 
I've found this: > https://docs.openstack.org/glance/latest/configuration/configuring.html > but I'm not sure what are the options I need. Firstly, I follow the > installation guides of glance: > https://docs.openstack.org/glance/latest/install/install-ubuntu.htmland > I have now file as store but I'rather change it. The only thing I'm aware of is the "configuring swift" section of that doc you already found: https://docs.openstack.org/glance/latest/configuration/configuring.html#configuring-the-swift-storage-backend The first decision you have to make is whether to use single tenant (all images stored in an account "owned" by glance) or multi-tenant (the data for an image owned by tenant T is stored in T's account in swift). There are pros and cons to each; you may want to add [ops] to your subject and see if other operators can give you some advice. They may also know of some other helpful documentation. > > Thanks! From donny at fortnebula.com Fri Mar 5 15:23:59 2021 From: donny at fortnebula.com (Donny Davis) Date: Fri, 5 Mar 2021 10:23:59 -0500 Subject: [glance] Doc about swift as backend In-Reply-To: References: <383939b49fad248bba6eee2d248e67c2@gsic.uva.es> Message-ID: I would say that if you're building a public facing service the multi-tenant approach makes sense. If this is internal then a single tenant is the way to go IMO. On Fri, Mar 5, 2021 at 9:35 AM Brian Rosmaita wrote: > On 3/4/21 8:04 AM, Cristina Mayo Sarmiento wrote: > > Hi everyone, > > > > Are there any specific documentation about how to configure Swift as > > backend of Glance service? I've found this: > > https://docs.openstack.org/glance/latest/configuration/configuring.html > > but I'm not sure what are the options I need. Firstly, I follow the > > installation guides of glance: > > https://docs.openstack.org/glance/latest/install/install-ubuntu.htmland > > I have now file as store but I'rather change it. > > The only thing I'm aware of is the "configuring swift" section of that > doc you already found: > > https://docs.openstack.org/glance/latest/configuration/configuring.html#configuring-the-swift-storage-backend > > The first decision you have to make is whether to use single tenant (all > images stored in an account "owned" by glance) or multi-tenant (the data > for an image owned by tenant T is stored in T's account in swift). > > There are pros and cons to each; you may want to add [ops] to your > subject and see if other operators can give you some advice. They may > also know of some other helpful documentation. > > > > > > Thanks! > > > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Mar 5 15:37:02 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 5 Mar 2021 17:37:02 +0200 Subject: [TripleO] Wallaby cycle highlights Message-ID: Hello all o/ Before you have a heart attack ;) Wallaby is still ~1 month away (and more like ~1.5 months+ for TripleO as we always trail the release) :) As mentioned in the last tripleo irc meeting [1] the deadline for wallaby cycle highlights is next Friday 12th March [2]. I had a go at and posted the proposal to https://review.opendev.org/c/openstack/releases/+/778971 . 
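For anyone not familiar with the format, cycle highlights are just a short list entry in one of the team's deliverable files in openstack/releases, along these lines (the wording and file name below are illustrative only; the review above carries the actual proposed text):

cat >> deliverables/wallaby/tripleo-heat-templates.yaml <<EOF
cycle-highlights:
  - Short, press-friendly sentence describing a notable Wallaby feature.
EOF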
Please comment on the review or here if you want to update the wording and especially if I have forgotten to include something please accept my apologies if you let me know I can fix it :) thanks, marios [1] http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-03-02-14.00.log.html#l-176 [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020714.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Fri Mar 5 16:26:19 2021 From: melwittt at gmail.com (melanie witt) Date: Fri, 5 Mar 2021 08:26:19 -0800 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? Message-ID: Hello all, I'm seeking input from the neutron and nova teams regarding policy enforcement for allowing attachment to external networks. Details below. Recently we've been looking at an issue that was reported quite a long time ago (2017) [1] where we have a policy check in nova-compute that controls whether to allow users to attach an external network to their instances. This has historically been a pain point for operators as (1) it goes against convention of having policy checks in nova-api only and (2) setting the policy to anything other than the default requires deploying a policy file change to all of the compute hosts in the deployment. The launchpad bug report mentions neutron refactoring work that was happening at the time, which was thought might make the 'network:attach_external_network' policy check on the nova side redundant. Years have passed since then and customers are still running into this problem, so we are thinking, can this policy check be removed on the nova-compute side now? I did a local test with devstack to verify what the behavior is if we were to remove the 'network:attach_external_network' policy check entirely [2] and found that neutron appears to properly enforce permission to attach to external networks itself. It appears that the enforcement on the neutron side makes the nova policy check redundant. When I tried to boot an instance to attach to an external network, neutron API returned the following: INFO neutron.pecan_wsgi.hooks.translation [req-58fdb103-cd20-48c9-b73b-c9074061998c req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create port on this network [...] INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 Can anyone from the neutron team confirm whether it would be OK for us to remove our nova-compute policy check for external network attach permission and let neutron take care of the check? And on the nova side, I assume we would need a deprecation cycle before removing the 'network:attach_external_network' policy. If we can get confirmation from the neutron team, is anyone opposed to the idea of deprecating the 'network:attach_external_network' policy in the Wallaby cycle, to be removed in the Xena release? I would appreciate your thoughts. Cheers, -melanie [1] https://bugs.launchpad.net/nova/+bug/1675486 [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 From ankit at aptira.com Fri Mar 5 05:51:03 2021 From: ankit at aptira.com (Ankit Goel) Date: Fri, 5 Mar 2021 05:51:03 +0000 Subject: Need help to get rally test suite In-Reply-To: References: Message-ID: Oh ok. Thanks Jean. I got it now. 
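For the archives, a minimal way to exercise one of those bundled scenarios once rally and the rally-openstack plugins are installed looks roughly like this; sample file names and the image/flavor arguments inside them vary between versions, so treat it as a sketch:

pip install rally-openstack              # pulls in rally itself
rally db create
rally deployment create --fromenv --name existing   # reads the usual OS_* variables
git clone https://opendev.org/openstack/rally-openstack
rally task start rally-openstack/samples/tasks/scenarios/nova/boot-and-delete.json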
Regards, Ankit Goel From: Andrey Kurilin Sent: 04 March 2021 15:27 To: Taltavull Jean-Francois Cc: Ankit Goel ; openstack-dev at lists.openstack.org Subject: Re: Need help to get rally test suite Hi! Rally plugins for OpenStack platform moved under separate project&repository. See https://github.com/openstack/rally/blob/master/CHANGELOG.rst#100---2018-06-20 for more details. чт, 4 мар. 2021 г. в 11:14, Taltavull Jean-Francois >: Hello Ankit, Take a look here : https://opendev.org/openstack/rally-openstack/src/branch/master/samples/tasks/scenarios/ Regards, Jean-Francois From: Ankit Goel > Sent: mercredi, 3 mars 2021 19:25 To: openstack-dev at lists.openstack.org Subject: Need help to get rally test suite Hello Experts, Need some help on rally. I have installed Openstack rally on centos. Now I need to run some benchmarking test suite. I need to know is there any pre-existing test suite which covers majority of test cases for Openstack control plane. So that I can just run that test suite. I remember earlier we use to get the samples under rally/samples/tasks/scenarios for each component but now I am not seeing it but I am seeing some dummy files. So from where I can get those samples json/yaml files which is required to run the rally tests. Awaiting for your response. Thanks in Advance Regards, Ankit Goel -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Mar 5 17:27:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 5 Mar 2021 12:27:51 -0500 Subject: [cinder] final patches for wallaby os-brick release need reviews In-Reply-To: <3632106.q0ZmV6gNhb@whitebase.usersys.redhat.com> References: <48e2a89a-383f-3ee8-66ce-ab80a92352be@gmail.com> <3632106.q0ZmV6gNhb@whitebase.usersys.redhat.com> Message-ID: On 3/5/21 6:54 AM, Luigi Toscano wrote: > On Friday, 5 March 2021 03:51:46 CET Brian Rosmaita wrote: >> Hello members of the cinder community: >> >> Please direct your attention to the following patches at your earliest >> convenience (i.e., right now): >> >> https://review.opendev.org/c/openstack/os-brick/+/777086 >> "NVMeOF connector driver connection information compatibility fix" >> - this fixes the regression in the nvmeof connector >> - patch has passed CI: Zuul, Mellanox SPDK (to detect the regression), >> and Kioxia (which uses the mdraid feature) >> - looks pretty much ready to go >> >> https://review.opendev.org/c/openstack/os-brick/+/778810 >> "Add release note for nvmeof connector" >> - the title says it all >> >> https://review.opendev.org/c/openstack/os-brick/+/778807 >> "Update requirements for wallaby release" >> - raises the minimum versions in the various requirements files to >> reflect what we're actually testing with right now >> - nothing major because of the adjustments made back in January to deal >> with the upgraded pip dependency resolver > > Maybe also: > https://review.opendev.org/c/openstack/os-brick/+/775545 > "Avoid unhandled exceptions during connecting to iSCSI portals" > > which may make the code more robust? Thanks for pointing this one out, and thanks to Takashi for being so responsive to the reviews and making quick revisions. It's in the gate now. 
> > Ciao > From rlandy at redhat.com Fri Mar 5 17:50:50 2021 From: rlandy at redhat.com (Ronelle Landy) Date: Fri, 5 Mar 2021 12:50:50 -0500 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) Message-ID: Hello All, Just a reminder that we will be starting to implement steps to migrate from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. The plan is outlined in: https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday In summary, on Sunday, we plan to: - Move the master integration line for promotions to build containers and images on centos-8 stream nodes - Change the release files to bring down centos-8 stream repos for use in test jobs (test jobs will still start on centos-8 nodes - changing this nodeset will happen later) - Image build and container build check jobs will be moved to non-voting during this transition. We have already run all the test jobs in RDO with centos-8 stream content running on centos-8 nodes to prequalify this transition. We will update this list with status as we go forward with next steps. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Mar 5 21:59:54 2021 From: amy at demarco.com (Amy) Date: Fri, 5 Mar 2021 15:59:54 -0600 Subject: [openstack-community] Keystone deprecated policies problem In-Reply-To: <20210305211222.Horde.HHcZNM5k3FoOKQXguoLtJMA@webmail.unl.edu.ar> References: <20210305211222.Horde.HHcZNM5k3FoOKQXguoLtJMA@webmail.unl.edu.ar> Message-ID: Adding the OpenStack discuss list. Thanks, Amy (spotz) > On Mar 5, 2021, at 3:19 PM, meberhardt at unl.edu.ar wrote: > > Hi, > > my installation complains about deprecated policies and throw errors when I try to run specific commands in cli as admin user(list projects or list users, for example). > > "You are not authorized to perform the requested action: identity:list_users." > > I tried to fix this by upgrading the keystone policies using oslopolicy-policy-generator and oslopolicy-policy-upgrade. Just found two places where I have those keystone policy files in my sistem: /etc/openstack_dashboard/keystone_policy.json and cd /lib/python3.6/site-packages/openstack_auth/tests/conf/keystone_policy.json > > I regenerated & upgraded it but Keystone still complains about the old polices. Where are it placed? How I shoud fix it? > > OS: CentOS 8 > Openstack version: Ussuri > Manual installation > > Thanks, > > Matias > > /* --------------------------------------------------------------- */ > /* Matías A. Eberhardt */ > /* */ > /* Centro de Telemática */ > /* Secretaría General */ > /* UNIVERSIDAD NACIONAL DEL LITORAL */ > /* Pje. 
Martínez 2652 - S3002AAB Santa Fe - Argentina */ > /* tel +54(342)455-4245 - FAX +54(342)457-1240 */ > /* --------------------------------------------------------------- */ > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community From victoria at vmartinezdelacruz.com Fri Mar 5 22:13:03 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Fri, 5 Mar 2021 23:13:03 +0100 Subject: [manila] Wallaby Collab Review - CephFS drivers updates (Mar 8th) Message-ID: Hi everyone, Next Monday (March 8th) we will hold a Manila collaborative review session in which we'll go through the code and review the latest changes introduced in the CephFS drivers [0][1] This meeting is scheduled to last one hour, starting at 5.00pm UTC. It might extend a bit, but the most important should be covered within the hour. Meeting notes and relevant links are available in [2] Your feedback will be truly appreciated. Cheers, V [0] https://review.opendev.org/q/topic:%22bp%252Fupdate-cephfs-drivers%22+(status:open%20OR%20status:merged) [1] https://review.opendev.org/q/topic:%22bp%252Fcreate-share-from-snapshot-cephfs%22+(status:open%20OR%20status:merged) [2] https://etherpad.opendev.org/p/update-cephfs-drivers-collab-review -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Mar 5 22:13:09 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 5 Mar 2021 14:13:09 -0800 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: Octavia doesn't really care what the ML2 is in neutron, there are no dependencies on it as we use stable neutron APIs. Devstack however is creating a port on the management network for the controller processes. Octavia has a function hook to allow the SDN providers to handle creating access to the management network. When OVN moved into neutron this hook was implemented in neutron for linux bridge, OVS, and OVN. I should also note that we have been running Octavia gate jobs, on neutron with OVN, since before the migration of OVN into neutron, so I would not expect any issues from the proposed change to the default ML2 in neutron. Michael On Tue, Mar 2, 2021 at 7:05 PM Lingxian Kong wrote: > > Hi, > > Thanks for all your hard work on this. > > I'm wondering is there any doc proposed for devstack to tell people who are not interested in OVN to keep the current devstack behaviour? I have a feeling that using OVN as default Neutron driver would break the CI jobs for some projects like Octavia, Trove, etc. which rely on ovs port for the set up. > > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi wrote: >> >> On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: >> > >> > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: >> > > Hi all, >> > > >> > > As part of the Victoria PTG [0] the Neutron community agreed upon >> > > switching the default backend in Devstack to OVN. 
A lot of work has >> > > been done since, from porting the OVN devstack module to the DevStack >> > > tree, refactoring the DevStack module to install OVN from distro >> > > packages, implementing features to close the parity gap with ML2/OVS, >> > > fixing issues with tests and distros, etc... >> > > >> > > We are now very close to being able to make the switch and we've >> > > thought about sending this email to the broader community to raise >> > > awareness about this change as well as bring more attention to the >> > > patches that are current on review. >> > > >> > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is >> > > discontinued and/or not supported anymore. The ML2/OVS driver is still >> > > going to be developed and maintained by the upstream Neutron >> > > community. >> > can we ensure that this does not happen until the xena release. >> > in generall i think its ok to change the default but not this late in the cycle. >> > i would also like to ensure we keep at least one non ovn based multi node job in >> > nova until https://review.opendev.org/c/openstack/nova/+/602432 is merged and possible after. >> > right now the event/neutorn interaction is not the same during move operations. >> > > >> > > Below is a e per project explanation with relevant links and issues of >> > > where we stand with this work right now: >> > > >> > > * Keystone: >> > > >> > > Everything should be good for Keystone, the gate is happy with the >> > > changes. Here is the test patch: >> > > https://review.opendev.org/c/openstack/keystone/+/777963 >> > > >> > > * Glance: >> > > >> > > Everything should be good for Glace, the gate is happy with the >> > > changes. Here is the test patch: >> > > https://review.opendev.org/c/openstack/glance/+/748390 >> > > >> > > * Swift: >> > > >> > > Everything should be good for Swift, the gate is happy with the >> > > changes. Here is the test patch: >> > > https://review.opendev.org/c/openstack/swift/+/748403 >> > > >> > > * Ironic: >> > > >> > > Since chainloading iPXE by the OVN built-in DHCP server is work in >> > > progress, we've changed most of the Ironic jobs to explicitly enable >> > > ML2/OVS and everything is merged, so we should be good for Ironic too. >> > > Here is the test patch: >> > > https://review.opendev.org/c/openstack/ironic/+/748405 >> > > >> > > * Cinder: >> > > >> > > Cinder is almost complete. There's one test failure in the >> > > "tempest-slow-py3" job run on the >> > > "test_port_security_macspoofing_port" test. >> > > >> > > This failure is due to a bug in core OVN [1]. This bug has already >> > > been fixed upstream [2] and the fix has been backported down to the >> > > branch-20.03 [3] of the OVN project. However, since we install OVN >> > > from packages we are currently waiting for this fix to be included in >> > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already >> > > contacted the package maintainer which has been very supportive of >> > > this work and will work on the package update, but he maintain a >> > > handful of backports in that package which is not yet included in OVN >> > > 20.03 upstream and he's now working with the core OVN community [4] to >> > > include it first in the branch and then create a new package for it. >> > > Hopefully this will happen soon. 
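For anyone tracking that packaging fix, checking what the Focal archive currently ships is a one-liner (package names as used by Ubuntu's OVN packaging):

apt-cache policy ovn-common ovn-central ovn-host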
>> > > >> > > But for now we have a few options moving on with this issue: >> > > >> > > 1- Wait for the new package version >> > > 2- Mark the test as unstable until we get the new package version >> > > 3- Compile OVN from source instead of installing it from packages >> > > (OVN_BUILD_FROM_SOURCE=True in local.conf) >> > i dont think we should default to ovn untill a souce build is not required. >> > compiling form souce while not supper expensice still adds time to the job >> > execution and im not sure we should be paying that cost on every devstack job run. >> > >> > we could maybe compile it once and bake the package into the image or host it on a mirror >> > but i think we should avoid this option if we have alternitives. >> > > >> > > What do you think about it ? >> > > >> > > Here is the test patch for Cinder: >> > > https://review.opendev.org/c/openstack/cinder/+/748227 >> > > >> > > * Nova: >> > > >> > > There are a few patches waiting for review for Nova, which are: >> > > >> > > 1- Adapting the live migration scripts to work with ML2/OVN: Basically >> > > the scripts were trying to stop the Neutron agent (q-agt) process >> > > which is not part of an ML2/OVN deployment. The patch changes the code >> > > to check if that system unit exists before trying to stop it. >> > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 >> > > >> > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change >> > > which can be removed one release cycle after we switch DevStack to >> > > ML2/OVN. Grenade will test updating from the release version to the >> > > master branch but, since the default of the released version is not >> > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job >> > > is not supported. >> > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 >> > > >> > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS >> > > minimum bandwidth feature which is not yet supported by ML2/OVN [5][6] >> > > therefore we are temporarily enabling ML2/OVS for this job until that >> > > feature lands in core OVN. >> > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 >> > > >> > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about these >> > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now >> > > because he feels like a change in the default network driver a few >> > > weeks prior to the upstream code freeze can be concerning. We do not >> > > know yet precisely when we are changing the default due to the current >> > > patches we need to get merged but, if this is a shared feeling among >> > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova >> > > until we get a new release in OpenStack. >> > yep this is still my view. >> > i would suggest we do the work required in the repos but not merge it until the xena release >> > is open. thats technically at RC1 so march 25th >> > i think we can safely do the swich after that but i would not change the defualt in any project >> > before then. >> > > >> > > Here's the test patch for Nova: >> > > https://review.opendev.org/c/openstack/nova/+/776945 >> > > >> > > * DevStack: >> > > >> > > And this is the final patch that will make this all happen: >> > > https://review.opendev.org/c/openstack/devstack/+/735097 >> > > >> > > It changes the default in DevStack from ML2/OVS to ML2/OVN. 
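For reference, the source-build option discussed in this thread is a one-line toggle in the [[local|localrc]] section of devstack's local.conf, using the variable quoted above:

OVN_BUILD_FROM_SOURCE=True

which trades the wait for a fixed distro package against the extra compile time on every job run that Sean mentions.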
It's been >> > > a long and bumpy road to get to this point and I would like to say >> > > thanks to everyone involved so far and everyone that read the whole >> > > email, please let me know your thoughts. >> > thanks for working on this. >> > > >> > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg >> > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 >> > > [2] https://patchwork.ozlabs.org/project/openvswitch/patch/20200319122641.473776-1-numans at ovn.org/ >> > > [3] https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d8ab39380cb >> > > [4] https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050961.html >> > > [5] https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a99fe76a9ee1/gate/post_test_hook.sh#L129-L143 >> > > [6] https://docs.openstack.org/neutron/latest/ovn/gaps.html >> > > >> > > Cheers, >> > > Lucas >> >> ++ Thank you indeed for working diligently on this important change. >> >> Please do note that devstack, and the base job that you're modifying >> is used by many other projects besides the ones that you have >> enumerated in the subject line. >> I suggest using [all] as a better subject line indicator to get the >> attention of folks like me who have filters based on the subject line. >> Also, the network substrate is important for the project I help >> maintain: Manila, which provides shared file systems over a network - >> so I followed your lead and submitted a dependent patch. I hope to >> reach out to you in case we see some breakages: >> https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 >> >> > > >> > >> > >> > >> From victoria at vmartinezdelacruz.com Fri Mar 5 22:17:11 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Fri, 5 Mar 2021 23:17:11 +0100 Subject: [manila] Wallaby Collab Review - CephFS drivers updates (Mar 8th) In-Reply-To: References: Message-ID: Small update: the meeting will start at 6.00pm UTC instead of 5.00pm UTC. Thanks, V On Fri, Mar 5, 2021 at 11:13 PM Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Hi everyone, > > Next Monday (March 8th) we will hold a Manila collaborative review session > in which we'll go through the code and review the latest changes introduced > in the CephFS drivers [0][1] > > This meeting is scheduled to last one hour, starting at 5.00pm UTC. It > might extend a bit, but the most important should be covered within the > hour. > > Meeting notes and relevant links are available in [2] > > Your feedback will be truly appreciated. > > Cheers, > > V > > [0] > https://review.opendev.org/q/topic:%22bp%252Fupdate-cephfs-drivers%22+(status:open%20OR%20status:merged) > [1] > https://review.opendev.org/q/topic:%22bp%252Fcreate-share-from-snapshot-cephfs%22+(status:open%20OR%20status:merged) > [2] https://etherpad.opendev.org/p/update-cephfs-drivers-collab-review > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Sat Mar 6 07:37:39 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 06 Mar 2021 08:37:39 +0100 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? In-Reply-To: References: Message-ID: <2609049.HbdjPCY3gI@p1> Hi, Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: > Hello all, > > I'm seeking input from the neutron and nova teams regarding policy > enforcement for allowing attachment to external networks. Details below. 
> > Recently we've been looking at an issue that was reported quite a long > time ago (2017) [1] where we have a policy check in nova-compute that > controls whether to allow users to attach an external network to their > instances. > > This has historically been a pain point for operators as (1) it goes > against convention of having policy checks in nova-api only and (2) > setting the policy to anything other than the default requires deploying > a policy file change to all of the compute hosts in the deployment. > > The launchpad bug report mentions neutron refactoring work that was > happening at the time, which was thought might make the > 'network:attach_external_network' policy check on the nova side redundant. > > Years have passed since then and customers are still running into this > problem, so we are thinking, can this policy check be removed on the > nova-compute side now? > > I did a local test with devstack to verify what the behavior is if we > were to remove the 'network:attach_external_network' policy check > entirely [2] and found that neutron appears to properly enforce > permission to attach to external networks itself. It appears that the > enforcement on the neutron side makes the nova policy check redundant. > > When I tried to boot an instance to attach to an external network, > neutron API returned the following: > > INFO neutron.pecan_wsgi.hooks.translation > [req-58fdb103-cd20-48c9-b73b-c9074061998c > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client > error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create > port on this network > [...] > INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST > /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 I just checked in Neutron code and we don't have any policy rule related directly to the creation of ports on the external network. Probably what You had there is the fact that Your router:external network was owned by other tenant and due to that You wasn't able to create port directly on it. If as an admin You would create external network which would belong to Your tenant, You would be allowed to create port there. > > Can anyone from the neutron team confirm whether it would be OK for us > to remove our nova-compute policy check for external network attach > permission and let neutron take care of the check? I don't know exactly the reasons why it is forbiden on Nova's side but TBH I don't see any reason why we should forbid pluging instances directly to the network marked as router:external=True. > > And on the nova side, I assume we would need a deprecation cycle before > removing the 'network:attach_external_network' policy. If we can get > confirmation from the neutron team, is anyone opposed to the idea of > deprecating the 'network:attach_external_network' policy in the Wallaby > cycle, to be removed in the Xena release? > > I would appreciate your thoughts. > > Cheers, > -melanie > > [1] https://bugs.launchpad.net/nova/+bug/1675486 > [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From skaplons at redhat.com Sat Mar 6 08:30:58 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 06 Mar 2021 09:30:58 +0100 Subject: [Neutron][Nova][Ironic][Cinder][Keystone][Glance][Swift] OVN as the default network backend for DevStack In-Reply-To: References: Message-ID: <28531574.fdTSbW8Txg@p1> Hi, Dnia piątek, 5 marca 2021 23:13:09 CET Michael Johnson pisze: > Octavia doesn't really care what the ML2 is in neutron, there are no > dependencies on it as we use stable neutron APIs. > > Devstack however is creating a port on the management network for the > controller processes. Octavia has a function hook to allow the SDN > providers to handle creating access to the management network. When > OVN moved into neutron this hook was implemented in neutron for linux > bridge, OVS, and OVN. > > I should also note that we have been running Octavia gate jobs, on > neutron with OVN, since before the migration of OVN into neutron, so I > would not expect any issues from the proposed change to the default > ML2 in neutron. Thx for confirmation that Octavia should be fine with it :) > > Michael > > On Tue, Mar 2, 2021 at 7:05 PM Lingxian Kong wrote: > > Hi, > > > > Thanks for all your hard work on this. > > > > I'm wondering is there any doc proposed for devstack to tell people who > > are not interested in OVN to keep the current devstack behaviour? I have > > a feeling that using OVN as default Neutron driver would break the CI > > jobs for some projects like Octavia, Trove, etc. which rely on ovs port > > for the set up. > > > > --- > > Lingxian Kong > > Senior Cloud Engineer (Catalyst Cloud) > > Trove PTL (OpenStack) > > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > > On Wed, Mar 3, 2021 at 2:03 PM Goutham Pacha Ravi wrote: > >> On Mon, Mar 1, 2021 at 10:29 AM Sean Mooney wrote: > >> > On Mon, 2021-03-01 at 16:07 +0000, Lucas Alvares Gomes wrote: > >> > > Hi all, > >> > > > >> > > As part of the Victoria PTG [0] the Neutron community agreed upon > >> > > switching the default backend in Devstack to OVN. A lot of work has > >> > > been done since, from porting the OVN devstack module to the DevStack > >> > > tree, refactoring the DevStack module to install OVN from distro > >> > > packages, implementing features to close the parity gap with ML2/OVS, > >> > > fixing issues with tests and distros, etc... > >> > > > >> > > We are now very close to being able to make the switch and we've > >> > > thought about sending this email to the broader community to raise > >> > > awareness about this change as well as bring more attention to the > >> > > patches that are current on review. > >> > > > >> > > Note that moving DevStack to ML2/OVN does not mean that ML2/OVS is > >> > > discontinued and/or not supported anymore. The ML2/OVS driver is > >> > > still > >> > > going to be developed and maintained by the upstream Neutron > >> > > community. > >> > > >> > can we ensure that this does not happen until the xena release. > >> > in generall i think its ok to change the default but not this late in > >> > the cycle. i would also like to ensure we keep at least one non ovn > >> > based multi node job in nova until > >> > https://review.opendev.org/c/openstack/nova/+/602432 is merged and > >> > possible after. 
right now the event/neutorn interaction is not the > >> > same during move operations.>> > > >> > > Below is a e per project explanation with relevant links and issues > >> > > of > >> > > where we stand with this work right now: > >> > > > >> > > * Keystone: > >> > > > >> > > Everything should be good for Keystone, the gate is happy with the > >> > > changes. Here is the test patch: > >> > > https://review.opendev.org/c/openstack/keystone/+/777963 > >> > > > >> > > * Glance: > >> > > > >> > > Everything should be good for Glace, the gate is happy with the > >> > > changes. Here is the test patch: > >> > > https://review.opendev.org/c/openstack/glance/+/748390 > >> > > > >> > > * Swift: > >> > > > >> > > Everything should be good for Swift, the gate is happy with the > >> > > changes. Here is the test patch: > >> > > https://review.opendev.org/c/openstack/swift/+/748403 > >> > > > >> > > * Ironic: > >> > > > >> > > Since chainloading iPXE by the OVN built-in DHCP server is work in > >> > > progress, we've changed most of the Ironic jobs to explicitly enable > >> > > ML2/OVS and everything is merged, so we should be good for Ironic > >> > > too. > >> > > Here is the test patch: > >> > > https://review.opendev.org/c/openstack/ironic/+/748405 > >> > > > >> > > * Cinder: > >> > > > >> > > Cinder is almost complete. There's one test failure in the > >> > > "tempest-slow-py3" job run on the > >> > > "test_port_security_macspoofing_port" test. > >> > > > >> > > This failure is due to a bug in core OVN [1]. This bug has already > >> > > been fixed upstream [2] and the fix has been backported down to the > >> > > branch-20.03 [3] of the OVN project. However, since we install OVN > >> > > from packages we are currently waiting for this fix to be included in > >> > > the packages for Ubuntu Focal (it's based on OVN 20.03). I already > >> > > contacted the package maintainer which has been very supportive of > >> > > this work and will work on the package update, but he maintain a > >> > > handful of backports in that package which is not yet included in OVN > >> > > 20.03 upstream and he's now working with the core OVN community [4] > >> > > to > >> > > include it first in the branch and then create a new package for it. > >> > > Hopefully this will happen soon. > >> > > > >> > > But for now we have a few options moving on with this issue: > >> > > > >> > > 1- Wait for the new package version > >> > > 2- Mark the test as unstable until we get the new package version > >> > > 3- Compile OVN from source instead of installing it from packages > >> > > (OVN_BUILD_FROM_SOURCE=True in local.conf) > >> > > >> > i dont think we should default to ovn untill a souce build is not > >> > required. > >> > compiling form souce while not supper expensice still adds time to the > >> > job > >> > execution and im not sure we should be paying that cost on every > >> > devstack job run. > >> > > >> > we could maybe compile it once and bake the package into the image or > >> > host it on a mirror but i think we should avoid this option if we have > >> > alternitives. > >> > > >> > > What do you think about it ? 
> >> > > > >> > > Here is the test patch for Cinder: > >> > > https://review.opendev.org/c/openstack/cinder/+/748227 > >> > > > >> > > * Nova: > >> > > > >> > > There are a few patches waiting for review for Nova, which are: > >> > > > >> > > 1- Adapting the live migration scripts to work with ML2/OVN: > >> > > Basically > >> > > the scripts were trying to stop the Neutron agent (q-agt) process > >> > > which is not part of an ML2/OVN deployment. The patch changes the > >> > > code > >> > > to check if that system unit exists before trying to stop it. > >> > > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776419 > >> > > > >> > > 2- Explicitly set grenade job to ML2/OVS: This is a temporary change > >> > > which can be removed one release cycle after we switch DevStack to > >> > > ML2/OVN. Grenade will test updating from the release version to the > >> > > master branch but, since the default of the released version is not > >> > > ML2/OVN, upgrading from ML2/OVS to ML2/OVN as part of the grenade job > >> > > is not supported. > >> > > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776934 > >> > > > >> > > 3- Explicitly set nova-next job to ML2/OVS: This job uses the QoS > >> > > minimum bandwidth feature which is not yet supported by ML2/OVN > >> > > [5][6] > >> > > therefore we are temporarily enabling ML2/OVS for this job until that > >> > > feature lands in core OVN. > >> > > > >> > > Patch: https://review.opendev.org/c/openstack/nova/+/776944 > >> > > > >> > > I also spoke briefly with Sean Mooney (irc: sean-k-mooney) about > >> > > these > >> > > changes and he suggested keeping all the Nova jobs on ML2/OVS for now > >> > > because he feels like a change in the default network driver a few > >> > > weeks prior to the upstream code freeze can be concerning. We do not > >> > > know yet precisely when we are changing the default due to the > >> > > current > >> > > patches we need to get merged but, if this is a shared feeling among > >> > > the Nova community I can work on enabling ML2/OVS on all jobs in Nova > >> > > until we get a new release in OpenStack. > >> > > >> > yep this is still my view. > >> > i would suggest we do the work required in the repos but not merge it > >> > until the xena release is open. thats technically at RC1 so march 25th > >> > i think we can safely do the swich after that but i would not change > >> > the defualt in any project before then. > >> > > >> > > Here's the test patch for Nova: > >> > > https://review.opendev.org/c/openstack/nova/+/776945 > >> > > > >> > > * DevStack: > >> > > > >> > > And this is the final patch that will make this all happen: > >> > > https://review.opendev.org/c/openstack/devstack/+/735097 > >> > > > >> > > It changes the default in DevStack from ML2/OVS to ML2/OVN. It's been > >> > > a long and bumpy road to get to this point and I would like to say > >> > > thanks to everyone involved so far and everyone that read the whole > >> > > email, please let me know your thoughts. > >> > > >> > thanks for working on this. 
> >> > > >> > > [0] https://etherpad.opendev.org/p/neutron-victoria-ptg > >> > > [1] https://bugs.launchpad.net/tempest/+bug/1728886 > >> > > [2] > >> > > https://patchwork.ozlabs.org/project/openvswitch/patch/2020031912264 > >> > > 1.473776-1-numans at ovn.org/ [3] > >> > > https://github.com/ovn-org/ovn/commit/0c26bc03064f2c21d208f0f860b48d > >> > > 8ab39380cb [4] > >> > > https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050 > >> > > 961.html [5] > >> > > https://github.com/openstack/nova/blob/ded25f33c734ebff963f06984707a > >> > > 99fe76a9ee1/gate/post_test_hook.sh#L129-L143 [6] > >> > > https://docs.openstack.org/neutron/latest/ovn/gaps.html > >> > > > >> > > Cheers, > >> > > Lucas > >> > >> ++ Thank you indeed for working diligently on this important change. > >> > >> Please do note that devstack, and the base job that you're modifying > >> is used by many other projects besides the ones that you have > >> enumerated in the subject line. > >> I suggest using [all] as a better subject line indicator to get the > >> attention of folks like me who have filters based on the subject line. > >> Also, the network substrate is important for the project I help > >> maintain: Manila, which provides shared file systems over a network - > >> so I followed your lead and submitted a dependent patch. I hope to > >> reach out to you in case we see some breakages: > >> https://review.opendev.org/c/openstack/manila-tempest-plugin/+/778346 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From smooney at redhat.com Sat Mar 6 12:53:02 2021 From: smooney at redhat.com (Sean Mooney) Date: Sat, 06 Mar 2021 12:53:02 +0000 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? In-Reply-To: <2609049.HbdjPCY3gI@p1> References: <2609049.HbdjPCY3gI@p1> Message-ID: On Sat, 2021-03-06 at 08:37 +0100, Slawek Kaplonski wrote: > Hi, > > Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: > > Hello all, > > > > I'm seeking input from the neutron and nova teams regarding policy > > enforcement for allowing attachment to external networks. Details below. > > > > Recently we've been looking at an issue that was reported quite a long > > time ago (2017) [1] where we have a policy check in nova-compute that > > controls whether to allow users to attach an external network to their > > instances. > > > > This has historically been a pain point for operators as (1) it goes > > against convention of having policy checks in nova-api only and (2) > > setting the policy to anything other than the default requires deploying > > a policy file change to all of the compute hosts in the deployment. > > > > The launchpad bug report mentions neutron refactoring work that was > > happening at the time, which was thought might make the > > 'network:attach_external_network' policy check on the nova side redundant. > > > > Years have passed since then and customers are still running into this > > problem, so we are thinking, can this policy check be removed on the > > nova-compute side now? > > > > I did a local test with devstack to verify what the behavior is if we > > were to remove the 'network:attach_external_network' policy check > > entirely [2] and found that neutron appears to properly enforce > > permission to attach to external networks itself. 
It appears that the > > enforcement on the neutron side makes the nova policy check redundant. > > > > When I tried to boot an instance to attach to an external network, > > neutron API returned the following: > > > > INFO neutron.pecan_wsgi.hooks.translation > > [req-58fdb103-cd20-48c9-b73b-c9074061998c > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client > > error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create > > port on this network > > [...] > > INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST > > /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 > > I just checked in Neutron code and we don't have any policy rule related > directly to the creation of ports on the external network. > Probably what You had there is the fact that Your router:external network was > owned by other tenant and due to that You wasn't able to create port directly > on it. If as an admin You would create external network which would belong to > Your tenant, You would be allowed to create port there. > > > > > Can anyone from the neutron team confirm whether it would be OK for us > > to remove our nova-compute policy check for external network attach > > permission and let neutron take care of the check? > > I don't know exactly the reasons why it is forbiden on Nova's side but TBH I > don't see any reason why we should forbid pluging instances directly to the > network marked as router:external=True. i have listed the majority of my consers in https://bugzilla.redhat.com/show_bug.cgi?id=1933047#c6 which is one of the downstream bug related to this. there are a number of issue sthat i was concerd about but tl;dr - booting ip form external network consumes ip from the floating ip subnet withtout using quota - by default neutron upstream and downstream is configured to provide nova metadata api access via the neutron router not the dhcp server so by default the metadata api will not work with external network. that would require neueton to be configre to use the dhcp server for metadta or config driver or else insance wont get ssh keys ingject by cloud init. - there might be security considertaions. typeically external networks are vlan or flat networks and in some cases operators may not want tenats to be able to boot on such networks expsially with vnic-type=driect-physical since that might allow them to violate tenant isolation if the top of rack switch was not configured by a heracical port binding driver to provide adiquite isolation in that case. this is not so much because this is an external network and more a concern anytime you do PF passtough but there may be other implication to allowing this by default. that said if neutron has a way to express policy in this regard nova does not have too. router:external=True is really used to mark a network as providing connectivity such that it can be used for the gateway port of neutron routers. the workaroud that i have come up with currently is to mark the network as shared and then use neturon rbac to only share it with the teant that owns it. i assigning external network to speficic tenat being useful when you want to provde a specific ip allocation pool to them or just a set of ips. i understand that the current motivation for this request is commign form some edge deployments. in general i dont thinkthis would be widely used but for those that need its better ux then marking it as shared. 
> > > > > And on the nova side, I assume we would need a deprecation cycle before > > removing the 'network:attach_external_network' policy. If we can get > > confirmation from the neutron team, is anyone opposed to the idea of > > deprecating the 'network:attach_external_network' policy in the Wallaby > > cycle, to be removed in the Xena release? > > > > I would appreciate your thoughts. > > > > Cheers, > > -melanie > > > > [1] https://bugs.launchpad.net/nova/+bug/1675486 > > [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 > > From skaplons at redhat.com Sat Mar 6 16:00:02 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 06 Mar 2021 17:00:02 +0100 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? In-Reply-To: References: <2609049.HbdjPCY3gI@p1> Message-ID: <4299490.gzMO86gykG@p1> Hi, Dnia sobota, 6 marca 2021 13:53:02 CET Sean Mooney pisze: > On Sat, 2021-03-06 at 08:37 +0100, Slawek Kaplonski wrote: > > Hi, > > > > Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: > > > Hello all, > > > > > > I'm seeking input from the neutron and nova teams regarding policy > > > enforcement for allowing attachment to external networks. Details below. > > > > > > Recently we've been looking at an issue that was reported quite a long > > > time ago (2017) [1] where we have a policy check in nova-compute that > > > controls whether to allow users to attach an external network to their > > > instances. > > > > > > This has historically been a pain point for operators as (1) it goes > > > against convention of having policy checks in nova-api only and (2) > > > setting the policy to anything other than the default requires deploying > > > a policy file change to all of the compute hosts in the deployment. > > > > > > The launchpad bug report mentions neutron refactoring work that was > > > happening at the time, which was thought might make the > > > 'network:attach_external_network' policy check on the nova side > > > redundant. > > > > > > Years have passed since then and customers are still running into this > > > problem, so we are thinking, can this policy check be removed on the > > > nova-compute side now? > > > > > > I did a local test with devstack to verify what the behavior is if we > > > were to remove the 'network:attach_external_network' policy check > > > entirely [2] and found that neutron appears to properly enforce > > > permission to attach to external networks itself. It appears that the > > > enforcement on the neutron side makes the nova policy check redundant. > > > > > > When I tried to boot an instance to attach to an external network, > > > neutron API returned the following: > > > > > > INFO neutron.pecan_wsgi.hooks.translation > > > [req-58fdb103-cd20-48c9-b73b-c9074061998c > > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client > > > error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create > > > port on this network > > > [...] > > > INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c > > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST > > > /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 > > > > I just checked in Neutron code and we don't have any policy rule related > > directly to the creation of ports on the external network. > > Probably what You had there is the fact that Your router:external network > > was owned by other tenant and due to that You wasn't able to create port > > directly on it. 
If as an admin You would create external network which > > would belong to Your tenant, You would be allowed to create port there. > > > > > Can anyone from the neutron team confirm whether it would be OK for us > > > to remove our nova-compute policy check for external network attach > > > permission and let neutron take care of the check? > > > > I don't know exactly the reasons why it is forbiden on Nova's side but TBH > > I don't see any reason why we should forbid pluging instances directly to > > the network marked as router:external=True. > > i have listed the majority of my consers in > https://bugzilla.redhat.com/show_bug.cgi?id=1933047#c6 which is one of the > downstream bug related to this. > there are a number of issue sthat i was concerd about but tl;dr > - booting ip form external network consumes ip from the floating ip subnet > withtout using quota - by default neutron upstream and downstream is > configured to provide nova metadata api access via the neutron router not > the dhcp server so by default the metadata api will not work with external > network. that would require neueton to be configre to use the dhcp server > for metadta or config driver or else insance wont get ssh keys ingject by > cloud init. > - there might be security considertaions. typeically external networks are > vlan or flat networks and in some cases operators may not want tenats to be > able to boot on such networks expsially with vnic-type=driect-physical > since that might allow them to violate tenant isolation if the top of rack > switch was not configured by a heracical port binding driver to provide > adiquite isolation in that case. this is not so much because this is an > external network and more a concern anytime you do PF passtough but there > may be other implication to allowing this by default. that said if neutron > has a way to express policy in this regard nova does not have too. Those are all valid points, true. But TBH, if administrator created such network as pool of FIPs for the user, then users will not be able to plug vms directly to that network as they aren't owners of the network so neutron will forbid that. > > router:external=True is really used to mark a network as providing > connectivity such that it can be used for the gateway port of neutron > routers. the workaroud that i have come up with currently is to mark the > network as shared and then use neturon rbac to only share it with the teant > that owns it. > > i assigning external network to speficic tenat being useful when you want to > provde a specific ip allocation pool to them or just a set of ips. i > understand that the current motivation for this request is commign form > some edge deployments. in general i dont thinkthis would be widely used but > for those that need its better ux then marking it as shared. > > > > And on the nova side, I assume we would need a deprecation cycle before > > > removing the 'network:attach_external_network' policy. If we can get > > > confirmation from the neutron team, is anyone opposed to the idea of > > > deprecating the 'network:attach_external_network' policy in the Wallaby > > > cycle, to be removed in the Xena release? > > > > > > I would appreciate your thoughts. > > > > > > Cheers, > > > -melanie > > > > > > [1] https://bugs.launchpad.net/nova/+bug/1675486 > > > [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ikatzir at infinidat.com Sun Mar 7 09:36:53 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Sun, 7 Mar 2021 11:36:53 +0200 Subject: [E] [ironic] How to move node from active state to manageable In-Reply-To: References: <54186D58-DF4C-4E1C-BCEA-D19EF3963215@infinidat.com> Message-ID: <8564CA59-4F41-4CB7-91B5-92CB1C38A28A@infinidat.com> Thanks Jay for the prompt response! (had my weekend off) I have deleted the instances through nova as you suggested, I then see them being cleaned , which is Good! > (undercloud) [stack at interop010 ~]$ openstack baremetal node list +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | power on | clean wait | False | | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | power off | cleaning | False | +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ And after a while they become available > (undercloud) [stack at interop010 ~]$ openstack baremetal node list +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | power off | available | False | | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | power on | cleaning | False | +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ now I can start deployment as planned. Igal > On 4 Mar 2021, at 20:12, Jay Faulkner wrote: > > are provisioned (active) are not able to be moved to manageable state. -------------- next part -------------- An HTML attachment was scrubbed... URL: From manubk2020 at gmail.com Sun Mar 7 16:50:02 2021 From: manubk2020 at gmail.com (Manu B) Date: Sun, 7 Mar 2021 22:20:02 +0530 Subject: [neutron-dynamic-routing] Support for multiple BGP speaker Message-ID: Hi, As understood from the code, currently the neutron dynamic routing supports only one BGP speaker per host *# os-ken can only support One *speaker if* self.cache.get_hosted_bgp_speakers_count() == 1: raise bgp_driver_exc.BgpSpeakerMaxScheduled(count=1) * Could you please let me know if this is due to some limitation of the os-ken driver? Or are there any other reasons for this limitation? Thanks, Manu -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Sun Mar 7 23:24:21 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Sun, 7 Mar 2021 16:24:21 -0700 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy wrote: > Hello All, > > Just a reminder that we will be starting to implement steps to migrate > from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. 
> > The plan is outlined in: > https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday > > In summary, on Sunday, we plan to: > - Move the master integration line for promotions to build containers and > images on centos-8 stream nodes > - Change the release files to bring down centos-8 stream repos for use in > test jobs (test jobs will still start on centos-8 nodes - changing this > nodeset will happen later) > - Image build and container build check jobs will be moved to non-voting > during this transition. > > We have already run all the test jobs in RDO with centos-8 stream content > running on centos-8 nodes to prequalify this transition. > > We will update this list with status as we go forward with next steps. > > Thanks! > OK... status update. Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure Monday wasn't a disaster upstream. TripleO master jobs have successfully been migrated to CentOS-8-Stream today. You should see "8-stream" now in /etc/yum.repos.d/tripleo-centos.* repos. Your CentOS-8-Stream Master hash is: edd46672cb9b7a661ecf061942d71a72 Your master repos are: https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo Containers, and overcloud images should all be centos-8-stream. The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. Please open launchpad bugs w/ the "alert" tag if you are having any issues. Thanks and well done all! > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Mar 8 01:06:16 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 07 Mar 2021 19:06:16 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-6 Update Message-ID: <1780f5ed65e.d5bbdf1f119480.463321966283339384@ghanshyammann.com> Hello Everyone, Please find the week's R-6 updates on 'Migrate RBAC Policy Format from JSON to YAML' wallaby community-wide goals. Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) Progress Summary: =============== Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml * Projects completed: 21 * Projects required to merge the patches: 8 * Projects required to push the patches: 2 (horizon and Openstackansible) * Projects do not need any work: 16 Patches ready to merge: ================== * Octavia: https://review.opendev.org/c/openstack/octavia/+/764578 * Magnum: https://review.opendev.org/c/openstack/magnum/+/767242 * Murano: https://review.opendev.org/c/openstack/murano/+/768520 * Panko: https://review.opendev.org/c/openstack/panko/+/768498 * Solum: https://review.opendev.org/c/openstack/solum/+/768381 * Zaqar: https://review.opendev.org/c/openstack/zaqar/+/768488 Updates: ======= * Fixed the lower constraints failure for all the current open patches. * Few project patches are failing on config object initialization, I am debugging those and will fix them soon. * It is important to merge all the service side patches so that JSON format deprecation happens in the Wallaby release. 
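For teams that still need to convert, the mechanical part is a one-liner with the tool oslo.policy ships for this goal (the namespace and file paths below are only examples, adjust them to your service):

    oslopolicy-convert-json-to-yaml --namespace nova \
        --policy-file /etc/nova/policy.json \
        --output-file /etc/nova/policy.yaml

The generated YAML keeps any overridden rules and comments out rules that match the defaults, so it is usually a good starting point for review before the JSON file is dropped.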
-gmann From amotoki at gmail.com Mon Mar 8 01:21:20 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 8 Mar 2021 10:21:20 +0900 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-6 Update In-Reply-To: <1780f5ed65e.d5bbdf1f119480.463321966283339384@ghanshyammann.com> References: <1780f5ed65e.d5bbdf1f119480.463321966283339384@ghanshyammann.com> Message-ID: On Mon, Mar 8, 2021 at 10:08 AM Ghanshyam Mann wrote: > > Hello Everyone, > > Please find the week's R-6 updates on 'Migrate RBAC Policy Format from JSON to YAML' wallaby community-wide goals. > > Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) > > Progress Summary: > =============== > Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml > > * Projects completed: 21 > * Projects required to merge the patches: 8 > * Projects required to push the patches: 2 (horizon and Openstackansible) Horizon does not use /etc//policy.json, so the community goal does not directly affect horizon or it is more complicated than what the goal says. However, https://review.opendev.org/c/openstack/horizon/+/750134 which is to handle policy-in-code and deprecated rules in horizon addressed the goal indirectly. Thanks, Akihiro > * Projects do not need any work: 16 > > Patches ready to merge: > ================== > * Octavia: https://review.opendev.org/c/openstack/octavia/+/764578 > * Magnum: https://review.opendev.org/c/openstack/magnum/+/767242 > * Murano: https://review.opendev.org/c/openstack/murano/+/768520 > * Panko: https://review.opendev.org/c/openstack/panko/+/768498 > * Solum: https://review.opendev.org/c/openstack/solum/+/768381 > * Zaqar: https://review.opendev.org/c/openstack/zaqar/+/768488 > > > Updates: > ======= > * Fixed the lower constraints failure for all the current open patches. > * Few project patches are failing on config object initialization, I am debugging those and will fix them soon. > * It is important to merge all the service side patches so that JSON format deprecation happens in the Wallaby release. > > -gmann > From gmann at ghanshyammann.com Mon Mar 8 02:35:11 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 07 Mar 2021 20:35:11 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-6 Update In-Reply-To: References: <1780f5ed65e.d5bbdf1f119480.463321966283339384@ghanshyammann.com> Message-ID: <1780fb03ba2.e898433f119842.1432973529588624272@ghanshyammann.com> ---- On Sun, 07 Mar 2021 19:21:20 -0600 Akihiro Motoki wrote ---- > On Mon, Mar 8, 2021 at 10:08 AM Ghanshyam Mann wrote: > > > > Hello Everyone, > > > > Please find the week's R-6 updates on 'Migrate RBAC Policy Format from JSON to YAML' wallaby community-wide goals. > > > > Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) > > > > Progress Summary: > > =============== > > Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml > > > > * Projects completed: 21 > > * Projects required to merge the patches: 8 > > * Projects required to push the patches: 2 (horizon and Openstackansible) > > Horizon does not use /etc//policy.json, so the community goal > does not directly affect horizon or it is more complicated than what > the goal says. > However, https://review.opendev.org/c/openstack/horizon/+/750134 which > is to handle policy-in-code and deprecated rules in horizon addressed > the goal indirectly. 
Thanks amotoki for the updates and adding notes in etherpad, I have moved Horizon to the completed section. > > Thanks, > Akihiro > > > * Projects do not need any work: 16 > > > > Patches ready to merge: > > ================== > > * Octavia: https://review.opendev.org/c/openstack/octavia/+/764578 > > * Magnum: https://review.opendev.org/c/openstack/magnum/+/767242 > > * Murano: https://review.opendev.org/c/openstack/murano/+/768520 > > * Panko: https://review.opendev.org/c/openstack/panko/+/768498 > > * Solum: https://review.opendev.org/c/openstack/solum/+/768381 > > * Zaqar: https://review.opendev.org/c/openstack/zaqar/+/768488 > > > > > > Updates: > > ======= > > * Fixed the lower constraints failure for all the current open patches. > > * Few project patches are failing on config object initialization, I am debugging those and will fix them soon. > > * It is important to merge all the service side patches so that JSON format deprecation happens in the Wallaby release. > > > > -gmann > > > From amotoki at gmail.com Mon Mar 8 02:55:50 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 8 Mar 2021 11:55:50 +0900 Subject: [horizon][i18n][infra][horizon plugins] Renaming Chinese locales in Django from zh-cn/zh-tw to zh-hans/zh-hant In-Reply-To: References: Message-ID: Hi, The patch in openstack/openstack-zuul-jobs which renames Chinese locales (zh-cn and zh-tw) to zh-hans and zh-hant has landed last week. You can see Chinese locale renaming in recent translation import patches [2]. I also checked the job results of propose-translation-update job [3] and all worked expectedly. ACTIONS for project teams with horizon plugins: All horizon plugins which include Chinese translations need to be released. If your plugin contains wrote: > > Hi, > > The horizon team is planning to switch Chinese language codes in > Django codes from zh-cn/zh-tw to zh-hans/zh-hant. Django, a framework > used in horizon, recommends to use them since more than 5 years ago > [1][2]. > > This change touches Chinese locales in the dashbaord codes of horizon > and its plugins only. It does not change Chinese locales in other > translations like documentations and non-Django python codes. This is > to minimize the impact to other translations and the translation > platform. > > ### What are/are not changed in repositories > > * horizon and horizon plugins > * locales in the dashboard codes are renamed from zh-cn/zh-tw to > zh-hans/zh-hant > * locales in doc/ and releasenotes/ are not changed > * other repositories > * no locale change happens > > NOTE: > * This leads to a situation that we have two different locales in > horizon and plugin repositories (zh-hans/hant in the code folders and > zh-cn/tw in doc and releasenotes folders), but it affects only > developers and does not affect horizon consumers (operators/users) and > translators. > * In addition, documentations are translated in OpenStack-wide (not > only in horizon and plugins). By keeping locales in docs, locales in > documentation translations will be consistent. > > ### Impact on Zanata > > In Zanata (the translation platform), zh-cn/zh-tw continue to be used, > so no change is visible to translators. > The infra job proposes zh-cn/zh-tw GUI translatoins as zh-hans/zh-hant > translations to horizon and plugin repositories. > > NOTE: > The alternative is to create the corresponding language teams > (zh-hans/zh-hant) in Zanata, but it affects Chinese translators a lot. 
> They need to join two language teams to translate horizon > (zh-hans/zh-hant) and docs (zh-cn/zh-tw). It makes translator workflow > complicated. The proposed way has no impact on translators and they > can continue the current translation process and translate both > horizon and docs under a single language code. > > ### Changes in the infra scripts > > Converting Chinese locales of dashboard translations from zh-cn/zh-tw > to zh-hans/zh-hant is handled by the periodic translation job. > propose_translation_update job is responsible for this. > > [propose_translation_update.sh] > * Move zh-cn/zh-tw translations related to Django codes in horizon and > its plugins from zanata to zh-hans/hant directory. > * This should happen in the master branch (+ future stable branhces > such as stable/wallaby). > > ### Additional Remarks > > I18n SIG respects all language team coordinators & members, and is > looking forward to seeing discussions and/or active contributions from > the language teams. > > Currently all language codes follow the ISO 639-1 standard (language > codes with relevant country codes), but the change introduces new > language code forms like zh-hans/zh-hant. This follows IETF BCP 47 > which recommends a combination of language codes and ISO 15924 script > code (four letters). We now have two different language codes in > OpenStack world. This is just to minimize the impact on the existing > translations. It is not ideal. We are open for further discussion on > language codes and translation support. > > ### References > > [1] https://code.djangoproject.com/ticket/18419 > [2] https://www.djbook.ru/rel1.7/releases/1.7.html#language-codes-zh-cn-zh-tw-and-fy-nl > > Thanks, > Akihiro Motoki (irc: amotoki) From gouthampravi at gmail.com Mon Mar 8 05:53:42 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Sun, 7 Mar 2021 21:53:42 -0800 Subject: [election][manila] PTL Candidacy for Xena Message-ID: Greetings Zorillas & other Stackers, This is my candidacy to be the PTL for the OpenStack Manila team through the Xena cycle. I've had the great privilege of leading this unique team for the past three releases. During this time, we've had an enviable momentum and some fantastic achievements while building impactful software. I have enjoyed being a catalyst and an advocate for this motivated team, and I wish to do this a bit longer. In Wallaby, we harnessed our diverse strengths to mentor five new contributors through our student project internships, two of who were sponsored through Outreachy funding. They have all expressed their desire to become long term contributors. I am very proud of the manila team for their sincerity and enthusiasm in aiding new Stackers. I seek to further this outreach through the Xena release cycle. Testing and project quality will remain a top personal priority. The team was able to get new interop guidelines published last cycle, and we will focus on adding more test coverage under these guidelines, alongside addressing test gaps and innovating on more test scenarios. We have continued to implement the pro-active backport policy to get bug fixes back to stable branches and request timely releases of these stable branches. In the Xena cycle, I intend to codify this policy based on our learning and spread the release responsibility and know-how around the team. We split project maintenance into sub-teams during the Wallaby cycle, and recruited sub-team core reviewers. An increase in focus and velocity ensued across project code repositories. 
I intend to propose more additions to these sub-teams in the Xena cycle and encourage more contributors to take on project maintenance responsibilities. So, if you will have me, I wish to serve you through Xena and get things done. Thank you for your support, Goutham Pacha Ravi IRC: gouthamr From marios at redhat.com Mon Mar 8 07:46:22 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 8 Mar 2021 09:46:22 +0200 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Mon, Mar 8, 2021 at 1:27 AM Wesley Hayutin wrote: > > > On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy wrote: > >> Hello All, >> >> Just a reminder that we will be starting to implement steps to migrate >> from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. >> >> The plan is outlined in: >> https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday >> >> In summary, on Sunday, we plan to: >> - Move the master integration line for promotions to build containers >> and images on centos-8 stream nodes >> - Change the release files to bring down centos-8 stream repos for use in >> test jobs (test jobs will still start on centos-8 nodes - changing this >> nodeset will happen later) >> - Image build and container build check jobs will be moved to non-voting >> during this transition. >> > >> We have already run all the test jobs in RDO with centos-8 stream content >> running on centos-8 nodes to prequalify this transition. >> >> We will update this list with status as we go forward with next steps. >> >> Thanks! >> > > OK... status update. > > Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure > Monday wasn't a disaster upstream. TripleO master jobs have successfully > been migrated to CentOS-8-Stream today. You should see "8-stream" now in > /etc/yum.repos.d/tripleo-centos.* repos. > > \o/ this is fantastic! nice work all thanks to everyone involved for getting this done with minimal disruption tripleo-ci++ > Your CentOS-8-Stream Master hash is: > > edd46672cb9b7a661ecf061942d71a72 > > Your master repos are: > https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo > > Containers, and overcloud images should all be centos-8-stream. > > The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. > > Please open launchpad bugs w/ the "alert" tag if you are having any issues. > > Thanks and well done all! > > > >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danilo_dellorto at it.ibm.com Mon Mar 8 09:31:24 2021 From: danilo_dellorto at it.ibm.com (DANILO PAOLO DELL'ORTO) Date: Mon, 8 Mar 2021 10:31:24 +0100 Subject: Post to Openstack mail list Message-ID: Hi, I would like to post questions on the openstack mail lists, can you authorize me? best regards -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 9685 bytes Desc: not available URL: From lyarwood at redhat.com Mon Mar 8 09:31:54 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Mon, 8 Mar 2021 09:31:54 +0000 Subject: [infra][qe][nova] Would it be possible to host a custom cirros image somewhere for the nova-next job? 
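For anyone who wants to double check that a node really picked up the new content, something like the following (assuming the standard repo file layout) should show the 8-stream URLs in the tripleo/delorean repo files:

    grep -H baseurl /etc/yum.repos.d/*.repo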
In-Reply-To: References: <1f878393cfb1e49a86313c87f0ccb72f5e4ad5a9.camel@redhat.com> <20210303134135.ycnzw5z3f2eaawkm@yuggoth.org> <20210303192745.vukicp4d6b6jkzqt@yuggoth.org> <20210303223802.rijk6d3xqg6aizjd@yuggoth.org> Message-ID: On Thu, 4 Mar 2021 at 11:07, Lee Yarwood wrote: > > On Wed, 3 Mar 2021 at 22:41, Jeremy Stanley wrote: > > > > On 2021-03-04 09:31:08 +1100 (+1100), Ian Wienand wrote: > > [...] > > > I'd personally prefer to build it in a job and publish it so that you > > > can see what's been done. > > [...] > > > > Sure, all things being equal I'd also prefer a transparent automated > > build with a periodic refresh, but that seems like something we can > > make incremental improvement on. Of course, you're a more active > > reviewer on DevStack than I am, so I'm happy to follow your lead if > > it's something you feel strongly about. > > Agreed, if there's a need to build and cache another unreleased cirros > fix then I would be happy to look into automating the build somewhere > but for a one off I think this is just about acceptable. > > FWIW nova-next is passing with the image below: > > WIP: nova-next: Start testing the 'q35' machine type > https://review.opendev.org/c/openstack/nova/+/708701 > > I'll clean that change up later today. After all of this Cirros 0.5.2 was released on Friday after another colleague asked. I've reverted my original change to cache the dev build, introduced a change to cache 0.5.2 and switched devstack over to 0.5.2 below: Revert "Add custom cirros image with ahci module enabled to cache" https://review.opendev.org/c/openstack/project-config/+/779140 Add Cirros 0.5.2 to cache https://review.opendev.org/c/openstack/project-config/+/779178 Update Cirros to 0.5.2 https://review.opendev.org/c/openstack/devstack/+/779179 Apologies for all of the noise with this, hopefully that's it for now. Thanks again, Lee From zigo at debian.org Mon Mar 8 10:16:21 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 8 Mar 2021 11:16:21 +0100 Subject: [cinder] Inflated version dependency in os-brick Message-ID: Hi, As I've started packaging Wallaby for Debian, I noticed that os-brick has very inflated version dependencies: $ diff -u ../requirements.txt requirements.txt --- ../requirements.txt 2021-03-08 10:54:44.896134101 +0100 +++ requirements.txt 2021-03-08 10:54:48.848127942 +0100 @@ -2,17 +2,16 @@ # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. -pbr!=2.1.0,>=5.4.1 # Apache-2.0 -eventlet>=0.25.1 # MIT -oslo.concurrency>=3.26.0 # Apache-2.0 -oslo.context>=2.23.0 # Apache-2.0 -oslo.log>=3.44.0 # Apache-2.0 -oslo.i18n>=3.24.0 # Apache-2.0 -oslo.privsep>=1.32.0 # Apache-2.0 -oslo.serialization>=2.29.0 # Apache-2.0 -oslo.service!=1.28.1,>=1.24.0 # Apache-2.0 -oslo.utils>=3.34.0 # Apache-2.0 -requests>=2.14.2 # Apache-2.0 -six>=1.10.0 # MIT -tenacity>=6.0.0 # Apache-2.0 -os-win>=3.0.0 # Apache-2.0 +pbr>=5.5.1 # Apache-2.0 +eventlet>=0.30.1 # MIT +oslo.concurrency>=4.4.0 # Apache-2.0 +oslo.context>=3.1.1 # Apache-2.0 +oslo.log>=4.4.0 # Apache-2.0 +oslo.i18n>=5.0.1 # Apache-2.0 +oslo.privsep>=2.4.0 # Apache-2.0 +oslo.serialization>=4.1.0 # Apache-2.0 +oslo.service>=2.5.0 # Apache-2.0 +oslo.utils>=4.8.0 # Apache-2.0 +requests>=2.25.1 # Apache-2.0 +tenacity>=6.3.1 # Apache-2.0 +os-win>=5.4.0 # Apache-2.0 Some of the above is clearly abusing. For example, I don't think Cinder really needs version 5.5.1 of PBR, and 5.5.0 should really be enough. 
I've traced it back to: https://review.opendev.org/c/openstack/os-brick/+/778807 If this is a consequence of the project stopping to test lower bounds of requirements, then this has gone really insane, and the bad practice must stop immediately to restore sanity, and we must revert. We're back 5 years ago where each projects was uselessly requiring the latest version of everything for no reason... From a downstream package maintainer perspective, this is a disaster move. Haven't we learned from our mistakes? Cheers, Thomas Goirand (zigo) From hberaud at redhat.com Mon Mar 8 10:38:25 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 8 Mar 2021 11:38:25 +0100 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Hello Everyone! Wanted to give you another reminder! Looking forward to see your highlights by the end of the week! Hervé Le ven. 26 févr. 2021 à 00:03, Kendall Nelson a écrit : > Hello Everyone! > > It's time to start thinking about calling out 'cycle-highlights' in your > deliverables! I have no idea how we are here AGAIN ALREADY, alas, here we > be. > > As PTLs, you probably get many pings towards the end of every release > cycle by various parties (marketing, management, journalists, etc) asking > for highlights of what is new and what significant changes are coming in > the new release. By putting them all in the same place it makes them easy > to reference because they get compiled into a pretty website like this from > the last few releases: Stein[1], Train[2]. > > We don't need a fully fledged marketing message, just a few highlights > (3-4 ideally), from each project team. Looking through your release notes > might be a good place to start. > > *The deadline for cycle highlights is the end of the R-5 week [3] (next > week) on March 12th.* > > How To Reminder: > ------------------------- > > Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the > openstack/releases repo like this: > > cycle-highlights: > - Introduced new service to use unused host to mine bitcoin. > > The formatting options for this tag are the same as what you are probably > used to with Reno release notes. > > Also, you can check on the formatting of the output by either running > locally: > > tox -e docs > > And then checking the resulting doc/build/html/$RELEASE/highlights.html > file or the output of the build-openstack-sphinx-docs job under > html/$RELEASE/highlights.html. > > Feel free to add me as a reviewer on your patches. > > Can't wait to see you all have accomplished this release! 
> > Thanks :) > > -Kendall Nelson (diablo_rojo) > > [1] https://releases.openstack.org/stein/highlights.html > [2] https://releases.openstack.org/train/highlights.html > [3] htt > > https://releases.openstack.org/wallaby/schedule.html > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Mar 8 10:43:14 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 8 Mar 2021 11:43:14 +0100 Subject: [election][neutron] PTL Candidacy for Xena Message-ID: <20210308104314.ij2onqbattsfncsv@p1.localdomain> Hi, I want to propose my candidacy for Neutron PTL in the Xena cycle. Wallaby was my third cycle serving as Neutron PTL. I think I helped to keep the project in a healthy state and I would like to continue serving as the PTL and keep Neutron running well in the Xena cycle. In Wallaby we accomplished many important goals like e.g.: * finished migration to the engine facade, * improve our CI jobs and its overall stability, * and close of many feature parity gaps between OVN and OVS backends. But we also didn't finish some of the goals which were set for that cycle, like e.g. switching OVN to be the default Neutron backend in Devstack. If I will be elected, I would like to set it as my main goal for Xena cycle. We did a lot of work on adoption of the OVN backend in Neutron already and I think that we are now ready to move on and switch it to be the default backend in the Devstack. I want to focus on couple of things in the Xena cycle: * reduce our bug backlog as it is huge now, * maintenance and continue improvements of our CI, as this is "never ending story", * keep Neutron running in a smooth way. As I mentioned above, if I will be elected, Xena will be my 4th cycle as Neutron PTL and I would like to find some potential successors for the next cycles to help them onboard and understand what are the duties and responsibilities of the PTL. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From thierry at openstack.org Mon Mar 8 11:38:20 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 8 Mar 2021 12:38:20 +0100 Subject: [largescale-sig] Next meeting: March 10, 15utc Message-ID: Hi everyone, Our next Large Scale SIG meeting will be this Wednesday in #openstack-meeting-3 on IRC, at 15UTC. 
You can doublecheck how it translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210310T15 Belmiro Moreira will chair this meeting. A number of topics have already been added to the agenda, including discussing CentOS Stream, reflecting on last video meeting and pick a topic for the next one. Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From skaplons at redhat.com Mon Mar 8 12:21:06 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 8 Mar 2021 13:21:06 +0100 Subject: [neutron] Bug deputy report - week of March 1st Message-ID: <20210308122106.upiztvi7ksybbbtx@p1.localdomain> Hi, I was Neutron's bug deputy last week. Below is my summary. **critical** https://bugs.launchpad.net/neutron/+bug/1917487 - [FT] "IpNetnsCommand.add" command fails frequently - Assigned to Rodolfo https://bugs.launchpad.net/nova/+bug/1917610 - Migration and resize tests from tempest.scenario.test_minbw_allocation_placement.MinBwAllocationPlacementTest failing in neutron-tempest-dvr-ha-multinode-full - fix done in tempest https://review.opendev.org/c/openstack/tempest/+/778451 https://bugs.launchpad.net/neutron/+bug/1917793 - [HA] keepalived_state_change does not finish "handle_initial_state"execution - gate failure, assigned to Rodolfo **high** https://bugs.launchpad.net/neutron/+bug/1917409 - neutron-l3-agents won't become active - needs assignment https://bugs.launchpad.net/neutron/+bug/1917508 - Router create fails when router with same name already exists - related to ovn l3 plugin, needs assignment https://bugs.launchpad.net/neutron/+bug/1917370 - [functional] ovn maintenance worker isn't mocked in functional tests - **High** as it's causing gate failures or timeouts, assigned, patch: https://review.opendev.org/c/openstack/neutron/+/778080 - already merged https://bugs.launchpad.net/neutron/+bug/1917393 - [L3][Port forwarding] admin state DOWN/UP router will lose all pf-floating-ips and nat rules - assigned, In progress, Patch: https://review.opendev.org/c/openstack/neutron/+/778126 https://bugs.launchpad.net/neutron/+bug/1918108 - [OVN] IGMP snooping traps IGMP messages - assigned to Lucas already, **medium** https://bugs.launchpad.net/neutron/+bug/1917448 - GRE tunnels over IPv6 have wrong packet_type set in OVS - set as **Medium**, assigned, patch: https://review.opendev.org/c/openstack/neutron/+/778178, already merged **Wishlist** https://bugs.launchpad.net/neutron/+bug/1917437 - Enable querier for multicast (IGMP) in OVN, Assigned, In progress https://bugs.launchpad.net/neutron/+bug/1917866 - No need to fetch whole network object on port create - assigned to Oleg, patch proposed already https://review.opendev.org/c/openstack/neutron/+/778881 https://bugs.launchpad.net/neutron/+bug/1904559 - Designate driver: allow_reverse_dns_lookup doesn't works if dns_domain zone wasn't created - Switched to be RFE - please, especially drivers team members, check and triage it so we can discuss that on the drivers meeting, **undecided** https://bugs.launchpad.net/networking-ovn/+bug/1914857 - AttributeError: 'NoneType' object has no attribute 'db_find_rows' - Rodolfo and Lucas are triaging it, **Old bugs which needs attention** https://bugs.launchpad.net/neutron/+bug/1752903 - Floating IPs should not allocate IPv6 addresses - L3 subteam should take a look at it. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From eblock at nde.ag Mon Mar 8 13:18:56 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 08 Mar 2021 13:18:56 +0000 Subject: Cleanup database(s) Message-ID: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> Hi *, I have a quick question, last year we migrated our OpenStack to a highly available environment through a reinstall of all nodes. The migration went quite well, we're working happily in the new cloud but the databases still contain deprecated data. For example, the nova-scheduler logs lines like these on a regular basis: /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING nova.scheduler.host_manager [...] No compute service record found for host compute1 This is one of the old compute nodes that has been reinstalled and is now compute01. I tried to find the right spot to delete some lines in the DB but there are a couple of places so I wanted to check and ask you for some insights. The scheduler messages seem to originate in /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py ---snip--- for cell_uuid, computes in compute_nodes.items(): for compute in computes: service = services.get(compute.host) if not service: LOG.warning( "No compute service record found for host %(host)s", {'host': compute.host}) continue ---snip--- So I figured it could be this table in the nova DB: ---snip--- MariaDB [nova]> select host,deleted from compute_nodes; +-----------+---------+ | host | deleted | +-----------+---------+ | compute01 | 0 | | compute02 | 0 | | compute03 | 0 | | compute04 | 0 | | compute05 | 0 | | compute1 | 0 | | compute2 | 0 | | compute3 | 0 | | compute4 | 0 | +-----------+---------+ ---snip--- What would be the best approach here to clean up a little? I believe it would be safe to simply purge those lines containing the old compute node, but there might be a smoother way. Or maybe there are more places to purge old data from? I'd appreciate any ideas. Regards, Eugen From gmann at ghanshyammann.com Mon Mar 8 13:47:25 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 08 Mar 2021 07:47:25 -0600 Subject: [all][qa] Gate failure for <= stable/train and Tempest master gate (Do not recheck) Message-ID: <1781217b005.10f2576cb155502.3684013520139333947@ghanshyammann.com> Hello Everyone, get-pip.py url for py2.7 has been changed which causing failure on stable/train or older branches and Tempest master gate. Thanks, Dan, Elod for fixing and backporting those. Please wait for the below fixes to merge and do not recheck. https://review.opendev.org/q/Id62e91b1609db4b1d2fa425010bac1ce77e9fc51 -gmann From smooney at redhat.com Mon Mar 8 13:53:58 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 08 Mar 2021 13:53:58 +0000 Subject: Cleanup database(s) In-Reply-To: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> Message-ID: On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: > Hi *, > > I have a quick question, last year we migrated our OpenStack to a > highly available environment through a reinstall of all nodes. The > migration went quite well, we're working happily in the new cloud but > the databases still contain deprecated data. For example, the > nova-scheduler logs lines like these on a regular basis: > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING > nova.scheduler.host_manager [...] 
No compute service record found for > host compute1 > > This is one of the old compute nodes that has been reinstalled and is > now compute01. I tried to find the right spot to delete some lines in > the DB but there are a couple of places so I wanted to check and ask > you for some insights. > > The scheduler messages seem to originate in > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py > > ---snip--- >          for cell_uuid, computes in compute_nodes.items(): >              for compute in computes: >                  service = services.get(compute.host) > >                  if not service: >                      LOG.warning( >                          "No compute service record found for host %(host)s", >                          {'host': compute.host}) >                      continue > ---snip--- > > So I figured it could be this table in the nova DB: > > ---snip--- > MariaDB [nova]> select host,deleted from compute_nodes; > +-----------+---------+ > > host | deleted | > +-----------+---------+ > > compute01 | 0 | > > compute02 | 0 | > > compute03 | 0 | > > compute04 | 0 | > > compute05 | 0 | > > compute1 | 0 | > > compute2 | 0 | > > compute3 | 0 | > > compute4 | 0 | > +-----------+---------+ > ---snip--- > > What would be the best approach here to clean up a little? I believe > it would be safe to simply purge those lines containing the old > compute node, but there might be a smoother way. Or maybe there are > more places to purge old data from? so the step you porably missed was deleting the old compute service records so you need to do openstack compute service list to get teh compute service ids then do openstack compute service delete ... you need to make sure that you only remvoe the unused old serivces but i think that would fix your issue. > > I'd appreciate any ideas. > > Regards, > Eugen > > From eblock at nde.ag Mon Mar 8 14:18:36 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 08 Mar 2021 14:18:36 +0000 Subject: Cleanup database(s) In-Reply-To: References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> Message-ID: <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> Thank you, Sean. > so you need to do > openstack compute service list to get teh compute service ids > then do > openstack compute service delete ... > > you need to make sure that you only remvoe the unused old serivces > but i think that would fix your issue. That's the thing, they don't show up in the compute service list. But I also found them in the resource_providers table, only the old compute nodes appear here: MariaDB [nova]> select name from nova_api.resource_providers; +--------------------------+ | name | +--------------------------+ | compute1.fqdn | | compute2.fqdn | | compute3.fqdn | | compute4.fqdn | +--------------------------+ Zitat von Sean Mooney : > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >> Hi *, >> >> I have a quick question, last year we migrated our OpenStack to a >> highly available environment through a reinstall of all nodes. The >> migration went quite well, we're working happily in the new cloud but >> the databases still contain deprecated data. For example, the >> nova-scheduler logs lines like these on a regular basis: >> >> /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING >> nova.scheduler.host_manager [...] No compute service record found for >> host compute1 >> >> This is one of the old compute nodes that has been reinstalled and is >> now compute01. 
I tried to find the right spot to delete some lines in >> the DB but there are a couple of places so I wanted to check and ask >> you for some insights. >> >> The scheduler messages seem to originate in >> >> /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >> >> ---snip--- >>          for cell_uuid, computes in compute_nodes.items(): >>              for compute in computes: >>                  service = services.get(compute.host) >> >>                  if not service: >>                      LOG.warning( >>                          "No compute service record found for host >> %(host)s", >>                          {'host': compute.host}) >>                      continue >> ---snip--- >> >> So I figured it could be this table in the nova DB: >> >> ---snip--- >> MariaDB [nova]> select host,deleted from compute_nodes; >> +-----------+---------+ >> > host | deleted | >> +-----------+---------+ >> > compute01 | 0 | >> > compute02 | 0 | >> > compute03 | 0 | >> > compute04 | 0 | >> > compute05 | 0 | >> > compute1 | 0 | >> > compute2 | 0 | >> > compute3 | 0 | >> > compute4 | 0 | >> +-----------+---------+ >> ---snip--- >> >> What would be the best approach here to clean up a little? I believe >> it would be safe to simply purge those lines containing the old >> compute node, but there might be a smoother way. Or maybe there are >> more places to purge old data from? > so the step you porably missed was deleting the old compute service records > > so you need to do > openstack compute service list to get teh compute service ids > then do > openstack compute service delete ... > > you need to make sure that you only remvoe the unused old serivces > but i think that would fix your issue. > >> >> I'd appreciate any ideas. >> >> Regards, >> Eugen >> >> From smooney at redhat.com Mon Mar 8 14:48:41 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 08 Mar 2021 14:48:41 +0000 Subject: Cleanup database(s) In-Reply-To: <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> Message-ID: On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: > Thank you, Sean. > > > so you need to do > > openstack compute service list to get teh compute service ids > > then do > > openstack compute service delete ... > > > > you need to make sure that you only remvoe the unused old serivces > > but i think that would fix your issue. > > That's the thing, they don't show up in the compute service list. But > I also found them in the resource_providers table, only the old > compute nodes appear here: > > MariaDB [nova]> select name from nova_api.resource_providers; > +--------------------------+ > > name | > +--------------------------+ > > compute1.fqdn | > > compute2.fqdn | > > compute3.fqdn | > > compute4.fqdn | > +--------------------------+ ah in that case the compute service delete is ment to remove the RPs too but if the RP had stale allcoation at teh time of the delete the RP delete will fail what you proably need to do in this case is check if the RPs still have allocations and if so verify that the allocation are owned by vms that nolonger exist. if that is the case you should be able to delete teh allcaotion and then the RP if the allocations are related to active vms that are now on the rebuild nodes then you will have to try and heal the allcoations. there is a openstack client extention called osc-placement that you can install to help. 
we also have a heal allcoation command in nova-manage that may help but the next step would be to validate if the old RPs are still in use or not. from there you can then work to align novas and placment view with the real toplogy. that could invovle removing the old compute nodes form the compute_nodes table or marking them as deleted but both nova db and plamcent need to be kept in sysnc to correct your current issue. > > > Zitat von Sean Mooney : > > > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: > > > Hi *, > > > > > > I have a quick question, last year we migrated our OpenStack to a > > > highly available environment through a reinstall of all nodes. The > > > migration went quite well, we're working happily in the new cloud but > > > the databases still contain deprecated data. For example, the > > > nova-scheduler logs lines like these on a regular basis: > > > > > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING > > > nova.scheduler.host_manager [...] No compute service record found for > > > host compute1 > > > > > > This is one of the old compute nodes that has been reinstalled and is > > > now compute01. I tried to find the right spot to delete some lines in > > > the DB but there are a couple of places so I wanted to check and ask > > > you for some insights. > > > > > > The scheduler messages seem to originate in > > > > > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py > > > > > > ---snip--- > > >          for cell_uuid, computes in compute_nodes.items(): > > >              for compute in computes: > > >                  service = services.get(compute.host) > > > > > >                  if not service: > > >                      LOG.warning( > > >                          "No compute service record found for host > > > %(host)s", > > >                          {'host': compute.host}) > > >                      continue > > > ---snip--- > > > > > > So I figured it could be this table in the nova DB: > > > > > > ---snip--- > > > MariaDB [nova]> select host,deleted from compute_nodes; > > > +-----------+---------+ > > > > host | deleted | > > > +-----------+---------+ > > > > compute01 | 0 | > > > > compute02 | 0 | > > > > compute03 | 0 | > > > > compute04 | 0 | > > > > compute05 | 0 | > > > > compute1 | 0 | > > > > compute2 | 0 | > > > > compute3 | 0 | > > > > compute4 | 0 | > > > +-----------+---------+ > > > ---snip--- > > > > > > What would be the best approach here to clean up a little? I believe > > > it would be safe to simply purge those lines containing the old > > > compute node, but there might be a smoother way. Or maybe there are > > > more places to purge old data from? > > so the step you porably missed was deleting the old compute service records > > > > so you need to do > > openstack compute service list to get teh compute service ids > > then do > > openstack compute service delete ... > > > > you need to make sure that you only remvoe the unused old serivces > > but i think that would fix your issue. > > > > > > > > I'd appreciate any ideas. 
> > > > > > Regards, > > > Eugen > > > > > > > > > From eblock at nde.ag Mon Mar 8 15:28:19 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 08 Mar 2021 15:28:19 +0000 Subject: Cleanup database(s) In-Reply-To: References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> Message-ID: <20210308152819.Horde.3O8lkYNu58IYePW7cpYmVqS@webmail.nde.ag> Hi, > there is a openstack client extention called osc-placement that you > can install to help. > we also have a heal allcoation command in nova-manage that may help > but the next step would be to validate > if the old RPs are still in use or not. from there you can then work > to align novas and placment view with > the real toplogy. I read about that in the docs, but there's no RPM for our distro (openSUSE), I guess we'll have to build it from source. > what you proably need to do in this case is check if the RPs still > have allocations and if so > verify that the allocation are owned by vms that nolonger exist. Is this the right place to look at? MariaDB [nova]> select count(*) from nova_api.allocations; +----------+ | count(*) | +----------+ | 263 | +----------+ MariaDB [nova]> select resource_provider_id,consumer_id from nova_api.allocations limit 10; +----------------------+--------------------------------------+ | resource_provider_id | consumer_id | +----------------------+--------------------------------------+ | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | | 1 | 339d0585-b671-4afa-918b-a772bfc36da8 | +----------------------+--------------------------------------+ MariaDB [nova]> select name,id from nova_api.resource_providers; +--------------------------+----+ | name | id | +--------------------------+----+ | compute1.fqdn | 3 | | compute2.fqdn | 1 | | compute3.fqdn | 2 | | compute4.fqdn | 4 | +--------------------------+----+ I only checked four of those consumer_id entries and all are existing VMs, I'll need to check all of them tomorrow. So I guess we should try to get the osc-placement tool running for us. Thanks, that already helped a lot! Eugen Zitat von Sean Mooney : > On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: >> Thank you, Sean. >> >> > so you need to do >> > openstack compute service list to get teh compute service ids >> > then do >> > openstack compute service delete ... >> > >> > you need to make sure that you only remvoe the unused old serivces >> > but i think that would fix your issue. >> >> That's the thing, they don't show up in the compute service list. 
But >> I also found them in the resource_providers table, only the old >> compute nodes appear here: >> >> MariaDB [nova]> select name from nova_api.resource_providers; >> +--------------------------+ >> > name | >> +--------------------------+ >> > compute1.fqdn | >> > compute2.fqdn | >> > compute3.fqdn | >> > compute4.fqdn | >> +--------------------------+ > ah in that case the compute service delete is ment to remove the RPs too > but if the RP had stale allcoation at teh time of the delete the RP > delete will fail > > what you proably need to do in this case is check if the RPs still > have allocations and if so > verify that the allocation are owned by vms that nolonger exist. > if that is the case you should be able to delete teh allcaotion and > then the RP > if the allocations are related to active vms that are now on the > rebuild nodes then you will have to try and > heal the allcoations. > > there is a openstack client extention called osc-placement that you > can install to help. > we also have a heal allcoation command in nova-manage that may help > but the next step would be to validate > if the old RPs are still in use or not. from there you can then work > to align novas and placment view with > the real toplogy. > > that could invovle removing the old compute nodes form the > compute_nodes table or marking them as deleted but > both nova db and plamcent need to be kept in sysnc to correct your > current issue. > >> >> >> Zitat von Sean Mooney : >> >> > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >> > > Hi *, >> > > >> > > I have a quick question, last year we migrated our OpenStack to a >> > > highly available environment through a reinstall of all nodes. The >> > > migration went quite well, we're working happily in the new cloud but >> > > the databases still contain deprecated data. For example, the >> > > nova-scheduler logs lines like these on a regular basis: >> > > >> > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING >> > > nova.scheduler.host_manager [...] No compute service record found for >> > > host compute1 >> > > >> > > This is one of the old compute nodes that has been reinstalled and is >> > > now compute01. I tried to find the right spot to delete some lines in >> > > the DB but there are a couple of places so I wanted to check and ask >> > > you for some insights. 
>> > > >> > > The scheduler messages seem to originate in >> > > >> > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >> > > >> > > ---snip--- >> > >          for cell_uuid, computes in compute_nodes.items(): >> > >              for compute in computes: >> > >                  service = services.get(compute.host) >> > > >> > >                  if not service: >> > >                      LOG.warning( >> > >                          "No compute service record found for host >> > > %(host)s", >> > >                          {'host': compute.host}) >> > >                      continue >> > > ---snip--- >> > > >> > > So I figured it could be this table in the nova DB: >> > > >> > > ---snip--- >> > > MariaDB [nova]> select host,deleted from compute_nodes; >> > > +-----------+---------+ >> > > > host | deleted | >> > > +-----------+---------+ >> > > > compute01 | 0 | >> > > > compute02 | 0 | >> > > > compute03 | 0 | >> > > > compute04 | 0 | >> > > > compute05 | 0 | >> > > > compute1 | 0 | >> > > > compute2 | 0 | >> > > > compute3 | 0 | >> > > > compute4 | 0 | >> > > +-----------+---------+ >> > > ---snip--- >> > > >> > > What would be the best approach here to clean up a little? I believe >> > > it would be safe to simply purge those lines containing the old >> > > compute node, but there might be a smoother way. Or maybe there are >> > > more places to purge old data from? >> > so the step you porably missed was deleting the old compute >> service records >> > >> > so you need to do >> > openstack compute service list to get teh compute service ids >> > then do >> > openstack compute service delete ... >> > >> > you need to make sure that you only remvoe the unused old serivces >> > but i think that would fix your issue. >> > >> > > >> > > I'd appreciate any ideas. >> > > >> > > Regards, >> > > Eugen >> > > >> > > >> >> >> From danilo_dellorto at it.ibm.com Mon Mar 8 16:19:26 2021 From: danilo_dellorto at it.ibm.com (DANILO PAOLO DELL'ORTO) Date: Mon, 8 Mar 2021 17:19:26 +0100 Subject: Openstack Cinder support matrix Message-ID: Hi, The Openstack Cinder driver support matrix (see link-1 below) lists the supported features for each storage and in the IBM Virtualize Family (SVC) section lacks support for 3 important features: Thin provisioning, Volume migration (storage assisted) and Active -Active high availability support. In the same site in section IBM Spectrum Virtualize volume driver (link-2 below), the functions described above seem to be supported and explained in more detail. Which of the two parts is true? Maybe the support matrix is out of date? best regards link-1 https://docs.openstack.org/cinder/latest/reference/support-matrix.html link-2 https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/ibm-storwize-svc-driver.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 9685 bytes Desc: not available URL: From gthiemonge at redhat.com Mon Mar 8 16:51:19 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Mon, 8 Mar 2021 17:51:19 +0100 Subject: [election][Octavia] PTL candidacy for Xena Message-ID: Hi everyone, I would like to propose my candidacy for Octavia PTL during the Xena cycle. I have been an Octavia contributor for 2 years, and a core reviewer for 2 releases. 
Since then, I have been working in many areas in Octavia: adding support for new protocols (SCTP), fixing CI jobs, improving the octavia-dashboard. My focus will be to continue the work that the team has accomplished these last years, particularly adding many new features in Octavia (active/active load balancers, leveraging features provided by HAProxy 2.0) and improving our CI coverage (restoring CentOS jobs). Thanks for your consideration, Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Mon Mar 8 17:00:53 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 8 Mar 2021 19:00:53 +0200 Subject: [TripleO] stable/rocky End Of Life - closed for new code Message-ID: hello TripleO FYI we have now merged https://review.opendev.org/c/openstack/releases/+/774244 which tags stable/rocky for tripleo repos as End Of Life. *** WE ARE NO LONGER ACCEPTING PATCHES *** for tripleo stable/rocky [1]. In due course the rocky branch will be removed from all tripleO repos, however this will not be possible if there are pending patches open against any of those [1]. So please avoid posting anything to tripleo stable/rocky and if you see a review posted please help by commenting and informing the author that rocky is closed for tripleo. thank you for reading and your help in this matter regards, marios [1] https://releases.openstack.org/teams/tripleo.html#rocky -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Mar 8 17:01:48 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 8 Mar 2021 18:01:48 +0100 Subject: Openstack Cinder support matrix In-Reply-To: References: Message-ID: Hello Danilo, yes, maybe. I contacted several storage vendors for a tender and seems the support matrix is not very accurate under the point of view of most of them . I suggest to contact the storage vendor. Ignazio Il giorno lun 8 mar 2021 alle ore 17:28 DANILO PAOLO DELL'ORTO < danilo_dellorto at it.ibm.com> ha scritto: > Hi, > > The Openstack Cinder driver support matrix (see link-1 below) lists the > supported features for each storage and in the IBM Virtualize Family (SVC) > section > lacks support for 3 important features: Thin provisioning, Volume > migration (storage assisted) and Active -Active high availability support. > In the same site in section IBM Spectrum Virtualize volume driver (link-2 > below), the functions described above seem to be supported > and explained in more detail. > Which of the two parts is true? > Maybe the support matrix is out of date? > > best regards > > link-1 > https://docs.openstack.org/cinder/latest/reference/support-matrix.html > link-2 > https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/ibm-storwize-svc-driver.html > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: noname Type: image/gif Size: 9685 bytes Desc: not available URL: From mdemaced at redhat.com Mon Mar 8 17:26:36 2021 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Mon, 8 Mar 2021 14:26:36 -0300 Subject: [election][kuryr] PTL Candidacy for Xena Message-ID: Greetings, I would like to continue serving as Kuryr PTL for the Xena cycle. I have been contributing to the OpenStack community since the Queens release and started serving as Kuryr PTL in the Wallaby cycle. 
It was a great opportunity to contribute back to the community as PTL and I would like to continue doing that. In wallaby we achieved the following goals: * Improved CI stability and fixed broken gates e.g. Network Policy e2e and OVN gates. * Added new testing scenarios for Services without selectors and Services with sctp. * Extended Kuryr functionalities - we started the dual-stack support and included the support for SCTP and Service without selectors. * Increased the contributor base with new contributors from the Outreachy program. For the next cycle, I propose the following goals for Kuryr: * Improve and extend CI: we already did great improvements, but we must continuously work on it to provide better and quicker feedback during the development process. As part of that, we plan to: update gates to start using Kubeadm and include a new cri-o gate. * Extend Kuryr functionalities: Support dual stack and Kubeadm for DevStack installations, and move Pools management to CRDs. * Continue growing the contributor base. Thanks, Maysa Macedo. IRC: maysams -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Mar 8 18:08:40 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 8 Mar 2021 10:08:40 -0800 Subject: [PTLs][All] vPTG April 2021 Team Signup Message-ID: Greetings! As you hopefully already know, our next PTG will be virtual again, and held from Monday, April 19 to Friday, April 23. We will have the same schedule set up available as last time with three windows of time spread across the day to cover all timezones with breaks in between. *To signup your team, you must complete **BOTH** the survey[1] AND reserve time in the ethercalc[2] by March 25 at 7:00 UTC.* We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions in with 4 rules/guidelines. 1. Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first. 2. No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. Again, you need to fill out BOTH the ethercalc AND the survey to complete your team's sign up. If you have any issues with signing up your team, due to conflict or otherwise, please let me know! While we are trying to empower you to make your own decisions as to when you meet and for how long (after all, you know your needs and teams timezones better than we do), we are here to help! Once your team is signed up, please register! And remind your team to register! Registration is free, but since it will be how we contact you with passwords, event details, etc. it is still important! Continue to check back for updates at openstack.org/ptg. -the Kendalls (diablo_rojo & wendallkaters) [1] Team Survey: https://openinfrafoundation.formstack.com/forms/april2021_vptg_survey [2] Ethercalc Signup: https://ethercalc.net/oz7q0gds9zfi [3] PTG Registration: https://april2021-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jimmy at openstack.org Mon Mar 8 18:47:02 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 8 Mar 2021 12:47:02 -0600 Subject: [community/MLs] Measuring ML Success Message-ID: Hi All - Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. Open to other ideas/thoughts :) Cheers! Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Mon Mar 8 19:21:58 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 9 Mar 2021 04:21:58 +0900 Subject: [election][tacker] PTL candidacy for Xena Message-ID: <39b6bce5-5b9e-9362-2136-a27ef05f46a5@gmail.com> Hi, I'd like to propose my candidacy for Tacker PTL in Xena cycle. In Wallaby release, we have released several features for the latest ETSI NFV standard while largely updating infras, such as moving to Ubuntu 20.04, supporting redhat distros again and dropping python2 completely for not only Tacker but also related projects such as tosca-parser or heat-translator or so [1]. In addition, we have fixed instability in unit and functional tests for which we have troubles several times. As Tacker PTL, I've led the team by proposing not only new features, but also things for driving the team such as documentation or bug tracking. In Xena cycle, I would like to continue to make Tacker be more useful product for users interested in NFV. I believe Tacker will be a good reference implementation for NFV standard. We have planed to make Tacker more feasible not only for VM environment, but also container to meet requirements from industries. - Continue to implement the latest container technology with ETSI NFV standard. - Introduce multi API versions to meet the requirements for operators enable to deploy mixed environment of multi-vendor products in which some products provide a stable version APIs while other products adopt move advanced ones. - Proceed to design and implement test framework under development in ETSI NFV TST to improve the quality of the product, not only unit tests and functional tests, but also introduce more sophisticated scheme such as robot framework. [1] https://docs.openstack.org/releasenotes/tacker/unreleased.html Regards, Yasufumi Ogawa From anost1986 at gmail.com Mon Mar 8 19:56:43 2021 From: anost1986 at gmail.com (Andrii Ostapenko) Date: Mon, 8 Mar 2021 13:56:43 -0600 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: Hi Jimmy, Stackalytics [0] currently tracks emails identifying and grouping them by author, company, module (with some success). 
To answer your challenge we'll need to add a grouping by thread, but still need some criteria to mark thread as possibly unanswered. E.g. threads having a single message or only messages from a single author, or if the last message in a thread contains a question mark. With no additional information marking thread as closed explicitly, all this will still be a guessing, producing candidates for unanswered threads. Thank you for bringing this up! [0] https://www.stackalytics.io/?metric=emails On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > Hi All - > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > Open to other ideas/thoughts :) > > Cheers! > Jimmy From eblock at nde.ag Mon Mar 8 20:57:35 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 08 Mar 2021 20:57:35 +0000 Subject: Cleanup database(s) In-Reply-To: <20210308152819.Horde.3O8lkYNu58IYePW7cpYmVqS@webmail.nde.ag> References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> <20210308152819.Horde.3O8lkYNu58IYePW7cpYmVqS@webmail.nde.ag> Message-ID: <20210308205735.Horde.RP7Fip19F6rkzcTUwX0p6ca@webmail.nde.ag> > I read about that in the docs, but there's no RPM for our distro > (openSUSE), I guess we'll have to build it from source. I should have read the docs more carefully, I installed the osc-placement plug-in on a test machine and will play around with the options. Thanks again! Zitat von Eugen Block : > Hi, > >> there is a openstack client extention called osc-placement that you >> can install to help. >> we also have a heal allcoation command in nova-manage that may help >> but the next step would be to validate >> if the old RPs are still in use or not. from there you can then >> work to align novas and placment view with >> the real toplogy. > > I read about that in the docs, but there's no RPM for our distro > (openSUSE), I guess we'll have to build it from source. > >> what you proably need to do in this case is check if the RPs still >> have allocations and if so >> verify that the allocation are owned by vms that nolonger exist. > > Is this the right place to look at? 
> > MariaDB [nova]> select count(*) from nova_api.allocations; > +----------+ > | count(*) | > +----------+ > | 263 | > +----------+ > > > MariaDB [nova]> select resource_provider_id,consumer_id from > nova_api.allocations limit 10; > +----------------------+--------------------------------------+ > | resource_provider_id | consumer_id | > +----------------------+--------------------------------------+ > | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | > | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | > | 3 | fce8f56e-e50b-47ef-bbf5-87b91336b2d4 | > | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | > | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | > | 3 | 67d95ce0-7902-40db-8ad7-ef0ce350bcb4 | > | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | > | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | > | 1 | 0caaebae-56a6-45d8-a486-f3294ab321e8 | > | 1 | 339d0585-b671-4afa-918b-a772bfc36da8 | > +----------------------+--------------------------------------+ > > MariaDB [nova]> select name,id from nova_api.resource_providers; > +--------------------------+----+ > | name | id | > +--------------------------+----+ > | compute1.fqdn | 3 | > | compute2.fqdn | 1 | > | compute3.fqdn | 2 | > | compute4.fqdn | 4 | > +--------------------------+----+ > > I only checked four of those consumer_id entries and all are > existing VMs, I'll need to check all of them tomorrow. So I guess we > should try to get the osc-placement tool running for us. > > Thanks, that already helped a lot! > > Eugen > > > Zitat von Sean Mooney : > >> On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: >>> Thank you, Sean. >>> >>>> so you need to do >>>> openstack compute service list to get teh compute service ids >>>> then do >>>> openstack compute service delete ... >>>> >>>> you need to make sure that you only remvoe the unused old serivces >>>> but i think that would fix your issue. >>> >>> That's the thing, they don't show up in the compute service list. But >>> I also found them in the resource_providers table, only the old >>> compute nodes appear here: >>> >>> MariaDB [nova]> select name from nova_api.resource_providers; >>> +--------------------------+ >>>> name | >>> +--------------------------+ >>>> compute1.fqdn | >>>> compute2.fqdn | >>>> compute3.fqdn | >>>> compute4.fqdn | >>> +--------------------------+ >> ah in that case the compute service delete is ment to remove the RPs too >> but if the RP had stale allcoation at teh time of the delete the RP >> delete will fail >> >> what you proably need to do in this case is check if the RPs still >> have allocations and if so >> verify that the allocation are owned by vms that nolonger exist. >> if that is the case you should be able to delete teh allcaotion and >> then the RP >> if the allocations are related to active vms that are now on the >> rebuild nodes then you will have to try and >> heal the allcoations. >> >> there is a openstack client extention called osc-placement that you >> can install to help. >> we also have a heal allcoation command in nova-manage that may help >> but the next step would be to validate >> if the old RPs are still in use or not. from there you can then >> work to align novas and placment view with >> the real toplogy. >> >> that could invovle removing the old compute nodes form the >> compute_nodes table or marking them as deleted but >> both nova db and plamcent need to be kept in sysnc to correct your >> current issue. 
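Roughly what I intend to try with the osc-placement plug-in once it is installed -- only a sketch for now, the provider and instance UUIDs below are placeholders and every step would be double-checked against our environment first:

  # list resource providers and identify the stale ones (old compute nodes)
  openstack resource provider list

  # see which consumers (instances) still hold allocations on an old provider
  # (may need a recent --os-placement-api-version for the --allocations output)
  openstack resource provider show --allocations <old-rp-uuid>

  # inspect (and, only if really orphaned, remove) a single instance's allocations
  openstack resource provider allocation show <instance-uuid>
  openstack resource provider allocation delete <instance-uuid>

  # let nova try to recreate allocations against the correct provider
  nova-manage placement heal_allocations --instance <instance-uuid>

  # once an old provider has no allocations left it should be removable
  openstack resource provider delete <old-rp-uuid>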
>> >>> >>> >>> Zitat von Sean Mooney : >>> >>>> On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >>>> > Hi *, >>>> > >>>> > I have a quick question, last year we migrated our OpenStack to a >>>> > highly available environment through a reinstall of all nodes. The >>>> > migration went quite well, we're working happily in the new cloud but >>>> > the databases still contain deprecated data. For example, the >>>> > nova-scheduler logs lines like these on a regular basis: >>>> > >>>> > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING >>>> > nova.scheduler.host_manager [...] No compute service record found for >>>> > host compute1 >>>> > >>>> > This is one of the old compute nodes that has been reinstalled and is >>>> > now compute01. I tried to find the right spot to delete some lines in >>>> > the DB but there are a couple of places so I wanted to check and ask >>>> > you for some insights. >>>> > >>>> > The scheduler messages seem to originate in >>>> > >>>> > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >>>> > >>>> > ---snip--- >>>> >          for cell_uuid, computes in compute_nodes.items(): >>>> >              for compute in computes: >>>> >                  service = services.get(compute.host) >>>> > >>>> >                  if not service: >>>> >                      LOG.warning( >>>> >                          "No compute service record found for host >>>> > %(host)s", >>>> >                          {'host': compute.host}) >>>> >                      continue >>>> > ---snip--- >>>> > >>>> > So I figured it could be this table in the nova DB: >>>> > >>>> > ---snip--- >>>> > MariaDB [nova]> select host,deleted from compute_nodes; >>>> > +-----------+---------+ >>>> > > host | deleted | >>>> > +-----------+---------+ >>>> > > compute01 | 0 | >>>> > > compute02 | 0 | >>>> > > compute03 | 0 | >>>> > > compute04 | 0 | >>>> > > compute05 | 0 | >>>> > > compute1 | 0 | >>>> > > compute2 | 0 | >>>> > > compute3 | 0 | >>>> > > compute4 | 0 | >>>> > +-----------+---------+ >>>> > ---snip--- >>>> > >>>> > What would be the best approach here to clean up a little? I believe >>>> > it would be safe to simply purge those lines containing the old >>>> > compute node, but there might be a smoother way. Or maybe there are >>>> > more places to purge old data from? >>>> so the step you porably missed was deleting the old compute >>>> service records >>>> >>>> so you need to do >>>> openstack compute service list to get teh compute service ids >>>> then do >>>> openstack compute service delete ... >>>> >>>> you need to make sure that you only remvoe the unused old serivces >>>> but i think that would fix your issue. >>>> >>>> > >>>> > I'd appreciate any ideas. >>>> > >>>> > Regards, >>>> > Eugen >>>> > >>>> > >>> >>> >>> From vhariria at redhat.com Mon Mar 8 21:15:49 2021 From: vhariria at redhat.com (Vida Haririan) Date: Mon, 8 Mar 2021 16:15:49 -0500 Subject: [Manila ] Bug squash happening next week starting Monday March 15th Message-ID: Hi all, We are planning a new Bug Squash event for next week. The event will be held from 15th through 18th March, 2021, providing plenty of time for all interested to participate. There will be a synchronous bug triage/review call held simultaneously on our Freenode channel #openstack-manila on 18th March, 2021 at 15:00 UTC and on this Jitsi bridge [1]. A list of selected bugs will be provided in advance here [2]. 
As always, please feel free to update the list with any bugs that you would like us to focus on during this period. Looking forward to you joining us for this event and many thanks in advance for your participation. :) Regards, Vida [1] https://meetpad.opendev.org/ManilaW-ReleaseBugSquash-II [2] https://ethercalc.openstack.org/hrs8m6sqpmaz -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Mon Mar 8 21:45:25 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Mon, 8 Mar 2021 18:45:25 -0300 Subject: [cinder] Bug deputy report for week of 2021-03-01 Message-ID: This is a bug report from 2021-03-03 to 2021-03-08. Looks like we have a lot of things going on. Some of these bugs were discussed at the Cinder meeting last Wednesday 2021-02-24. Critical: - High: - https://bugs.launchpad.net/cinder/+bug/1916980: "Cinder sends old db object when delete an attachment ''. Assigned to Gorka Eguileor (gorka). - https://bugs.launchpad.net/cinder/+bug/1917450: "Automatic quota refresh counting twice migrating volumes". Assigned to Gorka Eguileor (gorka). - https://bugs.launchpad.net/cinder/+bug/1917287: "Ambiguous error logs for retype of inspur storage volume". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917353: "image_conversion_cpu_limit and image_conversion_address_space_limit settings ignored in /etc/cinder/cinder.conf". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917574: "Cannot show volumes with name for non-admins". Assigned to Rajat Dhasmana. Medium: - https://bugs.launchpad.net/cinder/+bug/1918099: " Nimble revert to snapshot bug". Assigned to Ajitha Robert (ajitharobert01). Low: - https://bugs.launchpad.net/cinder/+bug/1917293: "Scheduling is not even among multiple thin provisioning pools which have different sizes". Unassigned. Incomplete: - https://bugs.launchpad.net/cinder/+bug/1916843: " Backup create failed: RBD volume flatten too long causing mq to timed out." Unassigned. Undecided/Unconfirmed: - https://bugs.launchpad.net/cinder/+bug/1918119: "Volume backup timeout for large volumes". Assigned to Kiran Pawar (kiranpawar89). - https://bugs.launchpad.net/cinder/+bug/1918102: "Cinder-backup progress notification has incorrect percentage." Assigned to Jon Cui (czl389). - https://bugs.launchpad.net/cinder/+bug/1917797: "Cinder request to glance does not support TLS". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917795: "Cinder ignores reader role conventions in default policies". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917750: " Running parallel iSCSI/LVM c-vol backends is causing random failures in CI". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1917605: "Bulk create Hyperswap volume is failing". Assigned to Girish Chilukur. Not a bug:- Feel free to reply/reach me if I missed something. Regards Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Mon Mar 8 21:59:21 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 8 Mar 2021 13:59:21 -0800 Subject: [election][designate] PTL candidacy for Xena Message-ID: Hello OpenStack community, I would like to announce my candidacy for PTL of Designate for the Xena cycle. The Wallaby release cycle seems like it went by very quickly and I would like to continue to support the Designate team during the Xena release. 
Thank you for your support and your consideration for Xena, Michael Johnson (johnsom) From gmann at ghanshyammann.com Mon Mar 8 23:24:55 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 08 Mar 2021 17:24:55 -0600 Subject: [all][qa] Gate failure for <= stable/train and Tempest master gate (Do not recheck) In-Reply-To: <1781217b005.10f2576cb155502.3684013520139333947@ghanshyammann.com> References: <1781217b005.10f2576cb155502.3684013520139333947@ghanshyammann.com> Message-ID: <178142866c5.e2a5adb3182208.8713510303708224396@ghanshyammann.com> ---- On Mon, 08 Mar 2021 07:47:25 -0600 Ghanshyam Mann wrote ---- > Hello Everyone, > > get-pip.py url for py2.7 has been changed which causing failure on stable/train or older > branches and Tempest master gate. > > Thanks, Dan, Elod for fixing and backporting those. Please wait for the below fixes to merge > and do not recheck. > > https://review.opendev.org/q/Id62e91b1609db4b1d2fa425010bac1ce77e9fc51 All fixes are merged, you can recheck now. -gmann > > -gmann > > From kennelson11 at gmail.com Tue Mar 9 00:55:17 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 8 Mar 2021 16:55:17 -0800 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations Last Days Message-ID: Hello All! A quick reminder that we are in the last hours for declaring PTL and TC candidacies. Nominations are open until Mar 09, 2021 23:45 UTC. If you want to stand for election, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Election statistics[2]: Nominations started @ 2021-03-02 23:45:00 UTC Nominations end @ 2021-03-09 23:45:00 UTC Nominations duration : 7 days, 0:00:00 Nominations remaining : 1 day, 0:48:52 Nominations progress : 85.23% --------------------------------------------------- Projects[1] : 49 Projects with candidates : 22 ( 44.90%) Projects with election : 0 ( 0.00%) --------------------------------------------------- Need election : 0 () Need appointment : 27 (Adjutant Barbican Blazar Cinder Cloudkitty Cyborg Designate Horizon Keystone Kolla Magnum Masakari Mistral Monasca OpenStackAnsible OpenStack_Charms Openstack_Chef Quality_Assurance Rally Requirements Senlin Storlets Swift Telemetry Vitrage Zaqar Zun) =================================================== Stats gathered @ 2021-03-08 22:56:08 UTC This means that with approximately 2 days left, 27 projects will be deemed leaderless. In this case the TC will oversee PTL selection as described by [3]. Thank you, -Kendall Nelson (diablo_rojo) & the Election Officials [1] https://governance.openstack.org/election/#how-to-submit-a-candidacy [2] Any open reviews at https://review.openstack.org/#/q/is:open+project:openstack/election have not been factored into these stats. [3] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html __________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Mar 9 00:55:30 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 8 Mar 2021 16:55:30 -0800 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: Just an FYI, stackalytics hasn't updated since January, so it's not going to be a good, current, source of information. 
Michael On Mon, Mar 8, 2021 at 12:00 PM Andrii Ostapenko wrote: > > Hi Jimmy, > > Stackalytics [0] currently tracks emails identifying and grouping them > by author, company, module (with some success). > To answer your challenge we'll need to add a grouping by thread, but > still need some criteria to mark thread as possibly unanswered. E.g. > threads having a single message or only messages from a single author, > or if the last message in a thread contains a question mark. > > With no additional information marking thread as closed explicitly, > all this will still be a guessing, producing candidates for unanswered > threads. > > Thank you for bringing this up! > > [0] https://www.stackalytics.io/?metric=emails > > > On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > > > Hi All - > > > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > > > Open to other ideas/thoughts :) > > > > Cheers! > > Jimmy > From anost1986 at gmail.com Tue Mar 9 01:08:23 2021 From: anost1986 at gmail.com (Andrii Ostapenko) Date: Mon, 8 Mar 2021 19:08:23 -0600 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: Michael, https://www.stackalytics.io is updated daily and maintained. On Mon, Mar 8, 2021 at 6:55 PM Michael Johnson wrote: > > Just an FYI, stackalytics hasn't updated since January, so it's not > going to be a good, current, source of information. > > Michael > > On Mon, Mar 8, 2021 at 12:00 PM Andrii Ostapenko wrote: > > > > Hi Jimmy, > > > > Stackalytics [0] currently tracks emails identifying and grouping them > > by author, company, module (with some success). > > To answer your challenge we'll need to add a grouping by thread, but > > still need some criteria to mark thread as possibly unanswered. E.g. > > threads having a single message or only messages from a single author, > > or if the last message in a thread contains a question mark. > > > > With no additional information marking thread as closed explicitly, > > all this will still be a guessing, producing candidates for unanswered > > threads. > > > > Thank you for bringing this up! > > > > [0] https://www.stackalytics.io/?metric=emails > > > > > > On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > > > > > Hi All - > > > > > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. 
However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > > > > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > > > > > Open to other ideas/thoughts :) > > > > > > Cheers! > > > Jimmy > > From johnsomor at gmail.com Tue Mar 9 01:17:05 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 8 Mar 2021 17:17:05 -0800 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: Oh, nice. I missed the memo on the URL change. Thanks, Michael On Mon, Mar 8, 2021 at 5:08 PM Andrii Ostapenko wrote: > > Michael, > > https://www.stackalytics.io is updated daily and maintained. > > On Mon, Mar 8, 2021 at 6:55 PM Michael Johnson wrote: > > > > Just an FYI, stackalytics hasn't updated since January, so it's not > > going to be a good, current, source of information. > > > > Michael > > > > On Mon, Mar 8, 2021 at 12:00 PM Andrii Ostapenko wrote: > > > > > > Hi Jimmy, > > > > > > Stackalytics [0] currently tracks emails identifying and grouping them > > > by author, company, module (with some success). > > > To answer your challenge we'll need to add a grouping by thread, but > > > still need some criteria to mark thread as possibly unanswered. E.g. > > > threads having a single message or only messages from a single author, > > > or if the last message in a thread contains a question mark. > > > > > > With no additional information marking thread as closed explicitly, > > > all this will still be a guessing, producing candidates for unanswered > > > threads. > > > > > > Thank you for bringing this up! > > > > > > [0] https://www.stackalytics.io/?metric=emails > > > > > > > > > On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > > > > > > > Hi All - > > > > > > > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > > > > > > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > > > > > > > Open to other ideas/thoughts :) > > > > > > > > Cheers! 
> > > > Jimmy > > >

From whayutin at redhat.com Tue Mar 9 01:42:46 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 8 Mar 2021 18:42:46 -0700 Subject: [TripleO] stable/rocky End Of Life - closed for new code In-Reply-To: References: Message-ID:

Nice work Marios!! Thank you

On Mon, Mar 8, 2021 at 10:03 AM Marios Andreou wrote: > hello TripleO > > FYI we have now merged > https://review.opendev.org/c/openstack/releases/+/774244 which tags > stable/rocky for tripleo repos as End Of Life. > > *** WE ARE NO LONGER ACCEPTING PATCHES *** for tripleo stable/rocky [1]. > > In due course the rocky branch will be removed from all tripleO repos, > however this will not be possible if there are pending patches open against > any of those [1]. > > So please avoid posting anything to tripleo stable/rocky and if you see a > review posted please help by commenting and informing the author that rocky > is closed for tripleo. > > thank you for reading and your help in this matter > > regards, marios > > [1] https://releases.openstack.org/teams/tripleo.html#rocky > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From anlin.kong at gmail.com Tue Mar 9 04:19:14 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Tue, 9 Mar 2021 17:19:14 +1300 Subject: [election][trove] PTL candidacy for Xena Message-ID:

Hi community,

I am once again very glad to announce my PTL candidacy for Trove, the database-as-a-service in OpenStack.

I have served as Trove PTL since the Train release and I spend more than 50% of my working time on this project. I'm so lucky to work for a company that built a public cloud based on OpenStack and has been investing heavily in Open Source since day one. We also deploy Trove in our production cloud, we report bugs and fix them upstream, we don't maintain private changes, and we keep growing together with the community.

--- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) -------------- next part -------------- An HTML attachment was scrubbed... URL:

From manchandavishal143 at gmail.com Tue Mar 9 04:41:18 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Tue, 9 Mar 2021 10:11:18 +0530 Subject: [election][horizon] PTL candidacy for Xena Message-ID:

Hi everyone,

I would like to announce my candidacy for PTL of Horizon for the Xena release.

I have been actively contributing to Horizon since the Stein release [1] and became a Horizon core reviewer early in the Train release. During these years, my focus has been on filling Horizon feature gaps [2], bug fixing, and stabilizing Horizon and its plugins. I am very grateful to all the people who have mentored me and helped me throughout these years.

As PTL I will focus on the following areas:
* Migrate Horizon & its plugins to the next LTS version of both Django and Node.js.
* Focus on a specific set of feature gaps instead of targeting all or many at a time. For example, 'System Scope Support' is one of the highest priorities we should address in the Xena cycle.
* Reduce the number of New/Open bugs, which is currently high. This is something we can do on a weekly rotation basis or together in the meeting, etc.
* Help new contributors to work on Horizon.

I am looking forward to working together with all of you on the Xena release.
Thank you, Vishal Manchanda(irc: vishalmanchanda) [1] https://www.stackalytics.io/?metric=commits&release=all&user_id=vishalmanchanda [2] https://etherpad.opendev.org/p/horizon-feature-gap -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Mar 9 05:13:00 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 9 Mar 2021 00:13:00 -0500 Subject: [election][cinder] PTL candidacy for Xena Message-ID: Hello everyone, I'd like to announce my candidacy for Cinder PTL for the Xena cycle. The primary challenge we face in the Cinder community is that our reviewing bandwidth has declined at the same time third-party driver contributions--new drivers, new features for existing drivers, and driver bugfixes--have been increasing. Similarly, we've been adding better CI coverage (good) while at the same time our gate jobs have become increasingly unstable (not due to the new tests, there are some old failures which seem to be occurring more often). We need to add some new core reviewers in Xena. Luckily, some people have been increasing their review participation recently, so there are community members getting themselves into a position to help the project in this capacity. (And anyone else currently working on cinder project who's interested becoming a cinder core, please contact me (or any of the current cores) to discuss what the expectations are.) We'll also be making it a priority to improve the gate jobs (or else we'll never be able to get anything merged). As far as community activity goes, the multiple virtual mid-cycle meetings have continued to be productive, and the cinder meeting in videoconference that we've been having once a month seems popular and gives us a break from the strict IRC meeting format. There's support to make the Festival of XS Reviews a recurring event, and Sofia has proposed holding a separate (short) bug meeting which we'll start doing soon. Hopefully all these events will help keep current contributors engaged and make it easier for other people to participate more fully. With respect to the details of Cinder development in Xena, I expect those to emerge from our virtual PTG discussions in April. You can help set the agenda here: https://etherpad.opendev.org/p/xena-ptg-cinder-planning Thanks for reading this far, and thank you for your consideration. Brian Rosmaita (rosmaita) https://review.opendev.org/c/openstack/election/+/779425 From chris.macnaughton at canonical.com Tue Mar 9 07:32:09 2021 From: chris.macnaughton at canonical.com (Chris MacNaughton) Date: Tue, 9 Mar 2021 08:32:09 +0100 Subject: [election][charms] PTL candidacy for Xena Message-ID: <44ee559d-09bb-8376-975a-b58be563a11a@canonical.com> Hello all, I would like to announce my candidacy for PTL of the OpenStack Charms project[1] for the Xena cycle. Through my time contributing to the OpenStack Charms project as a core team member, I have experienced working on many of the charms in both a bug-fix and new feature capacity. Additionally, I have made upstream contributions as needed. The OpenStack Charms project has an increasing community both consuming the charms, as well as contributing to them, and I think that it is important to actively nurture this community. In addition to our user community, it is important for OpenStack Charms members to integrate more widely in the wider OpenStack community. 
[1]: https://review.openstack.org/#/c/641571/

-- Chris MacNaughton -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL:

From themasch at gmx.net Tue Mar 9 07:57:19 2021 From: themasch at gmx.net (MaSch) Date: Tue, 9 Mar 2021 08:57:19 +0100 Subject: How to Handle RabbitMQ peak after restarting neutron-openvswitch-agent Message-ID: <63b3a5ef-c8e4-d03e-0b0b-0b0b3eb92e35@gmx.net>

Hello all.

After adding a new directly attached VLAN to our compute nodes and restarting neutron-openvswitch-agent, we detected a huge peak in RabbitMQ messages, mostly "neutron-vo-Port-1.1_fanout"; every node seems to have produced 5000+ messages. Due to a bridge_mapping mismatch, the agent repeatedly restarted and produced more and more messages. At the peak there were about 2 million messages in the queue.

As we ran into network issues caused by messaging timeouts, I would like to know if there is a procedure for handling these messages. Is it safe to delete the queue and those messages? It seems they disappeared after a timeout period of about 30 minutes. We are currently using the OpenStack Queens release.

Thanks a lot in advance.

Best regards, MaSch

From eblock at nde.ag Tue Mar 9 09:20:54 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 09 Mar 2021 09:20:54 +0000 Subject: Cleanup database(s) In-Reply-To: References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> Message-ID: <20210309092054.Horde.DdCSVbCMqVUJJVkuZ0mBqOK@webmail.nde.ag>

Hi again,

I just wanted to get some clarification on how to proceed.

> what you proably need to do in this case is check if the RPs still > have allocations and if so > verify that the allocation are owned by vms that nolonger exist. > if that is the case you should be able to delete teh allcaotion and > then the RP > if the allocations are related to active vms that are now on the > rebuild nodes then you will have to try and > heal the allcoations.

I checked all allocations for the old compute nodes, those are all existing VMs. So simply deleting the allocations won't do any good, I guess. From [1] I understand that I should overwrite all allocations (we're on Train so there's no "unset" available yet) for those VMs to point to the new compute nodes (resource_providers). After that I should delete the resource providers, correct? I ran "heal_allocations" for one uncritical instance, but it didn't have any visible effect, the allocations still show one of the old compute nodes. What I haven't tried yet is to delete allocations for an instance and then try to heal it as the docs also mention.

Do I understand that correctly or am I still missing something?

Regards, Eugen

[1] https://docs.openstack.org/nova/latest/admin/troubleshooting/orphaned-allocations.html

Zitat von Sean Mooney : > On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: >> Thank you, Sean. >> >> > so you need to do >> > openstack compute service list to get teh compute service ids >> > then do >> > openstack compute service delete ... >> > >> > you need to make sure that you only remvoe the unused old serivces >> > but i think that would fix your issue. >> >> That's the thing, they don't show up in the compute service list.
But >> I also found them in the resource_providers table, only the old >> compute nodes appear here: >> >> MariaDB [nova]> select name from nova_api.resource_providers; >> +--------------------------+ >> > name | >> +--------------------------+ >> > compute1.fqdn | >> > compute2.fqdn | >> > compute3.fqdn | >> > compute4.fqdn | >> +--------------------------+ > ah in that case the compute service delete is ment to remove the RPs too > but if the RP had stale allcoation at teh time of the delete the RP > delete will fail > > what you proably need to do in this case is check if the RPs still > have allocations and if so > verify that the allocation are owned by vms that nolonger exist. > if that is the case you should be able to delete teh allcaotion and > then the RP > if the allocations are related to active vms that are now on the > rebuild nodes then you will have to try and > heal the allcoations. > > there is a openstack client extention called osc-placement that you > can install to help. > we also have a heal allcoation command in nova-manage that may help > but the next step would be to validate > if the old RPs are still in use or not. from there you can then work > to align novas and placment view with > the real toplogy. > > that could invovle removing the old compute nodes form the > compute_nodes table or marking them as deleted but > both nova db and plamcent need to be kept in sysnc to correct your > current issue. > >> >> >> Zitat von Sean Mooney : >> >> > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >> > > Hi *, >> > > >> > > I have a quick question, last year we migrated our OpenStack to a >> > > highly available environment through a reinstall of all nodes. The >> > > migration went quite well, we're working happily in the new cloud but >> > > the databases still contain deprecated data. For example, the >> > > nova-scheduler logs lines like these on a regular basis: >> > > >> > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING >> > > nova.scheduler.host_manager [...] No compute service record found for >> > > host compute1 >> > > >> > > This is one of the old compute nodes that has been reinstalled and is >> > > now compute01. I tried to find the right spot to delete some lines in >> > > the DB but there are a couple of places so I wanted to check and ask >> > > you for some insights. 
>> > > >> > > The scheduler messages seem to originate in >> > > >> > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >> > > >> > > ---snip--- >> > >          for cell_uuid, computes in compute_nodes.items(): >> > >              for compute in computes: >> > >                  service = services.get(compute.host) >> > > >> > >                  if not service: >> > >                      LOG.warning( >> > >                          "No compute service record found for host >> > > %(host)s", >> > >                          {'host': compute.host}) >> > >                      continue >> > > ---snip--- >> > > >> > > So I figured it could be this table in the nova DB: >> > > >> > > ---snip--- >> > > MariaDB [nova]> select host,deleted from compute_nodes; >> > > +-----------+---------+ >> > > > host | deleted | >> > > +-----------+---------+ >> > > > compute01 | 0 | >> > > > compute02 | 0 | >> > > > compute03 | 0 | >> > > > compute04 | 0 | >> > > > compute05 | 0 | >> > > > compute1 | 0 | >> > > > compute2 | 0 | >> > > > compute3 | 0 | >> > > > compute4 | 0 | >> > > +-----------+---------+ >> > > ---snip--- >> > > >> > > What would be the best approach here to clean up a little? I believe >> > > it would be safe to simply purge those lines containing the old >> > > compute node, but there might be a smoother way. Or maybe there are >> > > more places to purge old data from? >> > so the step you porably missed was deleting the old compute >> service records >> > >> > so you need to do >> > openstack compute service list to get teh compute service ids >> > then do >> > openstack compute service delete ... >> > >> > you need to make sure that you only remvoe the unused old serivces >> > but i think that would fix your issue. >> > >> > > >> > > I'd appreciate any ideas. >> > > >> > > Regards, >> > > Eugen >> > > >> > > >> >> >> From mark at stackhpc.com Tue Mar 9 09:53:07 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 9 Mar 2021 09:53:07 +0000 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: On Tue, 9 Mar 2021 at 01:17, Michael Johnson wrote: > > Oh, nice. I missed the memo on the URL change. If stackalytics.com is less well maintained, should it be abandoned? Or redirect to stackalytics.io? Mark > > Thanks, > Michael > > On Mon, Mar 8, 2021 at 5:08 PM Andrii Ostapenko wrote: > > > > Michael, > > > > https://www.stackalytics.io is updated daily and maintained. > > > > On Mon, Mar 8, 2021 at 6:55 PM Michael Johnson wrote: > > > > > > Just an FYI, stackalytics hasn't updated since January, so it's not > > > going to be a good, current, source of information. > > > > > > Michael > > > > > > On Mon, Mar 8, 2021 at 12:00 PM Andrii Ostapenko wrote: > > > > > > > > Hi Jimmy, > > > > > > > > Stackalytics [0] currently tracks emails identifying and grouping them > > > > by author, company, module (with some success). > > > > To answer your challenge we'll need to add a grouping by thread, but > > > > still need some criteria to mark thread as possibly unanswered. E.g. > > > > threads having a single message or only messages from a single author, > > > > or if the last message in a thread contains a question mark. > > > > > > > > With no additional information marking thread as closed explicitly, > > > > all this will still be a guessing, producing candidates for unanswered > > > > threads. > > > > > > > > Thank you for bringing this up! 
> > > > > > > > [0] https://www.stackalytics.io/?metric=emails > > > > > > > > > > > > On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur wrote: > > > > > > > > > > Hi All - > > > > > > > > > > Over the last six months or so, we've had feedback from people that feel their questions die on the ML or that are missing ask.openstack.org. I don't think we should open up the ask.openstack.org can of worms, by any means. However, I wanted to find out if there was any software out there we could use to track metrics on which questions go unanswered on the ML. Everything I've found is very focused on email marketing, which is not what we're after. Would love to try to get some numbers on individuals that are trying to reach out to the ML, but just aren't getting through to anyone. > > > > > > > > > > Assuming we get that, I feel like it would be an easy next step to do a monthly or bi-monthly check to reach out to these potential new contributors. I realize our community is busy and people are, by and large, volunteering their time to answer these questions. But as hard as that is, it's also tough to pose new questions to a community you're unfamiliar with and then hear crickets. > > > > > > > > > > Open to other ideas/thoughts :) > > > > > > > > > > Cheers! > > > > > Jimmy > > > > > From mark at stackhpc.com Tue Mar 9 10:48:15 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 9 Mar 2021 10:48:15 +0000 Subject: [election][kolla] PTL candidacy for Xena Message-ID: Hi, I'd like to nominate myself to serve as the Kolla PTL for the Xena cycle. I have been PTL for the last 4 cycles, and would like the opportunity to continue to lead the team. Overall I think that the project is moving in the right direction. Some things that I think we should focus on in the next cycle are: * remove our dependency on Dockerhub in CI testing to avoid pull limits * improve the documentation * expand the core team Thanks for reading, Mark Goddard (mgoddard) From tkajinam at redhat.com Tue Mar 9 10:48:22 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 9 Mar 2021 19:48:22 +0900 Subject: [election][storlets] PTL Candidacy for Xena Message-ID: Hi All, I'd like to announce my candidacy to continue the PTL role for Storlets project in the Xena cycle. Because we currently don't have very active development in Storlets project, I'll propose we mainly focus on the following two items during the next cycle. - Improve stability of our code - Keep compatibility with the latest items - Latest Swift - Latest Ubuntu LTS In addition, I'll initiate some discussion about our future release and management model, considering decreasing resources for reviews and developments. Thank you for your consideration. Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Mar 9 11:17:08 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 9 Mar 2021 12:17:08 +0100 Subject: [election][blazar] PTL candidacy for Xena Message-ID: Hi, I would like to self-nominate for the role of PTL of Blazar for the Xena release cycle. I have been PTL since the Stein cycle and I am willing to continue in this role. I will keep working with existing contributors to maintain the Blazar software for their use cases and encourage the addition of new functionalities. 
Thank you for your support, Pierre Riteau (priteau) From balazs.gibizer at est.tech Tue Mar 9 11:25:21 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 09 Mar 2021 12:25:21 +0100 Subject: [nova][placement] Xena PTG Message-ID: <929PPQ.V3OOB6GJ47RA3@est.tech> Hi, As you probably already know that the next PTG will be held between Apr 19 - 23. To organize the gathering I need your help: 1) Please fill the doodle[1] with timeslots when you have time to join to our sessions. Please do this before end of 21st of March. 2) Please add your PTG topic in the etherpad[2]. If you feel your topic needs a cross project cooperation please note that in the etherpad which other teams are needed. Cheers, gibi [1] https://doodle.com/poll/ib2eu3c4346iqii3 [2] https://etherpad.opendev.org/p/nova-xena-ptg From thierry at openstack.org Tue Mar 9 11:39:57 2021 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 9 Mar 2021 12:39:57 +0100 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: Message-ID: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> We got two similar release processing failures: > - tag-releases https://zuul.opendev.org/t/openstack/build/707bb50b4bf645f8903d8ca59b43abbf : FAILURE in 2m 31s This one was an error running the tag-releases post-merge job after https://review.opendev.org/c/openstack/releases/+/779217 merged. It failed with: Running: git push --dry-run ssh://release at review.openstack.org:29418/openstack/tripleo-ipsec.git --all ssh://release at review.openstack.org:29418/openstack/tripleo-ipsec.git did not work. Description: Host key verification failed. > - tag-releases https://zuul.opendev.org/t/openstack/build/655e8980a53c4e829de25f0b8507b5f6 : FAILURE in 2m 57s Similar failure while running the tag-releases post-merge job after https://review.opendev.org/c/openstack/releases/+/779218 merged. Since this "host key verification failed" error hit tripleo-ipsec three times in a row, I suspect we have something stuck (or we always hit the same cache/worker). -- Thierry Carrez (ttx) From christian.rohmann at inovex.de Tue Mar 9 13:17:16 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Tue, 9 Mar 2021 14:17:16 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de> <5C651C9C-0D00-4CB8-9992-4AC23D92FE38@gmail.com> Message-ID: <2a893395-8af6-5fdf-cf5f-303b8bb1394b@inovex.de> Hello again, I just ran into the openstack-resource-manager (https://github.com/SovereignCloudStack/openstack-resource-manager) apparently part of the SovereignCloudStack - SCS effort out of the Gaia-X initiative. That tool seems to solve once again the need to clean up prior to a projects deletion. 
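For completeness, the "native" route would be the cleanup support that ships with the client tooling itself -- roughly something like the following (only a sketch; this assumes a python-openstackclient new enough to provide these commands, the set of resource types each of them covers differs, and a dry run first is advisable):

  # long-standing built-in purge (project plus a subset of its resources)
  openstack project purge --dry-run --project <project-id>

  # newer SDK-based cleanup, if available in your client version
  openstack project cleanup --dry-run --project <project-id>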
Regards

Christian

From artem.goncharov at gmail.com Tue Mar 9 13:23:47 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 9 Mar 2021 14:23:47 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: <2a893395-8af6-5fdf-cf5f-303b8bb1394b@inovex.de> References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de> <5C651C9C-0D00-4CB8-9992-4AC23D92FE38@gmail.com> <2a893395-8af6-5fdf-cf5f-303b8bb1394b@inovex.de> Message-ID: <095A8B69-9EDC-4AF7-90AC-D16B0C484361@gmail.com>

Hi,

This is just a tiny subset of OpenStack resources, with no possible flexibility compared to the native implementation in the SDK/CLI.

Artem

> On 9. Mar 2021, at 14:17, Christian Rohmann wrote: > > Hello again, > > I just ran into the openstack-resource-manager (https://github.com/SovereignCloudStack/openstack-resource-manager) > apparently part of the SovereignCloudStack - SCS effort out of the Gaia-X initiative. > > That tool seems to solve once again the need to clean up prior to a projects deletion. > > > Regards > > > Christian > > >

From smooney at redhat.com Tue Mar 9 13:43:41 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 09 Mar 2021 13:43:41 +0000 Subject: Cleanup database(s) In-Reply-To: <20210309092054.Horde.DdCSVbCMqVUJJVkuZ0mBqOK@webmail.nde.ag> References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> <20210309092054.Horde.DdCSVbCMqVUJJVkuZ0mBqOK@webmail.nde.ag> Message-ID:

On Tue, 2021-03-09 at 09:20 +0000, Eugen Block wrote: > Hi again, > > I just wanted to get some clarification on how to proceed. > > > what you proably need to do in this case is check if the RPs still > > have allocations and if so > > verify that the allocation are owned by vms that nolonger exist. > > if that is the case you should be able to delete teh allcaotion and > > then the RP > > if the allocations are related to active vms that are now on the > > rebuild nodes then you will have to try and > > heal the allcoations. > > I checked all allocations for the old compute nodes, those are all > existing VMs. So simply deleting the allocations won't do any good, I > guess. From [1] I understand that I should overwrite all allocations > (we're on Train so there's no "unset" available yet) for those VMs to > point to the new compute nodes (resource_providers). After that I > should delete the resource providers, correct? > I ran "heal_allocations" for one uncritical instance, but it didn't > have any visible effect, the allocations still show one of the old > compute nodes. > What I haven't tried yet is to delete allocations for an instance and > then try to heal it as the docs also mention. > > Do I understand that correctly or am I still missing something?

I think the problem is that you reinstalled the cloud with existing instances and changed the hostnames of the compute nodes, which is not a supported operation (specifically, changing the hostname of a compute node that has VMs on it is not supported). Doing so would cause new compute services to be created for the new compute nodes and new RPs to be created in placement. The existing instances, however, would still have their allocations on the old RPs, and the old hostnames would still be set in instance.host. Can you confirm that?
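One quick way to check (just a sketch, adjust the database and CLI invocations to your deployment) is to compare the hostnames nova has recorded on the running instances with the compute services and hypervisors that exist now:

  # hostnames recorded on the instances themselves
  MariaDB [nova]> select host, node, count(*) from instances where deleted = 0 group by host, node;

  # compute services and hypervisors that exist today
  openstack compute service list --service nova-compute
  openstack hypervisor list

If the instances still reference the old compute1/compute2/... names while only the compute01/compute02/... services exist, that confirms the mismatch.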
In this case you don't actually have orphaned allocations exactly; you have allocations against the incorrect RP. But if instance.host does not match the hypervisor hostname the instance is actually on, then heal_allocations will not be able to fix that.

Looking at your original message, you said "last year we migrated our OpenStack to a highly available environment through a reinstall of all nodes". I had assumed you had no instances from the original environment with the old names. If you did have existing instances with the old names, then you would have had to ensure the host names did not change in order to do that correctly without breaking the resource tracking in nova.

Can you clarify those points? E.g. were all the workloads removed before the reinstall? If not, did the host names change? That is a harder problem to fix unless you can restore the old host names, but I suspect you have likely booted new VMs if this environment has been running for a year.

> > Regards, > Eugen > > > [1] > https://docs.openstack.org/nova/latest/admin/troubleshooting/orphaned-allocations.html > > Zitat von Sean Mooney : > > > On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: > > > Thank you, Sean. > > > > > > > so you need to do > > > > openstack compute service list to get teh compute service ids > > > > then do > > > > openstack compute service delete ... > > > > > > > > you need to make sure that you only remvoe the unused old serivces > > > > but i think that would fix your issue. > > > > > > That's the thing, they don't show up in the compute service list. But > > > I also found them in the resource_providers table, only the old > > > compute nodes appear here: > > > > > > MariaDB [nova]> select name from nova_api.resource_providers; > > > +--------------------------+ > > > > name | > > > +--------------------------+ > > > > compute1.fqdn | > > > > compute2.fqdn | > > > > compute3.fqdn | > > > > compute4.fqdn | > > > +--------------------------+ > > ah in that case the compute service delete is ment to remove the RPs too > > but if the RP had stale allcoation at teh time of the delete the RP > > delete will fail > > > > what you proably need to do in this case is check if the RPs still > > have allocations and if so > > verify that the allocation are owned by vms that nolonger exist. > > if that is the case you should be able to delete teh allcaotion and > > then the RP > > if the allocations are related to active vms that are now on the > > rebuild nodes then you will have to try and > > heal the allcoations. > > > > there is a openstack client extention called osc-placement that you > > can install to help. > > we also have a heal allcoation command in nova-manage that may help > > but the next step would be to validate > > if the old RPs are still in use or not. from there you can then work > > to align novas and placment view with > > the real toplogy. > > > > that could invovle removing the old compute nodes form the > > compute_nodes table or marking them as deleted but > > both nova db and plamcent need to be kept in sysnc to correct your > > current issue. > > > > > > > > > Zitat von Sean Mooney : > > > > > > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: > > > > > Hi *, > > > > > > > > > > I have a quick question, last year we migrated our OpenStack to a > > > > > highly available environment through a reinstall of all nodes. The > > > > > migration went quite well, we're working happily in the new cloud but > > > > > the databases still contain deprecated data.
For example, the > > > > > nova-scheduler logs lines like these on a regular basis: > > > > > > > > > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 23540 WARNING > > > > > nova.scheduler.host_manager [...] No compute service record found for > > > > > host compute1 > > > > > > > > > > This is one of the old compute nodes that has been reinstalled and is > > > > > now compute01. I tried to find the right spot to delete some lines in > > > > > the DB but there are a couple of places so I wanted to check and ask > > > > > you for some insights. > > > > > > > > > > The scheduler messages seem to originate in > > > > > > > > > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py > > > > > > > > > > ---snip--- > > > > >          for cell_uuid, computes in compute_nodes.items(): > > > > >              for compute in computes: > > > > >                  service = services.get(compute.host) > > > > > > > > > >                  if not service: > > > > >                      LOG.warning( > > > > >                          "No compute service record found for host > > > > > %(host)s", > > > > >                          {'host': compute.host}) > > > > >                      continue > > > > > ---snip--- > > > > > > > > > > So I figured it could be this table in the nova DB: > > > > > > > > > > ---snip--- > > > > > MariaDB [nova]> select host,deleted from compute_nodes; > > > > > +-----------+---------+ > > > > > > host | deleted | > > > > > +-----------+---------+ > > > > > > compute01 | 0 | > > > > > > compute02 | 0 | > > > > > > compute03 | 0 | > > > > > > compute04 | 0 | > > > > > > compute05 | 0 | > > > > > > compute1 | 0 | > > > > > > compute2 | 0 | > > > > > > compute3 | 0 | > > > > > > compute4 | 0 | > > > > > +-----------+---------+ > > > > > ---snip--- > > > > > > > > > > What would be the best approach here to clean up a little? I believe > > > > > it would be safe to simply purge those lines containing the old > > > > > compute node, but there might be a smoother way. Or maybe there are > > > > > more places to purge old data from? > > > > so the step you porably missed was deleting the old compute > > > service records > > > > > > > > so you need to do > > > > openstack compute service list to get teh compute service ids > > > > then do > > > > openstack compute service delete ... > > > > > > > > you need to make sure that you only remvoe the unused old serivces > > > > but i think that would fix your issue. > > > > > > > > > > > > > > I'd appreciate any ideas. > > > > > > > > > > Regards, > > > > > Eugen > > > > > > > > > > > > > > > > > > > > > > From eblock at nde.ag Tue Mar 9 14:47:06 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 09 Mar 2021 14:47:06 +0000 Subject: Cleanup database(s) In-Reply-To: References: <20210308131856.Horde.XigfNVMfv7c7MzxEDHJFni3@webmail.nde.ag> <20210308141836.Horde.YXXZOozmIL_MhY4-f9iAbbU@webmail.nde.ag> <20210309092054.Horde.DdCSVbCMqVUJJVkuZ0mBqOK@webmail.nde.ag> Message-ID: <20210309144706.Horde.06Gm_1Yqz34Y3EPFacqnoVG@webmail.nde.ag> Hi, > i think the problem is you reinstalled the cloud with exisitng > instances and change the hostnames of the > compute nodes which is not a supported operations.(specifically > changing the hostname of a computenode with vms is not supported) > so in doing so that would cause all the compute service to be > recreated for the new compute nodes and create new RPs in placment. 
> the existing instnace however would still have there allocation on > the old RPs and the old hostnames woudl be set in the instnace.host > can you confirm that? this environment grew from being just an experiment to our production cloud, so there might be a couple of unsupported things, but it still works fine, so that's something. ;-) I'll try to explain and hopefully clarify some things. We upgraded the databases on a virtual machine prior to the actual cloud upgrade. Since the most important services successfully started we went ahead and installed two control nodes with pacemaker and imported the already upgraded databases. Then we started to evacuate the compute nodes one by one and added them to the new cloud environment while the old one was still up and running. To launch existing instances in the new cloud we had to experiment a little, but from previous troubleshooting sessions we knew which tables we had to change in order to bring the instances up on the new compute nodes. Basically, we changed instances.host and instances.node to reflect one of the new compute nodes. So the answer to your question would probably be "no", the instances.host don't have the old hostnames. > can you clarify those point. e.g. were all the workload removed > before the reinstall? if not did the host name change? > that is harder probelm to fix unless you can restore the old host > name but i suspect you likely have booted new vms if this even has > been runing > for a year. I understand, it seems as I'll have to go through the resource allocations one by one and update them in order to be able to remove the old RPs. One final question though, is there anything risky about updating the allocations to match the actual RP? I tested that for an uncritical instance, shut it down and booted it again, all without an issue, it seems. If I do that for the rest, ist there anything I should be aware of? From what I saw so far all new instances are allocated properly, so the placement itself seems to be working well, right? Thanks! Eugen Zitat von Sean Mooney : > On Tue, 2021-03-09 at 09:20 +0000, Eugen Block wrote: >> Hi again, >> >> I just wanted to get some clarification on how to proceed. >> >> > what you proably need to do in this case is check if the RPs still >> > have allocations and if so >> > verify that the allocation are owned by vms that nolonger exist. >> > if that is the case you should be able to delete teh allcaotion and >> > then the RP >> > if the allocations are related to active vms that are now on the >> > rebuild nodes then you will have to try and >> > heal the allcoations. >> >> I checked all allocations for the old compute nodes, those are all >> existing VMs. So simply deleting the allocations won't do any good, I >> guess. From [1] I understand that I should overwrite all allocations >> (we're on Train so there's no "unset" available yet) for those VMs to >> point to the new compute nodes (resource_providers). After that I >> should delete the resource providers, correct? >> I ran "heal_allocations" for one uncritical instance, but it didn't >> have any visible effect, the allocations still show one of the old >> compute nodes. >> What I haven't tried yet is to delete allocations for an instance and >> then try to heal it as the docs also mention. >> >> Do I understand that correctly or am I still missing something? 
> > i think the problem is you reinstalled the cloud with exisitng > instances and change the hostnames of the > compute nodes which is not a supported operations.(specifically > changing the hostname of a computenode with vms is not supported) > so in doing so that would cause all the compute service to be > recreated for the new compute nodes and create new RPs in placment. > the existing instnace however would still have there allocation on > the old RPs and the old hostnames woudl be set in the instnace.host > can you confirm that? > > in this case you dont actully have orphaned allocation exactly you > have allcoation against the incorrect RP but if the instnace.host > does not > match the hypervisor hostname that its on then heal allocations will > not be able to fix that. > > just looking at your orginal message you said "last year we migrated > our OpenStack to a highly available environment through a reinstall > of all nodes" > > i had assumed you have no instnace form the orignial enviornment > with the old names if you had exising instnaces with the old name > then you would > have had to ensure the host names did not change to do that > correctly without breaking the resouce tracking in nova. > > can you clarify those point. e.g. were all the workload removed > before the reinstall? if not did the host name change? > that is harder probelm to fix unless you can restore the old host > name but i suspect you likely have booted new vms if this even has > been runing > for a year. >> >> Regards, >> Eugen >> >> >> [1] >> https://docs.openstack.org/nova/latest/admin/troubleshooting/orphaned-allocations.html >> >> Zitat von Sean Mooney : >> >> > On Mon, 2021-03-08 at 14:18 +0000, Eugen Block wrote: >> > > Thank you, Sean. >> > > >> > > > so you need to do >> > > > openstack compute service list to get teh compute service ids >> > > > then do >> > > > openstack compute service delete ... >> > > > >> > > > you need to make sure that you only remvoe the unused old serivces >> > > > but i think that would fix your issue. >> > > >> > > That's the thing, they don't show up in the compute service list. But >> > > I also found them in the resource_providers table, only the old >> > > compute nodes appear here: >> > > >> > > MariaDB [nova]> select name from nova_api.resource_providers; >> > > +--------------------------+ >> > > > name | >> > > +--------------------------+ >> > > > compute1.fqdn | >> > > > compute2.fqdn | >> > > > compute3.fqdn | >> > > > compute4.fqdn | >> > > +--------------------------+ >> > ah in that case the compute service delete is ment to remove the RPs too >> > but if the RP had stale allcoation at teh time of the delete the RP >> > delete will fail >> > >> > what you proably need to do in this case is check if the RPs still >> > have allocations and if so >> > verify that the allocation are owned by vms that nolonger exist. >> > if that is the case you should be able to delete teh allcaotion and >> > then the RP >> > if the allocations are related to active vms that are now on the >> > rebuild nodes then you will have to try and >> > heal the allcoations. >> > >> > there is a openstack client extention called osc-placement that you >> > can install to help. >> > we also have a heal allcoation command in nova-manage that may help >> > but the next step would be to validate >> > if the old RPs are still in use or not. from there you can then work >> > to align novas and placment view with >> > the real toplogy. 
>> > >> > that could invovle removing the old compute nodes form the >> > compute_nodes table or marking them as deleted but >> > both nova db and plamcent need to be kept in sysnc to correct your >> > current issue. >> > >> > > >> > > >> > > Zitat von Sean Mooney : >> > > >> > > > On Mon, 2021-03-08 at 13:18 +0000, Eugen Block wrote: >> > > > > Hi *, >> > > > > >> > > > > I have a quick question, last year we migrated our OpenStack to a >> > > > > highly available environment through a reinstall of all nodes. The >> > > > > migration went quite well, we're working happily in the new >> cloud but >> > > > > the databases still contain deprecated data. For example, the >> > > > > nova-scheduler logs lines like these on a regular basis: >> > > > > >> > > > > /var/log/nova/nova-scheduler.log:2021-02-19 12:02:46.439 >> 23540 WARNING >> > > > > nova.scheduler.host_manager [...] No compute service record >> found for >> > > > > host compute1 >> > > > > >> > > > > This is one of the old compute nodes that has been >> reinstalled and is >> > > > > now compute01. I tried to find the right spot to delete >> some lines in >> > > > > the DB but there are a couple of places so I wanted to check and ask >> > > > > you for some insights. >> > > > > >> > > > > The scheduler messages seem to originate in >> > > > > >> > > > > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py >> > > > > >> > > > > ---snip--- >> > > > >          for cell_uuid, computes in compute_nodes.items(): >> > > > >              for compute in computes: >> > > > >                  service = services.get(compute.host) >> > > > > >> > > > >                  if not service: >> > > > >                      LOG.warning( >> > > > >                          "No compute service record found for host >> > > > > %(host)s", >> > > > >                          {'host': compute.host}) >> > > > >                      continue >> > > > > ---snip--- >> > > > > >> > > > > So I figured it could be this table in the nova DB: >> > > > > >> > > > > ---snip--- >> > > > > MariaDB [nova]> select host,deleted from compute_nodes; >> > > > > +-----------+---------+ >> > > > > > host | deleted | >> > > > > +-----------+---------+ >> > > > > > compute01 | 0 | >> > > > > > compute02 | 0 | >> > > > > > compute03 | 0 | >> > > > > > compute04 | 0 | >> > > > > > compute05 | 0 | >> > > > > > compute1 | 0 | >> > > > > > compute2 | 0 | >> > > > > > compute3 | 0 | >> > > > > > compute4 | 0 | >> > > > > +-----------+---------+ >> > > > > ---snip--- >> > > > > >> > > > > What would be the best approach here to clean up a little? I believe >> > > > > it would be safe to simply purge those lines containing the old >> > > > > compute node, but there might be a smoother way. Or maybe there are >> > > > > more places to purge old data from? >> > > > so the step you porably missed was deleting the old compute >> > > service records >> > > > >> > > > so you need to do >> > > > openstack compute service list to get teh compute service ids >> > > > then do >> > > > openstack compute service delete ... >> > > > >> > > > you need to make sure that you only remvoe the unused old serivces >> > > > but i think that would fix your issue. >> > > > >> > > > > >> > > > > I'd appreciate any ideas. 
>> > > > > >> > > > > Regards, >> > > > > Eugen >> > > > > >> > > > > >> > > >> > > >> > > >> >> >> From gchamoul at redhat.com Tue Mar 9 14:53:22 2021 From: gchamoul at redhat.com (=?utf-8?B?R2HDq2w=?= Chamoulaud) Date: Tue, 9 Mar 2021 15:53:22 +0100 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core Message-ID: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Hi TripleO Devs, David is already a key member of our team since a long time now, he provided all the needed ansible roles for the Validation Framework into tripleo-ansible-operator. He continuously provides excellent code reviews and he is a source of great ideas for the future of the Validation Framework. That's why we would highly benefit from his addition to the core reviewer team. Assuming that there are no objections, we will add David to the core team next week. Thanks, David, for your excellent work! -- Gaël Chamoulaud - (He/Him/His) .::. Red Hat .::. OpenStack .::. .::. DFG:DF Squad:VF .::. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jbuchta at redhat.com Tue Mar 9 14:56:51 2021 From: jbuchta at redhat.com (Jan Buchta) Date: Tue, 9 Mar 2021 15:56:51 +0100 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Message-ID: Not that it would technically matter, but +1. Jan Buchta Manager, Openstack Engineering Red Hat Czech, s.r.o. Purkyňova 3080/97b 61200 Brno, Czech Republic jbuchta at redhat.com T: +420-532294903 M: +420-603949107 IM: jbuchta On Tue, Mar 9, 2021 at 3:53 PM Gaël Chamoulaud wrote: > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews > and he > is a source of great ideas for the future of the Validation Framework. > That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team > next > week. > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Tue Mar 9 15:03:00 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 9 Mar 2021 20:33:00 +0530 Subject: [OSSN-0088] Some of the Glance metadef APIs likely to leak resources Message-ID: Some of the Glance metadef APIs likely to leak resources -------------------------------------------------------- ### Summary ### Metadef APIs are vulnerable and potentially leaking information to unauthorized users and also there is currently no limit on creation of metadef namespaces, objects, properties, resources and tags. This can be abused by malicious users to fill the Glance database resulting in a Denial of Service (DoS) condition. ### Affected Services / Software ### Glance ### Discussion ### There is no restriction on creation of metadef namespaces, objects, properties, resources and tags as well as it could also leak the information to unauthorized users or to the users outside of the project. 
By taking advantage of this lack of restrictions around the metadef APIs, a single user could fill the Glance database by creating unlimited resources, resulting in a Denial of Service (DoS) style attack.

Glance does allow the metadef APIs to be controlled by policy. However, the default policy setting for the metadef APIs allows all users to create or read metadef information. Because metadef resources are not properly isolated to their owner, any use of them with potentially sensitive names (such as internal infrastructure details, customer names, etc.) could unintentionally expose that information to a malicious user.

### Recommended Actions ###

Since these fundamental issues have been present since the API was introduced, the Glance project recommends that operators disable all metadef APIs by default in their deployments.

Here is an example of disabling the metadef APIs for current stable OpenStack releases, either in policy.json or policy.yaml:

---- begin example policy.json/policy.yaml snippet ----
"metadef_default": "!",
"get_metadef_namespace": "rule:metadef_default",
"get_metadef_namespaces": "rule:metadef_default",
"modify_metadef_namespace": "rule:metadef_default",
"add_metadef_namespace": "rule:metadef_default",
"get_metadef_object": "rule:metadef_default",
"get_metadef_objects": "rule:metadef_default",
"modify_metadef_object": "rule:metadef_default",
"add_metadef_object": "rule:metadef_default",
"list_metadef_resource_types": "rule:metadef_default",
"get_metadef_resource_type": "rule:metadef_default",
"add_metadef_resource_type_association": "rule:metadef_default",
"get_metadef_property": "rule:metadef_default",
"get_metadef_properties": "rule:metadef_default",
"modify_metadef_property": "rule:metadef_default",
"add_metadef_property": "rule:metadef_default",
"get_metadef_tag": "rule:metadef_default",
"get_metadef_tags": "rule:metadef_default",
"modify_metadef_tag": "rule:metadef_default",
"add_metadef_tag": "rule:metadef_default",
"add_metadef_tags": "rule:metadef_default"
---- end example policy.json/policy.yaml snippet ----

To re-enable the metadef APIs for admin users only, operators can make the following change in their policy.json or policy.yaml (assuming all metadef policies are configured to use rule:metadef_default as shown in the example above):

---- begin example policy.json/policy.yaml snippet ----
"metadef_default": "rule:admin",
---- end example policy.json/policy.yaml snippet ----

Operators with users that depend on the metadef APIs may choose to leave these accessible to all users. In that case, educating users about the potential for information leakage through resource names is advisable so that vulnerable practices can be altered as mitigation.
To re-enable the metadef APIs for all users, operators can make the following change in their policy.json or policy.yaml (again assuming all metadef policies are configured to use rule:metadef_default as shown in the example above):

---- begin example policy.json/policy.yaml snippet ----
"metadef_default": "",
---- end example policy.json/policy.yaml snippet ----

### Contacts / References ###

Author: Abhishek Kekane, Red Hat
Author: Lance Bragstad, Red Hat
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0088
Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1545702
Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1916926
Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1916922
Mailing List : [Security] openstack-security at lists.openstack.org
OpenStack Security Project : https://launchpad.net/~openstack-ossg

Thanks & Best Regards,

Abhishek Kekane
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cjeanner at redhat.com  Tue Mar 9 15:10:46 2021
From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=)
Date: Tue, 9 Mar 2021 16:10:46 +0100
Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core
In-Reply-To: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac>
References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac>
Message-ID: <7cbffe4c-17e1-37c2-c099-a0622ed218f5@redhat.com>

Of course +1! Even +42.

On 3/9/21 3:53 PM, Gaël Chamoulaud wrote: > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews and he > is a source of great ideas for the future of the Validation Framework. That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team next > week. > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. >

-- 
Cédric Jeanneret (He/Him/His)
Sr. Software Engineer - OpenStack Platform
Deployment Framework TC
Red Hat EMEA
https://www.redhat.com/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature
Type: application/pgp-signature
Size: 840 bytes
Desc: OpenPGP digital signature
URL: 

From marios at redhat.com  Tue Mar 9 15:46:46 2021
From: marios at redhat.com (Marios Andreou)
Date: Tue, 9 Mar 2021 17:46:46 +0200
Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core
In-Reply-To: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac>
References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac>
Message-ID: 

On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud wrote: > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews > and he > is a source of great ideas for the future of the Validation Framework. > That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team > next > week. >

o/ Gael

so it is clear and fair for everyone (e.g.
I've been approached by others about candidates for tripleo-core) I'd like to be clear on your proposal here because I don't think we have a 'validation framework core' group in gerrit - do we? Is your proposal that David is added to the tripleo-core group [1] with the understanding that voting rights will be exercised only in the following repos: tripleo-validations, validations-common and validations-libs? thanks, marios [1] https://review.opendev.org/admin/groups/0319cee8020840a3016f46359b076fa6b6ea831a > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gchamoul at redhat.com Tue Mar 9 15:52:19 2021 From: gchamoul at redhat.com (=?utf-8?B?R2HDq2w=?= Chamoulaud) Date: Tue, 9 Mar 2021 16:52:19 +0100 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Message-ID: <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> On 09/Mar/2021 17:46, Marios Andreou wrote: > > > On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud wrote: > > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews > and he > is a source of great ideas for the future of the Validation Framework. > That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team > next > week. > > > o/ Gael > > so it is clear and fair for everyone (e.g. I've been approached by others about > candidates for tripleo-core) > > I'd like to be clear on your proposal here because I don't think we have a > 'validation framework core' group in gerrit - do we? > > Is your proposal that David is added to the tripleo-core group [1] with the > understanding that voting rights will be exercised only in the following repos: > tripleo-validations, validations-common and validations-libs? Yes exactly! Sorry for the confusion. > thanks, marios > > [1] https://review.opendev.org/admin/groups/ > 0319cee8020840a3016f46359b076fa6b6ea831a > > >   > > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud  -  (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::.     DFG:DF Squad:VF    .::. > -- Gaël Chamoulaud - (He/Him/His) .::. Red Hat .::. OpenStack .::. .::. DFG:DF Squad:VF .::. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Tue Mar 9 16:00:07 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 9 Mar 2021 18:00:07 +0200 Subject: [tripleo] Nominate David J. 
Peacock (dpeacock) for Validation Framework Core In-Reply-To: <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> Message-ID: On Tue, Mar 9, 2021 at 5:52 PM Gaël Chamoulaud wrote: > On 09/Mar/2021 17:46, Marios Andreou wrote: > > > > > > On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud > wrote: > > > > Hi TripleO Devs, > > > > David is already a key member of our team since a long time now, he > > provided all the needed ansible roles for the Validation Framework > into > > tripleo-ansible-operator. He continuously provides excellent code > reviews > > and he > > is a source of great ideas for the future of the Validation > Framework. > > That's > > why we would highly benefit from his addition to the core reviewer > team. > > > > Assuming that there are no objections, we will add David to the core > team > > next > > week. > > > > > > o/ Gael > > > > so it is clear and fair for everyone (e.g. I've been approached by > others about > > candidates for tripleo-core) > > > > I'd like to be clear on your proposal here because I don't think we have > a > > 'validation framework core' group in gerrit - do we? > > > > Is your proposal that David is added to the tripleo-core group [1] with > the > > understanding that voting rights will be exercised only in the following > repos: > > tripleo-validations, validations-common and validations-libs? > > Yes exactly! Sorry for the confusion. > ACK no problem ;) As I said we need to be transparent and fair towards everyone. +1 from me to your proposal. Being obligated to do so at PTL ;) I did a quick review of activities. I can see that David has been particularly active in Wallaby [1] but has made tripleo contributions going back to 2017 [2] - I cannot see some reason to object to the proposal! regards, marios [1] https://www.stackalytics.io/?module=tripleo-group&project_type=openstack&user_id=davidjpeacock&metric=marks&release=wallaby [2] https://review.opendev.org/q/owner:davidjpeacock > > > thanks, marios > > > > [1] https://review.opendev.org/admin/groups/ > > 0319cee8020840a3016f46359b076fa6b6ea831a > > > > > > > > > > > > Thanks, David, for your excellent work! > > > > -- > > Gaël Chamoulaud - (He/Him/His) > > .::. Red Hat .::. OpenStack .::. > > .::. DFG:DF Squad:VF .::. > > > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue Mar 9 16:10:01 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 9 Mar 2021 13:10:01 -0300 Subject: [election][cloudkitty] PTL candidacy for Xena Message-ID: Hello guys, I would like to self-nominate for the role of PTL of CloudKitty for the Xena release cycle. I am the current PTL for the Wallaby cycle and I am willing to continue in this role. I will keep working with existing contributors to maintain CloudKitty for their use cases and encourage the addition of new functionalities. Thank you for your support, -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Mar 9 16:35:07 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 9 Mar 2021 18:35:07 +0200 Subject: [TripleO] Xena PTG - registration and reserved time slots Message-ID: Hello TripleO gentle early reminder that PTG is coming soon-ish... 
12 April is not *that* far away ;) As requested by [1] I just booked some slots for us at https://ethercalc.net/oz7q0gds9zfi . I selected the same times we used for the Wallaby PTG - 1300-1700 UTC for Monday/Tuesday/Wednesday/Thursday. Obviously we are a massively distributed team so there is no good time that will suit everyone. However I think this serves well enough for the majority of our contributors? We may not use Thursday but it was pretty close last time with respect to the number of sessions requested so I booked it just in case. Please speak up if you disagree with the times and days I have booked. Finally *** REMEMBER TO REGISTER *** for the PTG at https://april2021-ptg.eventbrite.com regards, marios [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020915.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Tue Mar 9 16:49:11 2021 From: mthode at mthode.org (Matthew Thode) Date: Tue, 9 Mar 2021 10:49:11 -0600 Subject: [election][requirements] PTL candidacy for Xena Message-ID: <20210309164911.7ygs6t6uliweessf@mthode.org> Hi all, I would like to self-nominate for the role of PTL of Requirements for the Xena release cycle. I am the current PTL for the Wallaby cycle and I am willing to continue in this role. The following will be my goals for the cycle, in order of importance: 1. The primary goal is to keep a tight rein on global-requirements and upper-constraints updates. (Keep things working well) 2. Un-cap requirements where possible (stuff like prettytable). 3. Fix some automation of generate contstraints (pkg-resources being added and setuptools being dropped) 4. Audit global-requirements and upper-constraints for redundancies. One of the rules we have for new entrants to global-requirements and/or upper-constraints is that they be non-redundant. Keeping that rule in mind, audit the list of requirements for possible redundancies and if possible, reduce the number of requirements we manage. I look forward to continue working with you in this cycle, as your PTL or not. Thank you for your support, -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Tue Mar 9 16:54:07 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 9 Mar 2021 18:54:07 +0200 Subject: [TripleO] Xena PTG - registration and reserved time slots In-Reply-To: References: Message-ID: On Tue, Mar 9, 2021 at 6:35 PM Marios Andreou wrote: > Hello TripleO > > gentle early reminder that PTG is coming soon-ish... 12 April is not > *that* far away ;) > *19* April even ;) thanks > > As requested by [1] I just booked some slots for us at > https://ethercalc.net/oz7q0gds9zfi . > > I selected the same times we used for the Wallaby PTG - 1300-1700 UTC for > Monday/Tuesday/Wednesday/Thursday. Obviously we are a massively distributed > team so there is no good time that will suit everyone. However I think this > serves well enough for the majority of our contributors? We may not use > Thursday but it was pretty close last time with respect to the number of > sessions requested so I booked it just in case. > > Please speak up if you disagree with the times and days I have booked. 
> > Finally *** REMEMBER TO REGISTER *** for the PTG at > https://april2021-ptg.eventbrite.com > > regards, marios > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020915.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Mar 9 17:19:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Mar 2021 17:19:17 +0000 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> Message-ID: <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: > We got two similar release processing failures: [...] > Since this "host key verification failed" error hit tripleo-ipsec > three times in a row, I suspect we have something stuck (or we > always hit the same cache/worker). The errors happened when run on nodes in different providers. Looking at the build logs, I also notice that they fail for the stable/rocky branch but succeeded for other branches of the same repository. They're specifically erroring when trying to reach the Gerrit server over SSH, the connection details for which are encoded on the .gitreview file in each branch. This leads me to wonder whether there's something about the fact that the stable/rocky branch of tripleo-ipsec is still using the old review.openstack.org hostname to push, and maybe we're not pre-seeding an appropriate hostkey entry for that in the known_hosts file? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From whayutin at redhat.com Tue Mar 9 17:28:09 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 9 Mar 2021 10:28:09 -0700 Subject: [tripleo][ci] stein and quay.io Message-ID: Greetings, FYI... At this time the accounts used to push containers in quay.io has been locked out. We are working with quay support to resolve the issue. It's not 100% clear if there is an issue w/ our account or more generally w/ quay [2] At this time we know that the stable/stein branch in tripleo is specifically configured to pull containers from quay and has an open issue [1]. Merging patches on stable/stein is likely blocked unless we move all those jobs to non-voting at this time. Is there any interest in moving the jobs to non-voting? Please respond if this issue is blocking you. Thanks all! [1] https://bugs.launchpad.net/tripleo/+bug/1915921 [2] https://status.quay.io/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Tue Mar 9 17:28:56 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 9 Mar 2021 18:28:56 +0100 Subject: [election][qa] PTL candidacy for Xena Message-ID: Hi everyone, I would like to nominate myself for the PTL role of the QA in the Xena cycle. This would be my first PTL service, however, I have been contributing to Tempest for several years and have been a core contributor for over a year now. I'm also a core in refstack projects (python-tempestconf, refstack and refstack-client). A few things I would like to focus on in the Xena cycle: * Get Tempest scenario manager effort [2] finished - we started actively working on this ~2 cycles ago and we have made significant progress. 
After it's done we will declare the interface stable which will help to decrease the code duplicity among tempest plugins which is directly related to the maintenance difficulty. * Keep decreasing the bug counts * Decrease the number of open patches. Although many projects suffer from not enough personal resources, we can't let the contributors go out of their steam by making their patches to wait in the queue for too long. * Complete priority items from the previous cycles [1] https://www.stackalytics.com/?user_id=mkopec [2] https://etherpad.opendev.org/p/tempest-scenario-manager Thanks for your consideration! -- Martin Kopec irc: kopecmartin -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Tue Mar 9 18:34:26 2021 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 9 Mar 2021 19:34:26 +0100 Subject: [election][magnum] Xena PTL candidacy Message-ID: Hello OpenStack community, I would like to continue serving as Magnum PTL. In wallaby we mostly maintained and updated our templates for kubernetes and related addons along with the CI maintenance. In this release, our focus continues to be making the cloud operators easier with: * control-plane on operators tenant * performance improvements for API calls * cluster node replacement * finish transition to helm3 for addons See you in gerrit, Spyros Trigazis -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandy6768 at gmail.com Tue Mar 9 07:19:20 2021 From: sandy6768 at gmail.com (sandeep singh parihar) Date: Tue, 9 Mar 2021 12:49:20 +0530 Subject: openstack:How to create "admin read only " role in openstack Message-ID: Hi, can someone please help to "How to create admin read only role " in openstack? Thanks sandeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From tcr1br24 at gmail.com Tue Mar 9 14:37:00 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Tue, 9 Mar 2021 22:37:00 +0800 Subject: [ISSUE]Openstack create VNF instance ERROR Message-ID: Dear Sir, This our testbed: Openstack (stein) + Opendaylight (neon) + ovs (v2.11.1) We have trouble when creating vnf on compute node using OpenStack CLI: #openstack vnf create vnfd1 vnf1 Here are the log message and ovs information. COMPUTE NODE1: nova-compute.log [image: error1.png] [image: list1.png] [image: vsctl1.png] COMPUTE NODE 2: nova-compute.log [image: error2.png] [image: list2.png] [image: vsctl2.png] We don't use DPDK on our Openvswitch. Can anyone give us some advice on this issue? Thanks for helping us. [image: Mailtrack] Sender notified by Mailtrack 03/09/21, 10:20:09 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error1.png Type: image/png Size: 279362 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list1.png Type: image/png Size: 51268 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl1.png Type: image/png Size: 44054 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error2.png Type: image/png Size: 260121 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: list2.png Type: image/png Size: 52846 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl2.png Type: image/png Size: 35301 bytes Desc: not available URL: From lyarwood at redhat.com Tue Mar 9 22:08:18 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 9 Mar 2021 22:08:18 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI Message-ID: Hello all, I reported the following bug last week but I've yet to get any real feedback after asking a few times in irc. Running parallel iSCSI/LVM c-vol backends is causing random failures in CI https://bugs.launchpad.net/cinder/+bug/1917750 AFAICT tgtadm is causing this behaviour. As I've stated in the bug with Fedora 32 and lioadm I don't see the WWN conflict between the two backends. Does anyone know if using lioadm is an option on Focal? Thanks in advance, Lee From fungi at yuggoth.org Tue Mar 9 22:18:12 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Mar 2021 22:18:12 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: References: Message-ID: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > I reported the following bug last week but I've yet to get any real > feedback after asking a few times in irc. > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > https://bugs.launchpad.net/cinder/+bug/1917750 > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > with Fedora 32 and lioadm I don't see the WWN conflict between the two > backends. Does anyone know if using lioadm is an option on Focal? https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html seems to indicate that you just need to set it in configuration. The package that document mentions looks like a distro package recommendation (does not exist under that name on PyPI) and the equivalent lib on PyPI is included in cinder's requirements.txt file, but I don't see either mentioned in the devstack source tree so maybe that needs to be installed for DevStack-based jobs to take advantage of it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Tue Mar 9 22:19:48 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Mar 2021 22:19:48 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> References: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> Message-ID: <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > I reported the following bug last week but I've yet to get any real > > feedback after asking a few times in irc. > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > with Fedora 32 and lioadm I don't see the WWN conflict between the two > > backends. Does anyone know if using lioadm is an option on Focal? 
> > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html > seems to indicate that you just need to set it in configuration. The > package that document mentions looks like a distro package > recommendation (does not exist under that name on PyPI) and the > equivalent lib on PyPI is included in cinder's requirements.txt > file, but I don't see either mentioned in the devstack source tree > so maybe that needs to be installed for DevStack-based jobs to take > advantage of it. Oh, and this would be the package to add from Ubuntu Focal: https://packages.ubuntu.com/focal/python3-rtslib-fb -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Tue Mar 9 22:21:33 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Mar 2021 22:21:33 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> References: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> Message-ID: <20210309222133.ux5gtobgujtep3so@yuggoth.org> On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > I reported the following bug last week but I've yet to get any real > > > feedback after asking a few times in irc. > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > with Fedora 32 and lioadm I don't see the WWN conflict between the two > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html > > seems to indicate that you just need to set it in configuration. The > > package that document mentions looks like a distro package > > recommendation (does not exist under that name on PyPI) and the > > equivalent lib on PyPI is included in cinder's requirements.txt > > file, but I don't see either mentioned in the devstack source tree > > so maybe that needs to be installed for DevStack-based jobs to take > > advantage of it. > > Oh, and this would be the package to add from Ubuntu Focal: > > https://packages.ubuntu.com/focal/python3-rtslib-fb Nevermind, DevStack installs the projects with pip, so the one in cinder's requirements.txt should already be present. In that case, yeah, just set it in the config? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zigo at debian.org Tue Mar 9 22:26:15 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 9 Mar 2021 23:26:15 +0100 Subject: [election][requirements] PTL candidacy for Xena In-Reply-To: <20210309164911.7ygs6t6uliweessf@mthode.org> References: <20210309164911.7ygs6t6uliweessf@mthode.org> Message-ID: <5851aff9-56ac-f5e3-34fe-e8a1e1782380@debian.org> Matthew, My reply is not at all a comment on your candidacy, just a few remarks on the topics you've raised (and on which I agree with you). On 3/9/21 5:49 PM, Matthew Thode wrote: > 2. Un-cap requirements where possible (stuff like prettytable). 
Yeah, it's kind of amazing we're relying all our command line outputs on a release of prettytable that is from 2013! Lucky, nobody in Debian complained that it hasn't been updated... > 4. Audit global-requirements and upper-constraints for redundancies. > One of the rules we have for new entrants to global-requirements > and/or upper-constraints is that they be non-redundant. Keeping that > rule in mind, audit the list of requirements for possible redundancies > and if possible, reduce the number of requirements we manage. We're currently using: - anyjson - ujson - jsonpatch - jsonschema - jsonpath - jsonpath-rw - jsonpath-rw-ext - simplejson - jsonpointer I'm not sure what they all do, but it does feel like there's room for improvement... :) Cheers, Thomas Goirand (zigo) From lyarwood at redhat.com Tue Mar 9 22:40:08 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 9 Mar 2021 22:40:08 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <20210309222133.ux5gtobgujtep3so@yuggoth.org> References: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> <20210309222133.ux5gtobgujtep3so@yuggoth.org> Message-ID: On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > I reported the following bug last week but I've yet to get any real > > > > feedback after asking a few times in irc. > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > > with Fedora 32 and lioadm I don't see the WWN conflict between the two > > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html > > > seems to indicate that you just need to set it in configuration. The > > > package that document mentions looks like a distro package > > > recommendation (does not exist under that name on PyPI) and the > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > file, but I don't see either mentioned in the devstack source tree > > > so maybe that needs to be installed for DevStack-based jobs to take > > > advantage of it. > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > Nevermind, DevStack installs the projects with pip, so the one in > cinder's requirements.txt should already be present. In that case, > yeah, just set it in the config? Yes correct, my question was more to see if there were any known issues with using lioadm on Focal. 
Anyway I've pushed the following WIP change for devstack to switch over to lioadm when using Focal: WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu https://review.opendev.org/c/openstack/devstack/+/779624 From cboylan at sapwetik.org Tue Mar 9 22:48:50 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 09 Mar 2021 14:48:50 -0800 Subject: =?UTF-8?Q?Re:_[cinder][nova]_Running_parallel_iSCSI/LVM_c-vol_backends_i?= =?UTF-8?Q?s_causing_random_failures_in_CI?= In-Reply-To: References: <20210309221812.cm6zhtvs5kmzdhi2@yuggoth.org> <20210309221948.rsec2ouh4yxh7n3e@yuggoth.org> <20210309222133.ux5gtobgujtep3so@yuggoth.org> Message-ID: On Tue, Mar 9, 2021, at 2:40 PM, Lee Yarwood wrote: > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > I reported the following bug last week but I've yet to get any real > > > > > feedback after asking a few times in irc. > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between the two > > > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html > > > > seems to indicate that you just need to set it in configuration. The > > > > package that document mentions looks like a distro package > > > > recommendation (does not exist under that name on PyPI) and the > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > file, but I don't see either mentioned in the devstack source tree > > > > so maybe that needs to be installed for DevStack-based jobs to take > > > > advantage of it. > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > Nevermind, DevStack installs the projects with pip, so the one in > > cinder's requirements.txt should already be present. In that case, > > yeah, just set it in the config? > > Yes correct, my question was more to see if there were any known > issues with using lioadm on Focal. Anyway I've pushed the following > WIP change for devstack to switch over to lioadm when using Focal: > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > https://review.opendev.org/c/openstack/devstack/+/779624 https://blog.e0ne.info/post/using-openstack-cinder-with-lio-target/ says the major issue with switching in cinder has been figuring out upgrade testing of the change. I don't know what that entails or why it might be a problem though. 
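For anyone who wants to try the switch locally, a rough sketch of what it looks like (assuming the LIO userspace pieces such as targetcli/rtslib-fb are present on the host; the backend section name below is the devstack default and may differ in other deployments):

  # devstack local.conf
  [[local|localrc]]
  CINDER_ISCSI_HELPER=lioadm

  # or directly in cinder.conf for an existing LVM backend
  [lvmdriver-1]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  target_helper = lioadm

Presumably the upgrade concern is that volumes already exported through tgt are not managed by LIO after such a change, so a running deployment cannot simply flip the option without handling the existing exports.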
From ltoscano at redhat.com Tue Mar 9 22:50:38 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 09 Mar 2021 23:50:38 +0100 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: References: <20210309222133.ux5gtobgujtep3so@yuggoth.org> Message-ID: <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> On Tuesday, 9 March 2021 23:40:08 CET Lee Yarwood wrote: > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > I reported the following bug last week but I've yet to get any real > > > > > feedback after asking a few times in irc. > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures > > > > > in CI > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between the > > > > > two > > > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-> > > > support.html seems to indicate that you just need to set it in > > > > configuration. The package that document mentions looks like a distro > > > > package > > > > recommendation (does not exist under that name on PyPI) and the > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > file, but I don't see either mentioned in the devstack source tree > > > > so maybe that needs to be installed for DevStack-based jobs to take > > > > advantage of it. > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > Nevermind, DevStack installs the projects with pip, so the one in > > cinder's requirements.txt should already be present. In that case, > > yeah, just set it in the config? > > Yes correct, my question was more to see if there were any known > issues with using lioadm on Focal. Anyway I've pushed the following > WIP change for devstack to switch over to lioadm when using Focal: > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > https://review.opendev.org/c/openstack/devstack/+/779624 For the record, we use the default (tgt) on the main cinder gates, as well as a lioadm job (defined in cinder-tempest-plugin). If I read the code correctly, that change would break tgt for everyone on ubuntu. Please raise this in the cinder meeting (Wednesday). 
-- Luigi From smooney at redhat.com Tue Mar 9 23:00:29 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 09 Mar 2021 23:00:29 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> References: <20210309222133.ux5gtobgujtep3so@yuggoth.org> <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> Message-ID: On Tue, 2021-03-09 at 23:50 +0100, Luigi Toscano wrote: > On Tuesday, 9 March 2021 23:40:08 CET Lee Yarwood wrote: > > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > > I reported the following bug last week but I've yet to get any real > > > > > > feedback after asking a few times in irc. > > > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures > > > > > > in CI > > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between the > > > > > > two > > > > > > backends. Does anyone know if using lioadm is an option on Focal? > > > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-iscsi-support.html > > > > > seems to indicate that you just need to set it in > > > > > configuration. The package that document mentions looks like a distro > > > > > package > > > > > recommendation (does not exist under that name on PyPI) and the > > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > > file, but I don't see either mentioned in the devstack source tree > > > > > so maybe that needs to be installed for DevStack-based jobs to take > > > > > advantage of it. > > > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > > > Nevermind, DevStack installs the projects with pip, so the one in > > > cinder's requirements.txt should already be present. In that case, > > > yeah, just set it in the config? > > > > Yes correct, my question was more to see if there were any known > > issues with using lioadm on Focal. Anyway I've pushed the following > > WIP change for devstack to switch over to lioadm when using Focal: > > > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > > https://review.opendev.org/c/openstack/devstack/+/779624 > > For the record, we use the default (tgt) on the main cinder gates, as well as > a lioadm job (defined in cinder-tempest-plugin). If I read the code correctly, > that change would break tgt for everyone on ubuntu. > > Please raise this in the cinder meeting (Wednesday). i don't think we can wait that long; from talking to lee earlier today, i'm pretty sure this is causing this error: http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22Unable%20to%20detach%20the%20device%20from%20the%20live%20config%5C%22%20AND%20loglevel:%20ERROR i guess it's almost wednesday already, but it's starting to cause issues in multiple project gates right before code freeze, so we need to address this before we end up rechecking things over and over on many patches.
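If the Focal default does flip to lioadm, jobs that specifically want to keep exercising the tgt target would have to opt back in explicitly rather than rely on the default. A rough sketch of what that could look like in a Zuul job definition, assuming the usual devstack_localrc passthrough that devstack-based jobs use (the job name below is purely illustrative, not the actual cinder-tempest-plugin change):

- job:
    name: cinder-tempest-plugin-lvm-tgt
    parent: devstack-tempest
    description: LVM/iSCSI coverage pinned to the tgtadm helper.
    vars:
      devstack_localrc:
        # Keep this job on tgt even if the distro default moves to lioadm.
        CINDER_ISCSI_HELPER: tgtadm

Pinning the helper per job would keep tgt coverage alive while letting the default move for everyone else.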
> From ltoscano at redhat.com Tue Mar 9 23:07:52 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 10 Mar 2021 00:07:52 +0100 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: References: <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> Message-ID: <7488096.MhkbZ0Pkbq@whitebase.usersys.redhat.com> On Wednesday, 10 March 2021 00:00:29 CET Sean Mooney wrote: > On Tue, 2021-03-09 at 23:50 +0100, Luigi Toscano wrote: > > On Tuesday, 9 March 2021 23:40:08 CET Lee Yarwood wrote: > > > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > > > I reported the following bug last week but I've yet to get any > > > > > > > real > > > > > > > feedback after asking a few times in irc. > > > > > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random > > > > > > > failures > > > > > > > in CI > > > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the > > > > > > > bug > > > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between > > > > > > > the > > > > > > > two > > > > > > > backends. Does anyone know if using lioadm is an option on > > > > > > > Focal? > > > > > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-is > > > > > > csi-> > > > support.html seems to indicate that you just need to > > > > > > set it in configuration. The package that document mentions looks > > > > > > like a distro package > > > > > > recommendation (does not exist under that name on PyPI) and the > > > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > > > file, but I don't see either mentioned in the devstack source tree > > > > > > so maybe that needs to be installed for DevStack-based jobs to > > > > > > take > > > > > > advantage of it. > > > > > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > > > > > Nevermind, DevStack installs the projects with pip, so the one in > > > > cinder's requirements.txt should already be present. In that case, > > > > yeah, just set it in the config? > > > > > > Yes correct, my question was more to see if there were any known > > > issues with using lioadm on Focal. Anyway I've pushed the following > > > WIP change for devstack to switch over to lioadm when using Focal: > > > > > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > > > https://review.opendev.org/c/openstack/devstack/+/779624 > > > > For the record, we use the default (tgt) on the main cinder gates, as well > > as a lioadm job (defined in cinder-tempest-plugin). If I read the code > > correctly, that change would break tgt for everyone on ubuntu. > > > > Please raise this in the cinder meeting (Wednesday). 
> > i dont think we can wait that long im pretty sure this is causing this error > form talking to lee earlier to day > http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message: > %5C%22Unable%20to%20detach%20the%20device%20from%20the%20live%20config%5C%22 > %20AND%20loglevel:%20ERROR > > i guess it almost wednesday already but its startin to casue issues in > multiple poject gates right before code freeze so we need to adress this > before we end up rechecking things over and over on many patches. But then don't error out if someone tries to use tgtadm, which is what would happen if that patch was merged (if I didn't misread the code). -- Luigi From raubvogel at gmail.com Tue Mar 9 23:19:27 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Tue, 9 Mar 2021 18:19:27 -0500 Subject: [nova?] kvm/libvirt options Message-ID: Which kvm options can I specify to an instance when I create it? I am looking at https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html and it has a handful of CPU ones (based on https://docs.openstack.org/nova/latest/configuration/config.html#libvirt), making me think the rest is some kind of openstack default which is a subset of what I can specify using, say, virsh define. From kennelson11 at gmail.com Tue Mar 9 23:52:01 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 9 Mar 2021 15:52:01 -0800 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations March 2021 End Message-ID: Hello! The PTL and TC Nomination period is now over. The official candidate lists for PTLs [0] and TC seats [1] are available on the election website. -- PTL Election Details -- There are 8 projects without candidates, so according to this resolution[2], the TC will have to decide how the following projects will proceed: Barbican, Cyborg, Keystone, Mistral, Monasca, Senlin, Zaqar, Zun -- TC Election Details -- There are 0 projects that will have elections. Now begins the campaigning period where candidates and electorate may debate their statements. Polling will start Mar 12, 2021 23:45 UTC. Thank you, -Kendall Nelson (diablo_rojo) & the Election Officials [0] https://governance.openstack.org/election/#xena-ptl-candidates [1] https://governance.openstack.org/election/#xena-tc-candidates [2] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Mar 10 00:13:44 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 9 Mar 2021 17:13:44 -0700 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Mon, Mar 8, 2021 at 12:46 AM Marios Andreou wrote: > > > On Mon, Mar 8, 2021 at 1:27 AM Wesley Hayutin wrote: > >> >> >> On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy wrote: >> >>> Hello All, >>> >>> Just a reminder that we will be starting to implement steps to migrate >>> from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. 
>>> >>> The plan is outlined in: >>> https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday >>> >>> In summary, on Sunday, we plan to: >>> - Move the master integration line for promotions to build containers >>> and images on centos-8 stream nodes >>> - Change the release files to bring down centos-8 stream repos for use >>> in test jobs (test jobs will still start on centos-8 nodes - changing this >>> nodeset will happen later) >>> - Image build and container build check jobs will be moved to >>> non-voting during this transition. >>> >> >>> We have already run all the test jobs in RDO with centos-8 stream >>> content running on centos-8 nodes to prequalify this transition. >>> >>> We will update this list with status as we go forward with next steps. >>> >>> Thanks! >>> >> >> OK... status update. >> >> Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure >> Monday wasn't a disaster upstream. TripleO master jobs have successfully >> been migrated to CentOS-8-Stream today. You should see "8-stream" now in >> /etc/yum.repos.d/tripleo-centos.* repos. >> >> > > \o/ this is fantastic! > > nice work all thanks to everyone involved for getting this done with > minimal disruption > > tripleo-ci++ > > > > > >> Your CentOS-8-Stream Master hash is: >> >> edd46672cb9b7a661ecf061942d71a72 >> >> Your master repos are: >> https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo >> >> Containers, and overcloud images should all be centos-8-stream. >> >> The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. >> >> Please open launchpad bugs w/ the "alert" tag if you are having any issues. >> >> Thanks and well done all! >> >> >> >>> >>> >>> >>> >> OK.... stable/victoria will start to migrate this evening to centos-8-stream. We are looking to promote the following [1]. Again if you hit any issues, please just file a launchpad bug w/ the "alert" tag. Thanks [1] https://trunk.rdoproject.org/api-centos8-victoria/api/civotes_agg_detail.html?ref_hash=457ea897ac3b7552b82c532adcea63f0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Mar 10 00:30:55 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 10 Mar 2021 00:30:55 +0000 Subject: [nova?] kvm/libvirt options In-Reply-To: References: Message-ID: On Tue, 2021-03-09 at 18:19 -0500, Mauricio Tavares wrote: > Which kvm options can I specify to an instance when I create it? I am > looking at https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html > and it has a handful of CPU ones (based on > https://docs.openstack.org/nova/latest/configuration/config.html#libvirt), > making me think the rest is some kind of openstack default which is a > subset of what I can specify using, say, virsh define. > you cannot set any of them directly. openstack is not a virtualisation platform, it's an infrastructure-as-a-service cloud platform. https://docs.openstack.org/nova/latest/contributor/project-scope.html nova provides an abstraction layer over multiple hypervisors, including libvirt, via flavors and flavor extra specs https://docs.openstack.org/nova/latest/user/flavors.html as an operator of openstack you are expected to know the low-level details of your cloud, but as an end user you should not be able to tell directly which hypervisor is in use.
generally you can work it out by looking at the flavor and connecting to the instances but we do not provide a way to directly maniulate the xml. > From ignaziocassano at gmail.com Wed Mar 10 06:57:21 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 10 Mar 2021 07:57:21 +0100 Subject: [stein][neutron] gratuitous arp In-Reply-To: References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> Message-ID: Hello All, please, are there news about bug 1815989 ? On stein I modified code as suggested in the patches. I am worried when I will upgrade to train: wil this bug persist ? On which openstack version this bug is resolved ? Ignazio Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello, I tried to update to last stein packages on yum and seems this bug > still exists. > Before the yum update I patched some files as suggested and and ping to vm > worked fine. > After yum update the issue returns. > Please, let me know If I must patch files by hand or some new parameters > in configuration can solve and/or the issue is solved in newer openstack > versions. > Thanks > Ignazio > > > Il Mer 29 Apr 2020, 19:49 Sean Mooney ha scritto: > >> On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: >> > Many thanks. >> > Please keep in touch. >> here are the two patches. >> the first https://review.opendev.org/#/c/724386/ is the actual change to >> add the new config opition >> this needs a release note and some tests but it shoudl be functional >> hence the [WIP] >> i have not enable the workaround in any job in this patch so the ci run >> will assert this does not break >> anything in the default case >> >> the second patch is https://review.opendev.org/#/c/724387/ which enables >> the workaround in the multi node ci jobs >> and is testing that live migration exctra works when the workaround is >> enabled. >> >> this should work as it is what we expect to happen if you are using a >> moderne nova with an old neutron. >> its is marked [DNM] as i dont intend that patch to merge but if the >> workaround is useful we migth consider enableing >> it for one of the jobs to get ci coverage but not all of the jobs. >> >> i have not had time to deploy a 2 node env today but ill try and test >> this locally tomorow. >> >> >> >> > Ignazio >> > >> > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney < >> smooney at redhat.com> >> > ha scritto: >> > >> > > so bing pragmatic i think the simplest path forward given my other >> patches >> > > have not laned >> > > in almost 2 years is to quickly add a workaround config option to >> disable >> > > mulitple port bindign >> > > which we can backport and then we can try and work on the actual fix >> after. >> > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that >> shoudl >> > > serve as a workaround >> > > for thos that hav this issue but its a regression in functionality. >> > > >> > > i can create a patch that will do that in an hour or so and submit a >> > > followup DNM patch to enabel the >> > > workaound in one of the gate jobs that tests live migration. >> > > i have a meeting in 10 mins and need to finish the pacht im currently >> > > updating but ill submit a poc once that is done. >> > > >> > > im not sure if i will be able to spend time on the actul fix which i >> > > proposed last year but ill see what i can do. 
>> > > >> > > >> > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: >> > > > PS >> > > > I have testing environment on queens,rocky and stein and I can >> make test >> > > > as you need. >> > > > Ignazio >> > > > >> > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < >> > > > ignaziocassano at gmail.com> ha scritto: >> > > > >> > > > > Hello Sean, >> > > > > the following is the configuration on my compute nodes: >> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt >> > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 >> > > > > libvirt-libs-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 >> > > > > libvirt-client-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 >> > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 >> > > > > libvirt-python-4.5.0-1.el7.x86_64 >> > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 >> > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 >> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu >> > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 >> > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 >> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 >> > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch >> > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch >> > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 >> > > > > >> > > > > >> > > > > As far as firewall driver >> > > >> > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: >> > > > > >> > > > > firewall_driver = iptables_hybrid >> > > > > >> > > > > I have same libvirt/qemu version on queens, on rocky and on stein >> > > >> > > testing >> > > > > environment and the >> > > > > same firewall driver. >> > > > > Live migration on provider network on queens works fine. >> > > > > It does not work fine on rocky and stein (vm lost connection >> after it >> > > >> > > is >> > > > > migrated and start to respond only when the vm send a network >> packet , >> > > >> > > for >> > > > > example when chrony pools the time server). >> > > > > >> > > > > Ignazio >> > > > > >> > > > > >> > > > > >> > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < >> > > >> > > smooney at redhat.com> >> > > > > ha scritto: >> > > > > >> > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: >> > > > > > > Hello, some updated about this issue. >> > > > > > > I read someone has got same issue as reported here: >> > > > > > > >> > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 >> > > > > > > >> > > > > > > If you read the discussion, someone tells that the garp must >> be >> > > >> > > sent by >> > > > > > > qemu during live miration. 
>> > > > > > > If this is true, this means on rocky/stein the qemu/libvirt >> are >> > > >> > > bugged. >> > > > > > >> > > > > > it is not correct. >> > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve >> as >> > > >> > > its mac >> > > > > > learning frames >> > > > > > instead >> > > >> > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol >> > > > > > >> https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html >> > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 >> > > > > > >> https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html >> > > > > > but was fixed by >> > > > > > >> > > >> > > >> https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b >> > > > > > can you confirm you are not using the broken 2.6.0 release and >> are >> > > >> > > using >> > > > > > 2.7 or newer or 2.4 and older. >> > > > > > >> > > > > > >> > > > > > > So I tried to use stein and rocky with the same version of >> > > >> > > libvirt/qemu >> > > > > > > packages I installed on queens (I updated compute and >> controllers >> > > >> > > node >> > > > > > >> > > > > > on >> > > > > > > queens for obtaining same libvirt/qemu version deployed on >> rocky >> > > >> > > and >> > > > > > >> > > > > > stein). >> > > > > > > >> > > > > > > On queens live migration on provider network continues to work >> > > >> > > fine. >> > > > > > > On rocky and stein not, so I think the issue is related to >> > > >> > > openstack >> > > > > > > components . >> > > > > > >> > > > > > on queens we have only a singel prot binding and nova blindly >> assumes >> > > > > > that the port binding details wont >> > > > > > change when it does a live migration and does not update the >> xml for >> > > >> > > the >> > > > > > netwrok interfaces. >> > > > > > >> > > > > > the port binding is updated after the migration is complete in >> > > > > > post_livemigration >> > > > > > in rocky+ neutron optionally uses the multiple port bindings >> flow to >> > > > > > prebind the port to the destiatnion >> > > > > > so it can update the xml if needed and if post copy live >> migration is >> > > > > > enable it will asyconsly activate teh dest port >> > > > > > binding before post_livemigration shortenting the downtime. >> > > > > > >> > > > > > if you are using the iptables firewall os-vif will have >> precreated >> > > >> > > the >> > > > > > ovs port and intermediate linux bridge before the >> > > > > > migration started which will allow neutron to wire it up (put >> it on >> > > >> > > the >> > > > > > correct vlan and install security groups) before >> > > > > > the vm completes the migraton. >> > > > > > >> > > > > > if you are using the ovs firewall os-vif still precreates teh >> ovs >> > > >> > > port >> > > > > > but libvirt deletes it and recreats it too. >> > > > > > as a result there is a race when using openvswitch firewall >> that can >> > > > > > result in the RARP packets being lost. >> > > > > > >> > > > > > > >> > > > > > > Best Regards >> > > > > > > Ignazio Cassano >> > > > > > > >> > > > > > > >> > > > > > > >> > > > > > > >> > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < >> > > > > > >> > > > > > smooney at redhat.com> >> > > > > > > ha scritto: >> > > > > > > >> > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: >> > > > > > > > > Hello, I have this problem with rocky or newer with >> > > >> > > iptables_hybrid >> > > > > > > > > firewall. 
>> > > > > > > > > So, can I solve using post copy live migration ??? >> > > > > > > > >> > > > > > > > so this behavior has always been how nova worked but rocky >> the >> > > > > > > > >> > > > > > > > >> > > > > > >> > > > > > >> > > >> > > >> https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html >> > > > > > > > spec intoduced teh ablity to shorten the outage by pre >> biding the >> > > > > > >> > > > > > port and >> > > > > > > > activating it when >> > > > > > > > the vm is resumed on the destiation host before we get to >> pos >> > > >> > > live >> > > > > > >> > > > > > migrate. >> > > > > > > > >> > > > > > > > this reduces the outage time although i cant be fully >> elimiated >> > > >> > > as >> > > > > > >> > > > > > some >> > > > > > > > level of packet loss is >> > > > > > > > always expected when you live migrate. >> > > > > > > > >> > > > > > > > so yes enabliy post copy live migration should help but be >> aware >> > > >> > > that >> > > > > > >> > > > > > if a >> > > > > > > > network partion happens >> > > > > > > > during a post copy live migration the vm will crash and >> need to >> > > >> > > be >> > > > > > > > restarted. >> > > > > > > > it is generally safe to use and will imporve the migration >> > > >> > > performace >> > > > > > >> > > > > > but >> > > > > > > > unlike pre copy migration if >> > > > > > > > the guess resumes on the dest and the mempry page has not >> been >> > > >> > > copied >> > > > > > >> > > > > > yet >> > > > > > > > then it must wait for it to be copied >> > > > > > > > and retrive it form the souce host. if the connection too >> the >> > > >> > > souce >> > > > > > >> > > > > > host >> > > > > > > > is intrupted then the vm cant >> > > > > > > > do that and the migration will fail and the instance will >> crash. >> > > >> > > if >> > > > > > >> > > > > > you >> > > > > > > > are using precopy migration >> > > > > > > > if there is a network partaion during the migration the >> > > >> > > migration will >> > > > > > > > fail but the instance will continue >> > > > > > > > to run on the source host. >> > > > > > > > >> > > > > > > > so while i would still recommend using it, i it just good >> to be >> > > >> > > aware >> > > > > > >> > > > > > of >> > > > > > > > that behavior change. >> > > > > > > > >> > > > > > > > > Thanks >> > > > > > > > > Ignazio >> > > > > > > > > >> > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney >> ha >> > > > > > >> > > > > > scritto: >> > > > > > > > > >> > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano >> wrote: >> > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm >> migrate >> > > > > > >> > > > > > from one >> > > > > > > > >> > > > > > > > node >> > > > > > > > > > > to another I cannot ping it for several minutes. If >> in the >> > > >> > > vm I >> > > > > > >> > > > > > put a >> > > > > > > > > > > script that ping the gateway continously, the live >> > > >> > > migration >> > > > > > >> > > > > > works >> > > > > > > > >> > > > > > > > fine >> > > > > > > > > > >> > > > > > > > > > and >> > > > > > > > > > > I can ping it. Why this happens ? I read something >> about >> > > > > > >> > > > > > gratuitous >> > > > > > > > >> > > > > > > > arp. >> > > > > > > > > > >> > > > > > > > > > qemu does not use gratuitous arp but instead uses an >> older >> > > > > > >> > > > > > protocal >> > > > > > > > >> > > > > > > > called >> > > > > > > > > > RARP >> > > > > > > > > > to do mac address learning. 
>> > > > > > > > > > >> > > > > > > > > > what release of openstack are you using. and are you >> using >> > > > > > >> > > > > > iptables >> > > > > > > > > > firewall of openvswitch firewall. >> > > > > > > > > > >> > > > > > > > > > if you are using openvswtich there is is nothing we can >> do >> > > >> > > until >> > > > > > >> > > > > > we >> > > > > > > > > > finally delegate vif pluging to os-vif. >> > > > > > > > > > currently libvirt handels interface plugging for kernel >> ovs >> > > >> > > when >> > > > > > >> > > > > > using >> > > > > > > > >> > > > > > > > the >> > > > > > > > > > openvswitch firewall driver >> > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress >> that >> > > >> > > but it >> > > > > > >> > > > > > and >> > > > > > > > >> > > > > > > > the >> > > > > > > > > > neutron patch are >> > > > > > > > > > https://review.opendev.org/#/c/640258 rather out dated. >> > > >> > > while >> > > > > > >> > > > > > libvirt >> > > > > > > > >> > > > > > > > is >> > > > > > > > > > pluging the vif there will always be >> > > > > > > > > > a race condition where the RARP packets sent by qemu and >> > > >> > > then mac >> > > > > > > > >> > > > > > > > learning >> > > > > > > > > > packets will be lost. >> > > > > > > > > > >> > > > > > > > > > if you are using the iptables firewall and you have >> opnestack >> > > > > > >> > > > > > rock or >> > > > > > > > > > later then if you enable post copy live migration >> > > > > > > > > > it should reduce the downtime. in this conficution we >> do not >> > > >> > > have >> > > > > > >> > > > > > the >> > > > > > > > >> > > > > > > > race >> > > > > > > > > > betwen neutron and libvirt so the rarp >> > > > > > > > > > packets should not be lost. >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > > > Please, help me ? >> > > > > > > > > > > Any workaround , please ? >> > > > > > > > > > > >> > > > > > > > > > > Best Regards >> > > > > > > > > > > Ignazio >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > >> > > > > > > > >> > > > > > >> > > > > > >> > > >> > > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From yoshito.itou.dr at hco.ntt.co.jp Wed Mar 10 07:00:17 2021 From: yoshito.itou.dr at hco.ntt.co.jp (Yoshito Ito) Date: Wed, 10 Mar 2021 16:00:17 +0900 Subject: [heat-translator] Ask new release 2.2.1 to fix Zuul jobs Message-ID: <3c44044b-487b-0e6f-1889-358e5eafb74d@hco.ntt.co.jp_1> Hi heat-translator core members, I'd like you to review the following patches [1][2], and to ask if we can release 2.2.1 with these commits to fix our Zuul jobs. In release 2.2.0, our Zuul jobs are broken [3] because of new release of tosca-parser 2.3.0, which provides strict validation of required attributes in [4]. The patch [1] fix this issue by updating our wrong test samples. The other [2] is now blocked by this issue and was better to be merged in 2.2.0. I missed the 2.2.0 release patch [5] because none of us was added as reviewers. So after merging [1] and [2], I will submit a patch to make new 2.2.1 release. 
[1] https://review.opendev.org/c/openstack/heat-translator/+/779642 [2] https://review.opendev.org/c/openstack/heat-translator/+/778612 [3] https://bugs.launchpad.net/heat-translator/+bug/1918360 [4] https://opendev.org/openstack/tosca-parser/commit/00d3a394d5a3bc13ed7d2f1d71affd9ab71e4318 [5] https://review.opendev.org/c/openstack/releases/+/777964 Best regards, Yoshito Ito From stig.openstack at telfer.org Wed Mar 10 08:15:18 2021 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 10 Mar 2021 08:15:18 +0000 Subject: [scientific-sig] IRC meeting today - Jupyter notebook platforms Message-ID: <6E5BE5BC-3F4C-4BC5-A52F-9D65F4E4A99B@telfer.org> Hi All - We have a Scientific SIG meeting today at 1100 UTC in channel #openstack-meeting. Everyone is welcome. Today's agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_March_10th_2021 We'll be discussing experiences and best practices for providing JupyterHub and Jupyter notebook platforms on OpenStack. Cheers, Stig From lyarwood at redhat.com Wed Mar 10 09:46:26 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 10 Mar 2021 09:46:26 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <7488096.MhkbZ0Pkbq@whitebase.usersys.redhat.com> References: <3885896.2VHbPRQshP@whitebase.usersys.redhat.com> <7488096.MhkbZ0Pkbq@whitebase.usersys.redhat.com> Message-ID: On Tue, 9 Mar 2021 at 23:11, Luigi Toscano wrote: > On Wednesday, 10 March 2021 00:00:29 CET Sean Mooney wrote: > > On Tue, 2021-03-09 at 23:50 +0100, Luigi Toscano wrote: > > > On Tuesday, 9 March 2021 23:40:08 CET Lee Yarwood wrote: > > > > On Tue, 9 Mar 2021 at 22:24, Jeremy Stanley wrote: > > > > > On 2021-03-09 22:19:48 +0000 (+0000), Jeremy Stanley wrote: > > > > > > On 2021-03-09 22:18:12 +0000 (+0000), Jeremy Stanley wrote: > > > > > > > On 2021-03-09 22:08:18 +0000 (+0000), Lee Yarwood wrote: > > > > > > > > I reported the following bug last week but I've yet to get any > > > > > > > > real > > > > > > > > feedback after asking a few times in irc. > > > > > > > > > > > > > > > > Running parallel iSCSI/LVM c-vol backends is causing random > > > > > > > > failures > > > > > > > > in CI > > > > > > > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > > > > > > > > > > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the > > > > > > > > bug > > > > > > > > with Fedora 32 and lioadm I don't see the WWN conflict between > > > > > > > > the > > > > > > > > two > > > > > > > > backends. Does anyone know if using lioadm is an option on > > > > > > > > Focal? > > > > > > > > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-lio-is > > > > > > > csi-> > > > support.html seems to indicate that you just need to > > > > > > > set it in configuration. The package that document mentions looks > > > > > > > like a distro package > > > > > > > recommendation (does not exist under that name on PyPI) and the > > > > > > > equivalent lib on PyPI is included in cinder's requirements.txt > > > > > > > file, but I don't see either mentioned in the devstack source tree > > > > > > > so maybe that needs to be installed for DevStack-based jobs to > > > > > > > take > > > > > > > advantage of it. 
> > > > > > > > > > > > Oh, and this would be the package to add from Ubuntu Focal: > > > > > > > > > > > > https://packages.ubuntu.com/focal/python3-rtslib-fb > > > > > > > > > > Nevermind, DevStack installs the projects with pip, so the one in > > > > > cinder's requirements.txt should already be present. In that case, > > > > > yeah, just set it in the config? > > > > > > > > Yes correct, my question was more to see if there were any known > > > > issues with using lioadm on Focal. Anyway I've pushed the following > > > > WIP change for devstack to switch over to lioadm when using Focal: > > > > > > > > WIP cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu > > > > https://review.opendev.org/c/openstack/devstack/+/779624 > > > > > > For the record, we use the default (tgt) on the main cinder gates, as well > > > as a lioadm job (defined in cinder-tempest-plugin). If I read the code > > > correctly, that change would break tgt for everyone on ubuntu. > > > > > > Please raise this in the cinder meeting (Wednesday). > > > > i dont think we can wait that long im pretty sure this is causing this error > > form talking to lee earlier to day > > http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message: > > %5C%22Unable%20to%20detach%20the%20device%20from%20the%20live%20config%5C%22 > > %20AND%20loglevel:%20ERROR > > > > i guess it almost wednesday already but its startin to casue issues in > > multiple poject gates right before code freeze so we need to adress this > > before we end up rechecking things over and over on many patches. If M3 wasn't *tomorrow* I'd agree but I don't think anyone wants to change the default iSCSI target this close to feature freeze. I've also been unable to reproduce the above failure with a multi LVM/iSCSI + tgtadm setup so I'm not entirely confident about the switch to lioadm actually resolving that issue at the moment. > But then don't error out if someone tries to use tgtadm, which is what would > happen if that patch was merged (if I didn't misread the code). There's nothing stopping someone from declaring CINDER_ISCSI_HELPER=tgtadm to override the default so I'm not sure what you're suggesting. To that end I've posted the following to ensure the single host tgtadm jobs in the cinder-tempest-plugin use the correct target: Set CINDER_ISCSI_HELPER explicitly for tgtadm job https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/779697 If you know of anymore please let me know! Cheers, Lee From hberaud at redhat.com Wed Mar 10 09:52:41 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 10 Mar 2021 10:52:41 +0100 Subject: [release] Xena PTG Message-ID: Oï releasers, The PTG is fast approaching (Apr 19 - 23). To help to organize the gathering: 1) please fill the doodle[1] with the time slots that fit well for you; 2) please add your PTG topics in our etherpad[2]. Voting will be closed on the 21st of March. 
Thanks for your reading, [1] https://doodle.com/poll/8d8n2picqnhchhsv [2] https://etherpad.opendev.org/p/xena-ptg-os-relmgt -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Mar 10 10:23:48 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 10 Mar 2021 11:23:48 +0100 Subject: [neutron] Xena PTG Message-ID: <20210310102348.pcltupec4waqt6hp@p1.localdomain> Hi, If You plan to attend Neutron sessions during the Xena PTG, please fill in doodle [1] with the time slots which are good for You. Please do that before 19.03.2021 (next Friday) so I will have time to book slots which are the best for most of us. Please also add any topics You want to be discussed to the etherpad [2] [1] https://doodle.com/poll/cc2ste3emzw7ekrh?utm_source=poll&utm_medium=link [2] https://etherpad.opendev.org/p/neutron-xena-ptg -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tcr1br24 at gmail.com Wed Mar 10 08:11:06 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Wed, 10 Mar 2021 16:11:06 +0800 Subject: [ISSUE]Openstack create VNF instance error Message-ID: Dear Sir, This our testbed: Openstack (stein) + Opendaylight (neon) + ovs (v2.11.1) We have trouble when creating vnf on compute node using OpenStack CLI: #openstack vnf create vnfd1 vnf1 Here are the log message and ovs information.*COMPUTE NODE1:* nova-compute.log ============================= Attempting claim on node cmp001: memory 2048 MB, disk 15 GB, vcpus 1 CPU Total memory: 7976 MB, used: 6656.00 MB memory limit not specified, defaulting to unlimited Total disk: 96 GB, used: 45.00 GB disk limit not specified, defaulting to unlimited Total vcpu: 4 VCPU, used: 3.00 VCPU vcpu limit not specified, defaulting to unlimited Claim successful on node cmp001 Creating image ERROR vif_plug_ovs.ovsdb.impl_vsctl [req-a514ba2a-7ada-4a35-944e-0c861055112e eacae20dfeb9442eb8643bb8784b03d1 caee3c14634f42438ec8479eb49ba388 - default default] Unable to execute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp:127.0.0.1:6640', '--', '--may-exist', 'add-br', 'br-int', '--', 'set', 'Bridge', 'br-int', 'datapath_type=netdev']. Exception: Unexpected error while running command. 
Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --may-exist add-br br-int -- set Bridge br-int datapath_type=netdev Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. ERROR vif_plug_ovs.ovsdb.impl_vsctl Unable to execute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp:127.0.0.1:6640', '--', '--may-exist', 'add-port', 'br-int', 'vhu5e8b5246-cd', '--', 'set', 'Interface', 'vhu5e8b5246-cd', 'external_ids:iface-id=5e8b5246-cd05-4735-9692-29d51db6e72d', 'external_ids:iface-status=active', 'external_ids:attached-mac=fa:16:3e:8f:b1:a1', 'external_ids:vm-uuid=30675d24-bd74-409c-ba36-4c59a72283ec', 'type=dpdkvhostuser']. Exception: Unexpected error while running command. Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --may-exist add-port br-int vhu5e8b5246-cd -- set Interface vhu5e8b5246-cd external_ids:iface-id=5e8b5246-cd05-4735-9692-29d51db6e72d external_ids:iface-status=active external_ids:attached-mac=fa:16:3e:8f:b1:a1 external_ids:vm-uuid=30675d24-bd74-409c-ba36-4c59a72283ec type=dpdkvhostuser Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2021-03-09 06:30:36.594 3282 ERROR vif_plug_ovs.ovsdb.impl_vsctl [req-a514ba2a-7ada-4a35-944e-0c861055112e eacae20dfeb9442eb8643bb8784b03d1 caee3c14634f42438ec8479eb49ba388 - default default] Unable to execute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp:127.0.0.1:6640', '--', '--columns=mtu_request', 'list', 'Interface']. Exception: Unexpected error while running command. Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --columns=mtu_request list Interface Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2021-03-09 06:30:36.594 3282 ERROR os_vif [req-a514ba2a-7ada-4a35-944e-0c861055112e eacae20dfeb9442eb8643bb8784b03d1 caee3c14634f42438ec8479eb49ba388 - default default] Failed to plug vif VIFVHostUser(active=False,address=fa:16:3e:8f:b1:a1,has_traffic_filtering=False,id=5e8b5246-cd05-4735-9692-29d51db6e72d,mode='client',network=Network(6be19cbf-c6a7-4735-a0ee-42e1c5a664e2),path='/var/run/openvswitch/vhu5e8b5246-cd',plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='vhu5e8b5246-cd'): oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 
Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --columns=mtu_request list Interface Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n' ============================= ovs information: =========================== _uuid : 4a56f11f-f839-4032-ba85-2eec47260144 bridges : [1a600d85-9849-459a-8453-9a28eee63dc4, f21a1111-3c01-4988-99b7-ca5ab4a73051] cur_cfg : 653 datapath_types : [netdev, system] db_version : "7.16.1" dpdk_initialized : false dpdk_version : none external_ids : {hostname="cmp001", "odl_os_hostconfig_config_odl_l2"="{\"allowed_network_types\": [\"local\", \"flat\", \"vlan\", \"vxlan\", \"gre\"], \"bridge_mappings\": {}, \"datapath_type\": \"netdev\", \"supported_vnic_types\": [{\"vif_details\": {\"uuid\": \"4a56f11f-f839-4032-ba85-2eec47260144\", \"host_addresses\": [\"cmp001\"], \"has_datapath_type_netdev\": true, \"support_vhost_user\": true, \"port_prefix\": \"vhu\", \"vhostuser_socket_dir\": \"/var/run/openvswitch\", \"vhostuser_ovs_plug\": true, \"vhostuser_mode\": \"client\", \"vhostuser_socket\": \"/var/run/openvswitch/vhu$PORT_ID\"}, \"vif_type\": \"vhostuser\", \"vnic_type\": \"normal\"}]}", odl_os_hostconfig_hostid="cmp001", rundir="/var/run/openvswitch", system-id="ec007ae5-4b61-4f8e-af2b-b1b97413d20d"} iface_types : [erspan, geneve, gre, internal, "ip6erspan", "ip6gre", lisp, patch, stt, system, tap, vxlan] manager_options : [11b3ada9-d9b3-42bb-97fc-d254364acecd] next_cfg : 653 other_config : {local_ip="10.1.0.5", provider_mappings="physnet1:br-floating"} ovs_version : "2.11.1" ssl : [] statistics : {} system_type : ubuntu system_version : "18.04" =========================== # ovs-vsctl get bridge br-int datapath_type system =========================== *COMPUTE NODE 2:* nova-compute.log =========================== ERROR nova.compute.manager Failed to build and run instance: libvirt.libvirtError: internal error: process exited while connecting to monitor: qemu-system-x86_64: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhub701eeae-c0: Failed to connect socket /var/run/openvswitch/vhub701eeae-c0: No such file or directory Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2353, in _build_and_run_instance block_device_info=block_device_info) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3204, in spawn destroy_disks_on_failure=True) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 5724, in _create_domain_and_network destroy_disks_on_failure) File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ self.force_reraise() File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise six.reraise(self.type_, self.value, self.tb) File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise raise value File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 5693, in _create_domain_and_network post_xml_callback=post_xml_callback) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 5627, in _create_domain guest.launch(pause=pause) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 144, in launch self._encoded_xml, errors='ignore') File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ self.force_reraise() File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise six.reraise(self.type_, self.value, self.tb) 
File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise raise value File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 139, in launch return self._domain.createWithFlags(flags) File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 190, in doit result = proxy_call(self._autowrap, f, *args, **kwargs) File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 148, in proxy_call rv = execute(f, *args, **kwargs) File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 129, in execute six.reraise(c, e, tb) File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise raise value File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 83, in tworker rv = meth(*args, **kwargs) File "/usr/lib/python3/dist-packages/libvirt.py", line 1110, in createWithFlags if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) libvirt.libvirtError: internal error: process exited while connecting to monitor: 2021-03-10T07:59:04.064555Z qemu-system-x86_64: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhub701eeae-c0: Failed to connect socket /var/run/openvswitch/vhub701eeae-c0: No such file or directory INFO nova.compute.manager Took 0.68 seconds to deallocate network for instance. INFO nova.scheduler.client.report Deleted allocation for instance 308de941-1192-4dbd-8c4d-d7860139d6d4 =========================== ovs information: =========================== _uuid : b8f5b869-e794-4052-8896-3ee5d323a0b7 bridges : [5b6ab0c7-d8f1-43b0-9ab7-1e77280cd3ff, 70810190-6796-4500-bda4-dae856bcede1] cur_cfg : 1267 datapath_types : [netdev, system] db_version : "7.16.1" dpdk_initialized : false dpdk_version : none external_ids : {hostname="cmp002", "odl_os_hostconfig_config_odl_l2"="{\"allowed_network_types\": [\"local\", \"flat\", \"vlan\", \"vxlan\", \"gre\"], \"bridge_mappings\": {}, \"datapath_type\": \"netdev\", \"supported_vnic_types\": [{\"vif_details\": {\"uuid\": \"b8f5b869-e794-4052-8896-3ee5d323a0b7\", \"host_addresses\": [\"cmp002\"], \"has_datapath_type_netdev\": true, \"support_vhost_user\": true, \"port_prefix\": \"vhu\", \"vhostuser_socket_dir\": \"/var/run/openvswitch\", \"vhostuser_ovs_plug\": true, \"vhostuser_mode\": \"client\", \"vhostuser_socket\": \"/var/run/openvswitch/vhu$PORT_ID\"}, \"vif_type\": \"vhostuser\", \"vnic_type\": \"normal\"}]}", odl_os_hostconfig_hostid="cmp002", rundir="/var/run/openvswitch", system-id="dddc6d6c-0e7c-4813-97c9-7c72eaf9f46b"} iface_types : [erspan, geneve, gre, internal, "ip6erspan", "ip6gre", lisp, patch, stt, system, tap, vxlan] manager_options : [29b4ef4b-e2ba-43f3-aa42-b1bacaf9dd19, 75ae95f6-d251-4152-9e0d-5b65e5f7bb77] next_cfg : 1267 other_config : {local_ip="10.1.0.6", provider_mappings="physnet1:br-floating"} ovs_version : "2.11.1" ssl : [] statistics : {} system_type : ubuntu system_version : "18.04" =========================== # ovs-vsctl get bridge br-int datapath_type netdev =========================== We don't use DPDK on our Openvswitch. Can anyone give us some advice on this issue? Thanks for helping us. [image: Mailtrack] Sender notified by Mailtrack 03/10/21, 04:09:46 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Wed Mar 10 08:33:39 2021 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 10 Mar 2021 09:33:39 +0100 Subject: [ISSUE]Openstack create VNF instance ERROR In-Reply-To: References: Message-ID: Hi, Based on the screenshots ovsdb and libvirt seems to be unable to connect to ovsdb. 
I would check if configuration for ovsdb connection (like ovsdb_connection for neutron), and it is really possible to use those addresses to connect manually to ovsdb. Not sure if Openstack stein supported/tested together with neon ODL, I can't find the page where these are listed. Regards Lajos (lajoskatona) Jhen-Hao Yu ezt írta (időpont: 2021. márc. 9., K, 22:56): > Dear Sir, > > This our testbed: > Openstack (stein) + Opendaylight (neon) + ovs (v2.11.1) > > We have trouble when creating vnf on compute node using OpenStack CLI: > #openstack vnf create vnfd1 vnf1 > > Here are the log message and ovs information. > COMPUTE NODE1: > nova-compute.log > [image: error1.png] > [image: list1.png] > [image: vsctl1.png] > > COMPUTE NODE 2: > nova-compute.log > [image: error2.png] > [image: list2.png] > [image: vsctl2.png] > > We don't use DPDK on our Openvswitch. > Can anyone give us some advice on this issue? > > Thanks for helping us. > > [image: Mailtrack] > Sender > notified by > Mailtrack > 03/09/21, > 10:20:09 PM > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error1.png Type: image/png Size: 279362 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list1.png Type: image/png Size: 51268 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl1.png Type: image/png Size: 44054 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error2.png Type: image/png Size: 260121 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list2.png Type: image/png Size: 52846 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl2.png Type: image/png Size: 35301 bytes Desc: not available URL: From bekir.fajkovic at citynetwork.eu Wed Mar 10 12:04:58 2021 From: bekir.fajkovic at citynetwork.eu (Bekir Fajkovic) Date: Wed, 10 Mar 2021 13:04:58 +0100 Subject: Question regarding the production status of DB2 in Trove Message-ID: Hello! One question regarding the development status of DB2 inside Trove. I see that DB2 is still in experimental phase, when could it be expected for that datastore type to get production ready status? Best Regards. Bekir Fajkovic Senior DBA Mobile: +46 70 019 48 47 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- An HTML attachment was scrubbed... URL: From tcr1br24 at gmail.com Wed Mar 10 12:39:48 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Wed, 10 Mar 2021 20:39:48 +0800 Subject: [ISSUE]Openstack create VNF instance ERROR In-Reply-To: References: Message-ID: Hi, Lajos, We use OPNFV-9.0.0 to deploy Openstack, ODL and vswitch. ovsdb_connection in neutron.conf is set "tcp:127.0.0.1:6639" The vswitch created tapXXXXXXX for instance attach to it before but somehow create vhuXXXXXXX which fail to create the instance. Thank you. [image: Mailtrack] Sender notified by Mailtrack 03/10/21, 08:30:41 PM Lajos Katona 於 2021年3月10日 週三 下午4:33寫道: > Hi, > Based on the screenshots ovsdb and libvirt seems to be unable to connect > to ovsdb. 
> I would check if configuration for ovsdb connection (like ovsdb_connection > for neutron), and it is > really possible to use those addresses to connect manually to ovsdb. > Not sure if Openstack stein supported/tested together with neon ODL, I > can't find the page where > these are listed. > > Regards > Lajos (lajoskatona) > > Jhen-Hao Yu ezt írta (időpont: 2021. márc. 9., K, > 22:56): > >> Dear Sir, >> >> This our testbed: >> Openstack (stein) + Opendaylight (neon) + ovs (v2.11.1) >> >> We have trouble when creating vnf on compute node using OpenStack CLI: >> #openstack vnf create vnfd1 vnf1 >> >> Here are the log message and ovs information. >> COMPUTE NODE1: >> nova-compute.log >> [image: error1.png] >> [image: list1.png] >> [image: vsctl1.png] >> >> COMPUTE NODE 2: >> nova-compute.log >> [image: error2.png] >> [image: list2.png] >> [image: vsctl2.png] >> >> We don't use DPDK on our Openvswitch. >> Can anyone give us some advice on this issue? >> >> Thanks for helping us. >> >> [image: Mailtrack] >> Sender >> notified by >> Mailtrack >> 03/09/21, >> 10:20:09 PM >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error1.png Type: image/png Size: 279362 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list1.png Type: image/png Size: 51268 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl1.png Type: image/png Size: 44054 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error2.png Type: image/png Size: 260121 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: list2.png Type: image/png Size: 52846 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vsctl2.png Type: image/png Size: 35301 bytes Desc: not available URL: From christian.rohmann at inovex.de Wed Mar 10 14:21:27 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Wed, 10 Mar 2021 15:21:27 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: <095A8B69-9EDC-4AF7-90AC-D16B0C484361@gmail.com> References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de> <5C651C9C-0D00-4CB8-9992-4AC23D92FE38@gmail.com> <2a893395-8af6-5fdf-cf5f-303b8bb1394b@inovex.de> <095A8B69-9EDC-4AF7-90AC-D16B0C484361@gmail.com> Message-ID: <2ca7b032-dc3a-2648-a08c-1df8cc7c0542@inovex.de> Hey Artem, On 09/03/2021 14:23, Artem Goncharov wrote: > This is just a tiny subset of OpenStack resources with no possible flexibility vs native implementation in SDK/CLI I totally agree. I was just saying that by not having a cleanup approach provided by OpenStack there will just be more and more tools popping up, each solving the same issue ... and never being tested and updated along with OpenStack releases and new project / resources being added. So thanks again for your work on https://review.opendev.org/c/openstack/python-openstackclient/+/734485 ! 
Regards Christian From smooney at redhat.com Wed Mar 10 14:42:38 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 10 Mar 2021 14:42:38 +0000 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? In-Reply-To: <4299490.gzMO86gykG@p1> References: <2609049.HbdjPCY3gI@p1> <4299490.gzMO86gykG@p1> Message-ID: <41fa3457970dcd126798f90e163aebbcde3f14c0.camel@redhat.com> ok since there appear to be no stong objection form neutron we proably should deprecate the policy in nova then so we can either remove it next cycle or change its default value melanie are your plannign to propose a patch to do that? On Sat, 2021-03-06 at 17:00 +0100, Slawek Kaplonski wrote: > Hi, > > Dnia sobota, 6 marca 2021 13:53:02 CET Sean Mooney pisze: > > On Sat, 2021-03-06 at 08:37 +0100, Slawek Kaplonski wrote: > > > Hi, > > > > > > Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: > > > > Hello all, > > > > > > > > I'm seeking input from the neutron and nova teams regarding policy > > > > enforcement for allowing attachment to external networks. Details below. > > > > > > > > Recently we've been looking at an issue that was reported quite a long > > > > time ago (2017) [1] where we have a policy check in nova-compute that > > > > controls whether to allow users to attach an external network to their > > > > instances. > > > > > > > > This has historically been a pain point for operators as (1) it goes > > > > against convention of having policy checks in nova-api only and (2) > > > > setting the policy to anything other than the default requires deploying > > > > a policy file change to all of the compute hosts in the deployment. > > > > > > > > The launchpad bug report mentions neutron refactoring work that was > > > > happening at the time, which was thought might make the > > > > 'network:attach_external_network' policy check on the nova side > > > > redundant. > > > > > > > > Years have passed since then and customers are still running into this > > > > problem, so we are thinking, can this policy check be removed on the > > > > nova-compute side now? > > > > > > > > I did a local test with devstack to verify what the behavior is if we > > > > were to remove the 'network:attach_external_network' policy check > > > > entirely [2] and found that neutron appears to properly enforce > > > > permission to attach to external networks itself. It appears that the > > > > enforcement on the neutron side makes the nova policy check redundant. > > > > > > > > When I tried to boot an instance to attach to an external network, > > > > neutron API returned the following: > > > > > > > > INFO neutron.pecan_wsgi.hooks.translation > > > > [req-58fdb103-cd20-48c9-b73b-c9074061998c > > > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client > > > > error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create > > > > port on this network > > > > [...] > > > > INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c > > > > req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST > > > > /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 > > > > > > I just checked in Neutron code and we don't have any policy rule related > > > directly to the creation of ports on the external network. > > > Probably what You had there is the fact that Your router:external network > > > was owned by other tenant and due to that You wasn't able to create port > > > directly on it. 
If as an admin You would create external network which > > > would belong to Your tenant, You would be allowed to create port there. > > > > > > > Can anyone from the neutron team confirm whether it would be OK for us > > > > to remove our nova-compute policy check for external network attach > > > > permission and let neutron take care of the check? > > > > > > I don't know exactly the reasons why it is forbiden on Nova's side but TBH > > > I don't see any reason why we should forbid pluging instances directly to > > > the network marked as router:external=True. > > > > i have listed the majority of my consers in > > https://bugzilla.redhat.com/show_bug.cgi?id=1933047#c6 which is one of the > > downstream bug related to this. > > there are a number of issue sthat i was concerd about but tl;dr > > - booting ip form external network consumes ip from the floating ip subnet > > withtout using quota - by default neutron upstream and downstream is > > configured to provide nova metadata api access via the neutron router not > > the dhcp server so by default the metadata api will not work with external > > network. that would require neueton to be configre to use the dhcp server > > for metadta or config driver or else insance wont get ssh keys ingject by > > cloud init. > > - there might be security considertaions. typeically external networks are > > vlan or flat networks and in some cases operators may not want tenats to be > > able to boot on such networks expsially with vnic-type=driect-physical > > since that might allow them to violate tenant isolation if the top of rack > > switch was not configured by a heracical port binding driver to provide > > adiquite isolation in that case. this is not so much because this is an > > external network and more a concern anytime you do PF passtough but there > > may be other implication to allowing this by default. that said if neutron > > has a way to express policy in this regard nova does not have too. > > Those are all valid points, true. But TBH, if administrator created such > network as pool of FIPs for the user, then users will not be able to plug vms > directly to that network as they aren't owners of the network so neutron will > forbid that. > > > > > router:external=True is really used to mark a network as providing > > connectivity such that it can be used for the gateway port of neutron > > routers. the workaroud that i have come up with currently is to mark the > > network as shared and then use neturon rbac to only share it with the teant > > that owns it. > > > > i assigning external network to speficic tenat being useful when you want to > > provde a specific ip allocation pool to them or just a set of ips. i > > understand that the current motivation for this request is commign form > > some edge deployments. in general i dont thinkthis would be widely used but > > for those that need its better ux then marking it as shared. > > > > > > And on the nova side, I assume we would need a deprecation cycle before > > > > removing the 'network:attach_external_network' policy. If we can get > > > > confirmation from the neutron team, is anyone opposed to the idea of > > > > deprecating the 'network:attach_external_network' policy in the Wallaby > > > > cycle, to be removed in the Xena release? > > > > > > > > I would appreciate your thoughts. 
> > > > > > > > Cheers, > > > > -melanie > > > > > > > > [1] https://bugs.launchpad.net/nova/+bug/1675486 > > > > [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 > > From akekane at redhat.com Wed Mar 10 14:47:39 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 10 Mar 2021 20:17:39 +0530 Subject: [glance] Xena PTG (Apr 19 - 23) Message-ID: Hello All, Greetings!!! Xena PTG is announced and if you haven't already registered, please do so as soon as possible [1]. I have created a Virtual PTG planning etherpad [2]. Please add your PTG topic in the etherpad [2]. If you feel your topic needs cross project cooperation please note that in the etherpad which other teams are needed. I am planning to reserve time slots for the PTG between 1400 UTC to 1700 UTC. Please let me know if you have any concerns or suggestions with the given time slots. [1] https://www.openstack.org/ptg/ [2] https://etherpad.opendev.org/p/xena-ptg-glance-planning Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From foundjem at ieee.org Wed Mar 10 15:51:57 2021 From: foundjem at ieee.org (Armstrong Foundjem) Date: Wed, 10 Mar 2021 10:51:57 -0500 Subject: [release][heat][ironic][requirements][swift][OpenStackSDK] Cycle With Intermediary Unreleased Deliverables References: <8FD9C7CB-96FE-4970-B290-432ABCCC8FCF.ref@ieee.org> Message-ID: <8FD9C7CB-96FE-4970-B290-432ABCCC8FCF@ieee.org> Hello! Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Wallaby cycle: heat-agents ironic-prometheus-exporter ironic-ui ovn-octavia-provider python-openstackclient requirements swift Those should be released ASAP, and in all cases before 22 March, 2021, so that we have a release to include in the final Wallaby release. - Armstrong Foundjem (armstrong) -------------- next part -------------- An HTML attachment was scrubbed... URL: From moreira.belmiro.email.lists at gmail.com Wed Mar 10 17:06:41 2021 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Wed, 10 Mar 2021 18:06:41 +0100 Subject: [largescale-sig] Next meeting: March 10, 15utc In-Reply-To: References: Message-ID: Hi, we had the Large Scale SIG meeting today. Meeting logs are available at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-03-10-15.00.log.html We discussed topics for a new video meeting in 2 weeks. Details will be sent later. regards, Belmiro On Mon, Mar 8, 2021 at 12:43 PM Thierry Carrez wrote: > Hi everyone, > > Our next Large Scale SIG meeting will be this Wednesday in > #openstack-meeting-3 on IRC, at 15UTC. You can doublecheck how it > translates locally at: > > https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210310T15 > > Belmiro Moreira will chair this meeting. A number of topics have already > been added to the agenda, including discussing CentOS Stream, reflecting > on last video meeting and pick a topic for the next one. > > Feel free to add other topics to our agenda at: > https://etherpad.openstack.org/p/large-scale-sig-meeting > > Regards, > > -- > Thierry Carrez > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Wed Mar 10 17:36:44 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 10 Mar 2021 10:36:44 -0700 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Tue, Mar 9, 2021 at 5:13 PM Wesley Hayutin wrote: > > > On Mon, Mar 8, 2021 at 12:46 AM Marios Andreou wrote: > >> >> >> On Mon, Mar 8, 2021 at 1:27 AM Wesley Hayutin >> wrote: >> >>> >>> >>> On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy wrote: >>> >>>> Hello All, >>>> >>>> Just a reminder that we will be starting to implement steps to migrate >>>> from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. >>>> >>>> The plan is outlined in: >>>> https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday >>>> >>>> In summary, on Sunday, we plan to: >>>> - Move the master integration line for promotions to build containers >>>> and images on centos-8 stream nodes >>>> - Change the release files to bring down centos-8 stream repos for use >>>> in test jobs (test jobs will still start on centos-8 nodes - changing this >>>> nodeset will happen later) >>>> - Image build and container build check jobs will be moved to >>>> non-voting during this transition. >>>> >>> >>>> We have already run all the test jobs in RDO with centos-8 stream >>>> content running on centos-8 nodes to prequalify this transition. >>>> >>>> We will update this list with status as we go forward with next steps. >>>> >>>> Thanks! >>>> >>> >>> OK... status update. >>> >>> Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure >>> Monday wasn't a disaster upstream. TripleO master jobs have successfully >>> been migrated to CentOS-8-Stream today. You should see "8-stream" now in >>> /etc/yum.repos.d/tripleo-centos.* repos. >>> >>> >> >> \o/ this is fantastic! >> >> nice work all thanks to everyone involved for getting this done with >> minimal disruption >> >> tripleo-ci++ >> >> >> >> >> >>> Your CentOS-8-Stream Master hash is: >>> >>> edd46672cb9b7a661ecf061942d71a72 >>> >>> Your master repos are: >>> https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo >>> >>> Containers, and overcloud images should all be centos-8-stream. >>> >>> The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. >>> >>> Please open launchpad bugs w/ the "alert" tag if you are having any issues. >>> >>> Thanks and well done all! >>> >>> >>> >>>> >>>> >>>> >>>> >>> > OK.... stable/victoria will start to migrate this evening to > centos-8-stream > > We are looking to promote the following [1]. Again if you hit any issues, > please just file a launchpad bug w/ the "alert" tag. > > Thanks > > > [1] > https://trunk.rdoproject.org/api-centos8-victoria/api/civotes_agg_detail.html?ref_hash=457ea897ac3b7552b82c532adcea63f0 > > OK... stable/victoria is now on centos-8-stream. Holler via launchpad if you hit something... now we're working on stable/ussuri :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Mar 10 17:41:28 2021 From: melwittt at gmail.com (melanie witt) Date: Wed, 10 Mar 2021 09:41:28 -0800 Subject: [nova][neutron] Can we remove the 'network:attach_external_network' policy check from nova-compute? 
In-Reply-To: <41fa3457970dcd126798f90e163aebbcde3f14c0.camel@redhat.com> References: <2609049.HbdjPCY3gI@p1> <4299490.gzMO86gykG@p1> <41fa3457970dcd126798f90e163aebbcde3f14c0.camel@redhat.com> Message-ID: <5ebd4340-16b4-5e0c-cd35-f70bbf000cbc@gmail.com> On 3/10/21 06:42, Sean Mooney wrote: > ok since there appear to be no stong objection form neutron > we proably should deprecate the policy in nova then so we can either remove it next cycle > or change its default value melanie are your plannign to propose a patch to do that? Sorry I hadn't replied again yet but I wanted to check on the history of network:attach_external_network policy before going ahead. I found the commit that introduced restrictions on the nova side [3], the commit message reads: "Require admin context for interfaces on ext network Currently any user can attach an interface to a neutron external network, if the neutron plugin supports the port binding extension. In this case, nova will create neutron ports using the admin client, thus bypassing neutron authZ checks for creating ports on external networks. This patch adds a check in nova to verify the API request has an admin context when a request for an interface is made on a neutron external network." and that was converted into a policy check not long afterward [4]. The restriction was added to fix a bug [5] where users could get into a situation where they've been allowed to attach to an external network and then were unable to delete their instance later because the port create was done as admin and the port delete was done as the user. I'm not quite sure if years later, we're in the clear now with regard to the idea of removing this policy check. I will be looking through the code to see if I can figure out whether [5] could become a problem again or if things have changed in a way that it cannot be a problem. I'll go ahead and send this now while I look so that everyone can lend their thoughts on ^ in the meantime. Cheers, -melanie [3] https://github.com/openstack/nova/commit/7d1b4117fda7709307a35e56625cfa7709a6b795 [4] https://github.com/openstack/nova/commit/674954f731bf4b66356fadaa5baaeb58279c5832 [5] https://bugs.launchpad.net/nova/+bug/1284718 > On Sat, 2021-03-06 at 17:00 +0100, Slawek Kaplonski wrote: >> Hi, >> >> Dnia sobota, 6 marca 2021 13:53:02 CET Sean Mooney pisze: >>> On Sat, 2021-03-06 at 08:37 +0100, Slawek Kaplonski wrote: >>>> Hi, >>>> >>>> Dnia piątek, 5 marca 2021 17:26:19 CET melanie witt pisze: >>>>> Hello all, >>>>> >>>>> I'm seeking input from the neutron and nova teams regarding policy >>>>> enforcement for allowing attachment to external networks. Details below. >>>>> >>>>> Recently we've been looking at an issue that was reported quite a long >>>>> time ago (2017) [1] where we have a policy check in nova-compute that >>>>> controls whether to allow users to attach an external network to their >>>>> instances. >>>>> >>>>> This has historically been a pain point for operators as (1) it goes >>>>> against convention of having policy checks in nova-api only and (2) >>>>> setting the policy to anything other than the default requires deploying >>>>> a policy file change to all of the compute hosts in the deployment. >>>>> >>>>> The launchpad bug report mentions neutron refactoring work that was >>>>> happening at the time, which was thought might make the >>>>> 'network:attach_external_network' policy check on the nova side >>>>> redundant. 
>>>>> >>>>> Years have passed since then and customers are still running into this >>>>> problem, so we are thinking, can this policy check be removed on the >>>>> nova-compute side now? >>>>> >>>>> I did a local test with devstack to verify what the behavior is if we >>>>> were to remove the 'network:attach_external_network' policy check >>>>> entirely [2] and found that neutron appears to properly enforce >>>>> permission to attach to external networks itself. It appears that the >>>>> enforcement on the neutron side makes the nova policy check redundant. >>>>> >>>>> When I tried to boot an instance to attach to an external network, >>>>> neutron API returned the following: >>>>> >>>>> INFO neutron.pecan_wsgi.hooks.translation >>>>> [req-58fdb103-cd20-48c9-b73b-c9074061998c >>>>> req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] POST failed (client >>>>> error): Tenant 7c60976c662a414cb2661831ff41ee30 not allowed to create >>>>> port on this network >>>>> [...] >>>>> INFO neutron.wsgi [req-58fdb103-cd20-48c9-b73b-c9074061998c >>>>> req-4d68df7e-e0fd-4b1e-9b57-733731123d46 demo demo] 127.0.0.1 "POST >>>>> /v2.0/ports HTTP/1.1" status: 403 len: 360 time: 0.1582518 >>>> >>>> I just checked in Neutron code and we don't have any policy rule related >>>> directly to the creation of ports on the external network. >>>> Probably what You had there is the fact that Your router:external network >>>> was owned by other tenant and due to that You wasn't able to create port >>>> directly on it. If as an admin You would create external network which >>>> would belong to Your tenant, You would be allowed to create port there. >>>> >>>>> Can anyone from the neutron team confirm whether it would be OK for us >>>>> to remove our nova-compute policy check for external network attach >>>>> permission and let neutron take care of the check? >>>> >>>> I don't know exactly the reasons why it is forbiden on Nova's side but TBH >>>> I don't see any reason why we should forbid pluging instances directly to >>>> the network marked as router:external=True. >>> >>> i have listed the majority of my consers in >>> https://bugzilla.redhat.com/show_bug.cgi?id=1933047#c6 which is one of the >>> downstream bug related to this. >>> there are a number of issue sthat i was concerd about but tl;dr >>> - booting ip form external network consumes ip from the floating ip subnet >>> withtout using quota - by default neutron upstream and downstream is >>> configured to provide nova metadata api access via the neutron router not >>> the dhcp server so by default the metadata api will not work with external >>> network. that would require neueton to be configre to use the dhcp server >>> for metadta or config driver or else insance wont get ssh keys ingject by >>> cloud init. >>> - there might be security considertaions. typeically external networks are >>> vlan or flat networks and in some cases operators may not want tenats to be >>> able to boot on such networks expsially with vnic-type=driect-physical >>> since that might allow them to violate tenant isolation if the top of rack >>> switch was not configured by a heracical port binding driver to provide >>> adiquite isolation in that case. this is not so much because this is an >>> external network and more a concern anytime you do PF passtough but there >>> may be other implication to allowing this by default. that said if neutron >>> has a way to express policy in this regard nova does not have too. >> >> Those are all valid points, true. 
But TBH, if administrator created such >> network as pool of FIPs for the user, then users will not be able to plug vms >> directly to that network as they aren't owners of the network so neutron will >> forbid that. >> >>> >>> router:external=True is really used to mark a network as providing >>> connectivity such that it can be used for the gateway port of neutron >>> routers. the workaroud that i have come up with currently is to mark the >>> network as shared and then use neturon rbac to only share it with the teant >>> that owns it. >>> >>> i assigning external network to speficic tenat being useful when you want to >>> provde a specific ip allocation pool to them or just a set of ips. i >>> understand that the current motivation for this request is commign form >>> some edge deployments. in general i dont thinkthis would be widely used but >>> for those that need its better ux then marking it as shared. >>> >>>>> And on the nova side, I assume we would need a deprecation cycle before >>>>> removing the 'network:attach_external_network' policy. If we can get >>>>> confirmation from the neutron team, is anyone opposed to the idea of >>>>> deprecating the 'network:attach_external_network' policy in the Wallaby >>>>> cycle, to be removed in the Xena release? >>>>> >>>>> I would appreciate your thoughts. >>>>> >>>>> Cheers, >>>>> -melanie >>>>> >>>>> [1] https://bugs.launchpad.net/nova/+bug/1675486 >>>>> [2] https://bugs.launchpad.net/nova/+bug/1675486/comments/4 >> >> > > > From anlin.kong at gmail.com Wed Mar 10 19:51:53 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 11 Mar 2021 08:51:53 +1300 Subject: Question regarding the production status of DB2 in Trove In-Reply-To: References: Message-ID: Hi Bekir, DB2 datastore driver has not been maintained for a long time, to make it work: 1. We need some volunteer. 2. The drive needs to be refactored. 3. CI job needs to be added for the datastore. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Thu, Mar 11, 2021 at 3:22 AM Bekir Fajkovic < bekir.fajkovic at citynetwork.eu> wrote: > Hello! > > One question regarding the development status of DB2 inside Trove. I see > that DB2 is still in experimental > phase, when could it be expected for that datastore type to get production > ready status? > > Best Regards. > > *Bekir Fajkovic* > Senior DBA > Mobile: +46 70 019 48 47 > > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Mar 10 20:37:13 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 10 Mar 2021 15:37:13 -0500 Subject: [cinder] review priorities for the next few days Message-ID: <16bfaaa8-5761-eb83-efda-cf5030b10a49@gmail.com> Here's the list of cinder and driver features that haven't yet merged: https://etherpad.opendev.org/p/cinder-wallaby-features Please make reviewing these your top priorities. We'll get back to reviewing bug fixes next week. cheers, brian From dmendiza at redhat.com Wed Mar 10 21:47:55 2021 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Wed, 10 Mar 2021 15:47:55 -0600 Subject: [PTLs][All] vPTG April 2021 Team Signup In-Reply-To: References: Message-ID: <50af2372-3f3f-e549-f0e9-295b03b8b1e3@redhat.com> On 3/8/21 12:08 PM, Kendall Nelson wrote: > Greetings! 
> > As you hopefully already know, our next PTG will be virtual again, and > held from Monday, April 19 to Friday, April 23. We will have the same > schedule set up available as last time with three windows of time spread > across the day to cover all timezones with breaks in between. > > *To signup your team, you must complete **_BOTH_** the survey[1] AND > reserve time in the ethercalc[2] by March 25 at 7:00 UTC.* > > We ask that the PTL/SIG Chair/Team lead sign up for time to have their > discussions in with 4 rules/guidelines. > > 1. Cross project discussions (like SIGs or support project teams) should > be scheduled towards the start of the week so that any discussions that > might shape those of other teams happen first. > 2. No team should sign up for more than 4 hours per UTC day to help keep > participants actively engaged. > 3. No team should sign up for more than 16 hours across all time slots > to avoid burning out our contributors and to enable participation in > multiple teams discussions. > > Again, you need to fill out BOTH the ethercalc AND the survey to > complete your team's sign up. > > If you have any issues with signing up your team, due to conflict or > otherwise, please let me know! While we are trying to empower you to > make your own decisions as to when you meet and for how long (after all, > you know your needs and teams timezones better than we do), we are here > to help! > > Once your team is signed up, please register! And remind your team to > register! Registration is free, but since it will be how we contact you > with passwords, event details, etc. it is still important! > > Continue to check back for updates at openstack.org/ptg > . > > -the Kendalls (diablo_rojo & wendallkaters) > > > [1] Team Survey: > https://openinfrafoundation.formstack.com/forms/april2021_vptg_survey > > [2] Ethercalc Signup: https://ethercalc.net/oz7q0gds9zfi > > [3] PTG Registration: https://april2021-ptg.eventbrite.com > > Is the Ethercalc link correct? I'm getting 502 Bad Gateway errors trying to load it in my browser. - Douglas Mendizábal (redrobot) From fungi at yuggoth.org Wed Mar 10 22:02:08 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Mar 2021 22:02:08 +0000 Subject: [PTLs][All] vPTG April 2021 Team Signup In-Reply-To: <50af2372-3f3f-e549-f0e9-295b03b8b1e3@redhat.com> References: <50af2372-3f3f-e549-f0e9-295b03b8b1e3@redhat.com> Message-ID: <20210310220208.566ffhqtowb3ena6@yuggoth.org> On 2021-03-10 15:47:55 -0600 (-0600), Douglas Mendizabal wrote: [...] > Is the Ethercalc link correct? I'm getting 502 Bad Gateway errors > trying to load it in my browser. I got the same error briefly too when I tried it, but then reloaded a few minutes later and it came up fine. May have been a transient problem now corrected, or could be something iffy with a load balancer or... -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gagehugo at gmail.com Wed Mar 10 22:07:52 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Wed, 10 Mar 2021 16:07:52 -0600 Subject: [security][security-sig] No meeting tomorrow March 11th Message-ID: The security sig meeting tomorrow has been cancelled. We will meet again at the regular time next week. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shengqin922 at 163.com Thu Mar 11 02:40:59 2021 From: shengqin922 at 163.com (ZTE) Date: Thu, 11 Mar 2021 10:40:59 +0800 (CST) Subject: [election][zun] PTL candidacy for Xena Message-ID: <7b159b78.ff6.1781f289f98.Coremail.shengqin922@163.com> Hi! I'd like to propose my candidacy and continue serving as Zun PTL in the Xena cycle. Over the Wallaby release, the Zun team continues to keep the Zun stable and reliable, and also some great works have been done. My goals for the Xena cycle are to continue the progress made in the following areas: * zun-conductor: Add a service called zun-conductor * CRI: Use the CRI interface to get inventory when cri driver configed for Capsule. * Affinity&anti-affinity: Add the affinity and anti-affinity policies to zun. Thank you for taking the time to consider me for Xena PTL. Best regards, Shengqin Feng -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasemin.demiral at tubitak.gov.tr Thu Mar 11 08:14:57 2021 From: yasemin.demiral at tubitak.gov.tr (Yasemin =?utf-8?Q?DEM=C4=B0RAL_=28B=C4=B0LGEM_BTE=29?=) Date: Thu, 11 Mar 2021 11:14:57 +0300 (EET) Subject: [trove] trove-agent can't connect postgresql container In-Reply-To: References: Message-ID: <565885711.55511185.1615450497419.JavaMail.zimbra@tubitak.gov.tr> Hi, I work on postgresql 12.4 datastore at OpenStack Victoria. Postgresql can't create user and database automatically with trove, but when i created database and user manually, i can connect to psql. I think postgresql container can't communicate the trove agent. How can I fix this? Thank you Yasemin DEMİRAL Araştırmacı Bulut Bilişim ve Büyük Veri Araştırma Lab. B ilişim Teknolojileri Enstitüsü TÜBİTAK BİLGEM 41470 Gebze, KOCAELİ T +90 262 675 2417 F +90 262 646 3187 [ http://bilgem.tubitak.gov.tr/ | www.bilgem.tubitak.gov.tr ] [ mailto:yasemin.demiral at tubitak.gov.tr | yasemin.demiral at tubitak.gov.tr ] [ mailto:ozgur.gun at tubitak.gov.tr | ................................................................ ] [ http://www.tubitak.gov.tr/tr/icerik-sorumluluk-reddi | Sorumluluk Reddi ] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bilgem.jpg Type: image/jpeg Size: 3031 bytes Desc: not available URL: From marios at redhat.com Thu Mar 11 10:12:52 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 11 Mar 2021 12:12:52 +0200 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> Message-ID: On Tue, Mar 9, 2021 at 7:20 PM Jeremy Stanley wrote: > On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: > > We got two similar release processing failures: > [...] > > Since this "host key verification failed" error hit tripleo-ipsec > > three times in a row, I suspect we have something stuck (or we > > always hit the same cache/worker). > > The errors happened when run on nodes in different providers. > Looking at the build logs, I also notice that they fail for the > stable/rocky branch but succeeded for other branches of the same > repository. They're specifically erroring when trying to reach the > Gerrit server over SSH, the connection details for which are encoded > on the .gitreview file in each branch. 
This leads me to wonder > whether there's something about the fact that the stable/rocky > branch of tripleo-ipsec is still using the old review.openstack.org > hostname to push, and maybe we're not pre-seeding an appropriate > hostkey entry for that in the known_hosts file? > Hi Thierry, Jeremy is there something tripleo can do to help here? Should I update that https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 to use opendev.org? Though I know Elod would have words with me if I did ;) as we declared it EOL @ https://review.opendev.org/c/openstack/releases/+/779218 regards, marios > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Mar 11 10:32:16 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 11:32:16 +0100 Subject: [release][oslo][monasca][keystone] Moving projects to independent Message-ID: Hello teams, We are about to move a couple of projects under the independent release models [1]. Those did not have any change merged over the cycle, we think that it is the right time to transition them to independent. Oslo: - futurist - debtcollector - osprofiler Keystone: - pycadf Monasca: - monasca-statsd Before we push the button we want to give ourselves a last chance to raise the hand if you think that it's an issue. Concerning Oslo's projects we already had a related discussion a few months ago and all participants agreed with that [2]. Please let us know ASAP if you disagree with that choice. Thanks for your attention. [1] https://review.opendev.org/q/topic:%22move-to-independent%22+(status:open%20OR%20status:merged) [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018527.html -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Mar 11 11:39:26 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 11 Mar 2021 12:39:26 +0100 Subject: [Release-job-failures] Release of openstack/monasca-grafana-datasource for ref refs/tags/1.3.0 failed In-Reply-To: References: Message-ID: We had a release job failure during the processing of the tag event when 1.3.0 was (successfully) pushed to openstack/monasca-grafana-datasource. 
Tags on this repository trigger the release-openstack-javascript job, which failed during pre playbook when trying to run yarn --version with the following error: /usr/share/yarn/lib/cli.js:46100 let { ^ SyntaxError: Unexpected token { at exports.runInThisContext (vm.js:53:16) at Module._compile (module.js:373:25) at Object.Module._extensions..js (module.js:416:10) at Module.load (module.js:343:32) at Function.Module._load (module.js:300:12) at Module.require (module.js:353:17) at require (internal/module.js:12:17) at Object. (/usr/share/yarn/bin/yarn.js:24:13) at Module._compile (module.js:409:26) at Object.Module._extensions..js (module.js:416:10) See https://zuul.opendev.org/t/openstack/build /cdffd2a26a0d4a5b8137edb392fa5971 This prevented the job from running (likely resulting in nothing being uploaded to NPM? Not a JS job specialist), which in turn prevented announce-release job from announcing it. -- Thierry Carrez (ttx) From thierry at openstack.org Thu Mar 11 11:42:37 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 11 Mar 2021 12:42:37 +0100 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> Message-ID: Marios Andreou wrote: > > > On Tue, Mar 9, 2021 at 7:20 PM Jeremy Stanley > wrote: > > On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: > > We got two similar release processing failures: > [...] > > Since this "host key verification failed" error hit tripleo-ipsec > > three times in a row, I suspect we have something stuck (or we > > always hit the same cache/worker). > > The errors happened when run on nodes in different providers. > Looking at the build logs, I also notice that they fail for the > stable/rocky branch but succeeded for other branches of the same > repository. They're specifically erroring when trying to reach the > Gerrit server over SSH, the connection details for which are encoded > on the .gitreview file in each branch. This leads me to wonder > whether there's something about the fact that the stable/rocky > branch of tripleo-ipsec is still using the old review.openstack.org > > hostname to push, and maybe we're not pre-seeding an appropriate > hostkey entry for that in the known_hosts file? > > > Hi Thierry, Jeremy > > is there something tripleo can do to help here? > > Should I update that > https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 > > to use opendev.org ? Though I know Elod would have > words with me if I did ;) as we declared it EOL @ > https://review.opendev.org/c/openstack/releases/+/779218 > I'm not sure... We'll discuss it during the release meeting today. -- Thierry From hberaud at redhat.com Thu Mar 11 12:17:15 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 13:17:15 +0100 Subject: [release][OpenstackSDK][masakari] request FFE on OpenstackSDK for python-masakariclient Message-ID: Hello folks, As discussed this morning on #openstack-release the masakari team needs OpenstackSDK changes that haven't been released. They have been merged after the final release [1]. Those aren't landed in OpenstackSDK version 0.54.0 [2]. 
Here is the delta of changes merged between 0.54.0 and 736f3aa1 ( https://review.opendev.org/c/openstack/openstacksdk/+/777299): $ git log --no-merges --online 0.54.0..736f3aa16c7ee95eb5a42f28062305c83cfd05e1 736f3aa1 add masakari enabled to segment Only these changes are in the delta. Do you mind if we start releasing a new version 0.55.0 to land these changes and unlock the path of the masakari team? Client library deadline is today. Also this topic opens the door for another discussion. Indeed it's questionable that openstacksdk should follow the early library deadline, for that precise reason. We release python-*client the same time as parent projects so that late features can be taken into account. It seems appropriate that openstacksdk is in the same bucket All those safeguards are a lot less needed now that we move slower and break less things. That could be translated by moving OpenstackSDK from `type: library` [3] to `type: client-library` [4]. Let us know what you think about the FFE. Let's open the discussions about the type shifting. [1] https://review.opendev.org/c/openstack/openstacksdk/+/777299 [2] https://opendev.org/openstack/releases/commit/b87ad4371321d5b0dde7ed0b236238585d5b74b3 [3] https://releases.openstack.org/reference/deliverable_types.html#library [4] https://releases.openstack.org/reference/deliverable_types.html#client-library -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Thu Mar 11 13:21:52 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 11 Mar 2021 14:21:52 +0100 Subject: [release][OpenstackSDK][masakari] request FFE on OpenstackSDK for python-masakariclient In-Reply-To: References: Message-ID: <327D3543-7F8B-4EEB-9D72-FF2EC66C72E1@gmail.com> Hi all, I have no problems with releasing another SDK version now. Currently 2 another mandatory changes landed in OSC, so it can be released if necessary as well. As a food for the discussion - SDK is not only client lib, but also a lib used in the core components (afaik Nova). So just changing label is not necessarily helping (correct me if I misinterpret the meanings of lib/client-lib) Regards, Artem > On 11. Mar 2021, at 13:17, Herve Beraud wrote: > > Hello folks, > > As discussed this morning on #openstack-release the masakari team needs OpenstackSDK changes that haven't been released. They have been merged after the final release [1]. Those aren't landed in OpenstackSDK version 0.54.0 [2]. 
> > Here is the delta of changes merged between 0.54.0 and 736f3aa1 (https://review.opendev.org/c/openstack/openstacksdk/+/777299 ): > > $ git log --no-merges --online 0.54.0..736f3aa16c7ee95eb5a42f28062305c83cfd05e1 > 736f3aa1 add masakari enabled to segment > > Only these changes are in the delta. > > Do you mind if we start releasing a new version 0.55.0 to land these changes and unlock the path of the masakari team? > > Client library deadline is today. > > Also this topic opens the door for another discussion. Indeed it's questionable that openstacksdk should follow the early library deadline, for that precise reason. We release python-*client the same time as parent projects so that late features can be taken into account. It seems appropriate that openstacksdk is in the same bucket > > All those safeguards are a lot less needed now that we move slower and break less things. > > That could be translated by moving OpenstackSDK from `type: library` [3] to `type: client-library` [4]. > > Let us know what you think about the FFE. > Let's open the discussions about the type shifting. > > [1] https://review.opendev.org/c/openstack/openstacksdk/+/777299 > [2] https://opendev.org/openstack/releases/commit/b87ad4371321d5b0dde7ed0b236238585d5b74b3 > [3] https://releases.openstack.org/reference/deliverable_types.html#library > [4] https://releases.openstack.org/reference/deliverable_types.html#client-library > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Thu Mar 11 13:21:59 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Thu, 11 Mar 2021 14:21:59 +0100 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: References: <721b4405-b19f-5433-feff-d595442ce6e4@gmail.com> <1924f447-2199-dfa4-4a70-575cda438107@inovex.de> <63f1d37c-f7fd-15b1-5f71-74e26c44ea94@inovex.de> Message-ID: <921d1e19-02ff-d26d-1b00-902da6b07e95@inovex.de> Hey Brian, On 25/02/2021 19:05, Brian Rosmaita wrote: >> Please kindly let me know if there is anything required to get this >> merged. > > We have to release wallaby os-brick next week, so highest priority > right now are os-brick reviews, but we'll get you some feedback on > your patch as soon as we can. There is one +1 from Sofia on the change now. Let me know if there is anything else missing or needs changing. Is this something that still could go into Wallaby BTW? Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... 
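(Side note for anyone following the fast-diff change: fast-diff is a per-image RBD feature, so a quick way to verify that the backup images in your cluster actually have it enabled, assuming the default "backups" pool used by cinder-backup, is something like:

  rbd -p backups ls
  rbd info backups/<backup-image-name> | grep features
  rbd du backups/<backup-image-name>

rbd du should return quickly when object-map/fast-diff are enabled on the image.)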
URL: From elod.illes at est.tech Thu Mar 11 13:22:37 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 11 Mar 2021 14:22:37 +0100 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> Message-ID: <4fb951fa-0b94-dac2-97db-6f799c3986ce@est.tech> For the meeting, in advance, my opinion: What Jeremy wrote looks promising. I mean it might be that the root cause is the wrong hostname/hostkey. I think we have two options: 1. fix the hostkey in the known_hosts file (though I don't know how we could do this o:)) 2. technically the tagging has not happened yet (as far as I see), so we ask Marios (:)) to update the .gitreview and then we give another round for the 'remove' and 'readd' rocky-eol tag in tripleo-ipsec.yaml. We can discuss this further @ the meeting today. Cheers, Előd On 2021. 03. 11. 12:42, Thierry Carrez wrote: > Marios Andreou wrote: >> >> >> On Tue, Mar 9, 2021 at 7:20 PM Jeremy Stanley > > wrote: >> >>     On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: >>      > We got two similar release processing failures: >>     [...] >>      > Since this "host key verification failed" error hit tripleo-ipsec >>      > three times in a row, I suspect we have something stuck (or we >>      > always hit the same cache/worker). >> >>     The errors happened when run on nodes in different providers. >>     Looking at the build logs, I also notice that they fail for the >>     stable/rocky branch but succeeded for other branches of the same >>     repository. They're specifically erroring when trying to reach the >>     Gerrit server over SSH, the connection details for which are encoded >>     on the .gitreview file in each branch. This leads me to wonder >>     whether there's something about the fact that the stable/rocky >>     branch of tripleo-ipsec is still using the old review.openstack.org >>     >>     hostname to push, and maybe we're not pre-seeding an appropriate >>     hostkey entry for that in the known_hosts file? >> >> >> Hi Thierry, Jeremy >> >> is there something tripleo can do to help here? >> >> Should I update that >> https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 >> >> to use opendev.org ? Though I know Elod would >> have words with me if I did ;) as we declared it EOL @ >> https://review.opendev.org/c/openstack/releases/+/779218 >> > > I'm not sure... We'll discuss it during the release meeting today. > From midhunlaln66 at gmail.com Thu Mar 11 13:24:31 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Thu, 11 Mar 2021 18:54:31 +0530 Subject: Windows vm is not launching from horizon Message-ID: Hi all, I successfully installed the openstack rocky version and everything is working properly. ---->Uploaded all images to openstack and I can see all images listed through openstack CLI. 
+--------------------------------------+---------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------+--------+ | 1912a80a-1139-4e00-b0d9-e98dc801e54b | CentOS-7-x86_64 active | | 595b928f-8a32-4107-b6e8-fea294d3f7f1 | CentOS-8-x86_64 active| | f531869a-e0ac-4690-8f79-4ff64d284576 | Ubuntu-18.04-x86_64 active| | 7860193b-81bd-4a22-846c-ae3a89117f9d | Ubuntu-20.04-x86_64 active | 1c3a9878-6db3-441e-9a85-73d3bc4a87a7 | cirros active | d321f3a3-08a3-47bf-8f53-6aa46a6a4b52 | windows10 active --->same images I can see through horizon dashboard,but i am not able to launch windows VM from horizon dashboard .Balance all images launching from dashboard. Getting below error; Error: Failed to perform requested operation on instance "windows", the instance has an error status: Please try again later [Error: Build of instance 15a3047d-dfca-4615-9d34-c46764b703ff aborted: Volume 5ca83635-a2fe-458c-9053-cb7a9804156b did not finish being created even after we waited 190 seconds or 61 attempts. And its status is downloading.]. --->I am able to launch windows VM from openstack CLI What is the issue ?why I am not able to launch from dashboard?please help me. Thanks & Regards Midhunlal N B +918921245637 -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Mar 11 13:51:46 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 11 Mar 2021 13:51:46 +0000 Subject: Windows vm is not launching from horizon In-Reply-To: References: Message-ID: <5c4b80a2e9c6daeb3e9f5c49b7533dac04bc2106.camel@redhat.com> On Thu, 2021-03-11 at 18:54 +0530, Midhunlal Nb wrote: > Hi all, > I successfully installed the openstack rocky version and everything is > working properly. > > ---->Uploaded all images to openstack and I can see all images listed > through openstack CLI. > +--------------------------------------+---------------------+--------+ > > ID | > Name | Status | > +--------------------------------------+---------------------+--------+ > > 1912a80a-1139-4e00-b0d9-e98dc801e54b | CentOS-7-x86_64 active | > > 595b928f-8a32-4107-b6e8-fea294d3f7f1 | CentOS-8-x86_64 > active| > > f531869a-e0ac-4690-8f79-4ff64d284576 | Ubuntu-18.04-x86_64 >  active| > > 7860193b-81bd-4a22-846c-ae3a89117f9d | Ubuntu-20.04-x86_64 active > > 1c3a9878-6db3-441e-9a85-73d3bc4a87a7 | cirros >      active > > d321f3a3-08a3-47bf-8f53-6aa46a6a4b52 | windows10 >   active > > --->same images I can see through horizon dashboard,but i am not able to > launch windows VM from horizon dashboard .Balance all images launching from > dashboard. > > Getting below error; > Error: Failed to perform requested operation on instance "windows", the > instance has an error status: Please try again later [Error: Build of > instance 15a3047d-dfca-4615-9d34-c46764b703ff aborted: Volume > 5ca83635-a2fe-458c-9053-cb7a9804156b did not finish being created even > after we waited 190 seconds or 61 attempts. And its status is downloading.]. > > --->I am able to launch windows VM from openstack CLI > > What is the issue ?why I am not able to launch from dashboard?please help your cinder sotrage is likel not that fast and the windows image is much larger then the other images so its takeing longer then our default timeout to create the volume. im assuming your using somthing like the cinder LVM backend. 
try adding  [DEFAULT] block_device_allocate_retries_interval=10 to your nova.conf or nova-cpu.conf if this is a devstack install if you look in the cinder driver log you will likely see its still creating the volume form the image and if you use htop/ps on the host with the cinder volume driver you will see the qemu-img command running. other ways to work around this is to confgiure glance to use cider for image storage that will allow the cinder backend to create a volumn snapshot for the new volume instaed of making a full copy. over all that is much more effeicnt and works similar to when you use ceph for both glance and cinder. > me. > > Thanks & Regards > Midhunlal N B > +918921245637 From kira034 at 163.com Thu Mar 11 14:19:21 2021 From: kira034 at 163.com (Hongbin Lu) Date: Thu, 11 Mar 2021 22:19:21 +0800 (CST) Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations March 2021 End In-Reply-To: References: Message-ID: <164d5ea5.6b38.17821a7ffd3.Coremail.kira034@163.com> Hi, FYI. Zun has a candidancy for Xena: http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020998.html . Shengqin will continue to serve as Zun PTL. Best regards, Hongbin At 2021-03-10 07:52:01, "Kendall Nelson" wrote: Hello! The PTL and TC Nomination period is now over. The official candidate lists for PTLs [0] and TC seats [1] are available on the election website. -- PTL Election Details -- There are 8 projects without candidates, so according to this resolution[2], the TC will have to decide how the following projects will proceed: Barbican, Cyborg, Keystone, Mistral, Monasca, Senlin, Zaqar, Zun -- TC Election Details -- There are 0 projects that will have elections. Now begins the campaigning period where candidates and electorate may debate their statements. Polling will start Mar 12, 2021 23:45 UTC. Thank you, -Kendall Nelson (diablo_rojo) & the Election Officials [0] https://governance.openstack.org/election/#xena-ptl-candidates [1] https://governance.openstack.org/election/#xena-tc-candidates [2] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Mar 11 14:25:17 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 11 Mar 2021 15:25:17 +0100 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> <68d4b804-5729-e313-7f29-6c7b14166c5c@nemebean.com> <176d8d1747f.e25ad718873974.5391430581306589135@ghanshyammann.com> Message-ID: On Mon, 11 Jan 2021 at 18:09, Ben Nemec wrote: > On 1/6/21 3:23 PM, Pierre Riteau wrote: > > On Wed, 6 Jan 2021 at 18:58, Ghanshyam Mann wrote: > >> > >> ---- On Wed, 06 Jan 2021 10:34:35 -0600 Ben Nemec wrote ---- > >> > > >> > > >> > On 1/5/21 3:51 PM, Jeremy Stanley wrote: > >> > > On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: > >> > >> There have been many patches submitted to drop the Python 3.7 > >> > >> classifier from setup.cfg: > >> > >> https://review.opendev.org/q/%2522remove+py37%2522 > >> > >> The justification is that Wallaby tested runtimes only include 3.6 and 3.8. 
> >> > >> > >> > >> Most projects are merging these patches, but I've seen a couple of > >> > >> objections from ironic and horizon: > >> > >> > >> > >> - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 > >> > >> - https://review.opendev.org/c/openstack/horizon/+/769237 > >> > >> > >> > >> What are the thoughts of the TC and of the overall community on this? > >> > >> Should we really drop these classifiers when there are no > >> > >> corresponding CI jobs, even though more Python versions may well be > >> > >> supported? > >> > > > >> > > My recollection of the many discussions we held was that the runtime > >> > > document would recommend the default python3 available in our > >> > > targeted platforms, but that we would also make a best effort to > >> > > test with the latest python3 available to us at the start of the > >> > > cycle as well. It was suggested more than once that we should test > >> > > all minor versions in between, but this was ruled out based on the > >> > > additional CI resources it would consume for minimal gain. Instead > >> > > we deemed that testing our target version and the latest available > >> > > would give us sufficient confidence that, if those worked, the > >> > > versions in between them were likely fine as well. Based on that, I > >> > > think the versions projects claim to work with should be contiguous > >> > > ranges, not contiguous lists of the exact versions tested (noting > >> > > that those aren't particularly *exact* versions to begin with). > >> > > > >> > > Apologies for the lack of references to old discussions, I can > >> > > probably dig some up from the ML and TC meetings several years back > >> > > of folks think it will help inform this further. > >> > > > >> > > >> > For what little it's worth, that jives with my hazy memories of the > >> > discussion too. The assumption was that if we tested the upper and lower > >> > bounds of our Python versions then the ones in the middle would be > >> > unlikely to break. It was a compromise to support multiple versions of > >> > Python without spending a ton of testing resources on it. > >> > >> > >> Exactly, py3.7 is not broken for OpenStack so declaring it not supported is not the right thing. > >> I remember the discussion when we declared the wallaby (probably from Victoria) testing runtime, > >> we decided if we test py3.6 and py3.8 it means we are not going to break py3.7 support so indirectly > >> it is tested and supported. > >> > >> And testing runtime does not mean we have to drop everything else testing means projects are all > >> welcome to keep running the py3.7 testing job on the gate there is no harm in that. > >> > >> In both cases, either project has an explicit py3.7 job or not we should not remove it from classifiers. > >> > >> > >> -gmann > > > > Thanks everyone for your input. Then should we request that those > > patches dropping the 3.7 classifier are abandoned, or reverted if > > already merged? > > > > That would be my takeaway from this discussion, yes. I saw that many projects which had merged the "remove py37" patches (e.g. Masakari, Vitrage) have now reverted them, thanks! Skimming through Gerrit, I noticed that Cyborg hasn't merged the revert commits: - https://review.opendev.org/c/openstack/cyborg/+/770719 - https://review.opendev.org/c/openstack/python-cyborgclient/+/770911 From mbultel at redhat.com Thu Mar 11 14:38:40 2021 From: mbultel at redhat.com (Mathieu Bultel) Date: Thu, 11 Mar 2021 15:38:40 +0100 Subject: [tripleo] Nominate David J. 
Peacock (dpeacock) for Validation Framework Core In-Reply-To: References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> Message-ID: Hey, Thank you Gael, +2 obviously :) Mathieu On Tue, Mar 9, 2021 at 5:06 PM Marios Andreou wrote: > > > On Tue, Mar 9, 2021 at 5:52 PM Gaël Chamoulaud > wrote: > >> On 09/Mar/2021 17:46, Marios Andreou wrote: >> > >> > >> > On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud >> wrote: >> > >> > Hi TripleO Devs, >> > >> > David is already a key member of our team since a long time now, he >> > provided all the needed ansible roles for the Validation Framework >> into >> > tripleo-ansible-operator. He continuously provides excellent code >> reviews >> > and he >> > is a source of great ideas for the future of the Validation >> Framework. >> > That's >> > why we would highly benefit from his addition to the core reviewer >> team. >> > >> > Assuming that there are no objections, we will add David to the >> core team >> > next >> > week. >> > >> > >> > o/ Gael >> > >> > so it is clear and fair for everyone (e.g. I've been approached by >> others about >> > candidates for tripleo-core) >> > >> > I'd like to be clear on your proposal here because I don't think we >> have a >> > 'validation framework core' group in gerrit - do we? >> > >> > Is your proposal that David is added to the tripleo-core group [1] with >> the >> > understanding that voting rights will be exercised only in the >> following repos: >> > tripleo-validations, validations-common and validations-libs? >> >> Yes exactly! Sorry for the confusion. >> > > > ACK no problem ;) As I said we need to be transparent and fair towards > everyone. > > +1 from me to your proposal. > > Being obligated to do so at PTL ;) I did a quick review of activities. I > can see that David has been particularly active in Wallaby [1] but has made > tripleo contributions going back to 2017 [2] - I cannot see some reason to > object to the proposal! > > regards, marios > > [1] > https://www.stackalytics.io/?module=tripleo-group&project_type=openstack&user_id=davidjpeacock&metric=marks&release=wallaby > [2] https://review.opendev.org/q/owner:davidjpeacock > > > >> >> > thanks, marios >> > >> > [1] https://review.opendev.org/admin/groups/ >> > 0319cee8020840a3016f46359b076fa6b6ea831a >> > >> > >> > >> > >> > >> > Thanks, David, for your excellent work! >> > >> > -- >> > Gaël Chamoulaud - (He/Him/His) >> > .::. Red Hat .::. OpenStack .::. >> > .::. DFG:DF Squad:VF .::. >> > >> >> -- >> Gaël Chamoulaud - (He/Him/His) >> .::. Red Hat .::. OpenStack .::. >> .::. DFG:DF Squad:VF .::. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Thu Mar 11 14:39:45 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 11 Mar 2021 14:39:45 +0000 Subject: [cinder] review priorities for the next few days In-Reply-To: <16bfaaa8-5761-eb83-efda-cf5030b10a49@gmail.com> References: <16bfaaa8-5761-eb83-efda-cf5030b10a49@gmail.com> Message-ID: Brian, Can you also add https://review.opendev.org/c/openstack/cinder/+/773854 To drivers for Wallaby. It has been waiting for reviews for quite some time. 
Thanks, Arkayd -----Original Message----- From: Brian Rosmaita Sent: Wednesday, March 10, 2021 2:37 PM To: openstack-discuss at lists.openstack.org Subject: [cinder] review priorities for the next few days [EXTERNAL EMAIL] Here's the list of cinder and driver features that haven't yet merged: https://etherpad.opendev.org/p/cinder-wallaby-features Please make reviewing these your top priorities. We'll get back to reviewing bug fixes next week. cheers, brian From skaplons at redhat.com Thu Mar 11 14:40:59 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 11 Mar 2021 15:40:59 +0100 Subject: [neutron] Drivers meeting - agenda for 12.03.2021 Message-ID: <20210311144059.aelx36hhnndswo2c@p1.localdomain> Hi, For tomorrow's drivers meeting we have 2 RFEs related to the integration of Neutron with Designate: * https://bugs.launchpad.net/neutron/+bug/1904559 * https://bugs.launchpad.net/neutron/+bug/1918424 I didn't marked them as triaged yet as I'm not really sure what more data we could need in both cases. I hope that we will get some more questions during the meeting so we can at least together triage them :) Of course, please check those RFEs and ask for anything You need there, even before the meeting :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From yasemin.demiral at tubitak.gov.tr Thu Mar 11 14:46:48 2021 From: yasemin.demiral at tubitak.gov.tr (Yasemin =?utf-8?Q?DEM=C4=B0RAL_=28B=C4=B0LGEM_BTE=29?=) Date: Thu, 11 Mar 2021 17:46:48 +0300 (EET) Subject: Windows vm is not launching from horizon In-Reply-To: References: Message-ID: <1915077082.56120094.1615474008539.JavaMail.zimbra@tubitak.gov.tr> Hi Do you have virtio driver your windows images ? You should create windows images with [ https://github.com/cloudbase/windows-openstack-imaging-tools | https://github.com/cloudbase/windows-openstack-imaging-tools ] for openstack. You can try this: [ https://cloudbase.it/windows-cloud-images/ | https://cloudbase.it/windows-cloud-images/ ] Thanks Yasemin DEMİRAL Araştırmacı Bulut Bilişim ve Büyük Veri Araştırma Lab. B ilişim Teknolojileri Enstitüsü TÜBİTAK BİLGEM 41470 Gebze, KOCAELİ T +90 262 675 2417 F +90 262 646 3187 [ http://bilgem.tubitak.gov.tr/ | www.bilgem.tubitak.gov.tr ] [ mailto:yasemin.demiral at tubitak.gov.tr | yasemin.demiral at tubitak.gov.tr ] [ mailto:ozgur.gun at tubitak.gov.tr | ................................................................ ] [ http://www.tubitak.gov.tr/tr/icerik-sorumluluk-reddi | Sorumluluk Reddi ] Kimden: "Midhunlal Nb" Kime: "openstack-discuss" , "Satish Patel" Gönderilenler: 11 Mart Perşembe 2021 16:24:31 Konu: Windows vm is not launching from horizon Hi all, I successfully installed the openstack rocky version and everything is working properly. ---->Uploaded all images to openstack and I can see all images listed through openstack CLI. 
+--------------------------------------+---------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------+--------+ | 1912a80a-1139-4e00-b0d9-e98dc801e54b | CentOS-7-x86_64 active | | 595b928f-8a32-4107-b6e8-fea294d3f7f1 | CentOS-8-x86_64 active| | f531869a-e0ac-4690-8f79-4ff64d284576 | Ubuntu-18.04-x86_64 active| | 7860193b-81bd-4a22-846c-ae3a89117f9d | Ubuntu-20.04-x86_64 active | 1c3a9878-6db3-441e-9a85-73d3bc4a87a7 | cirros active | d321f3a3-08a3-47bf-8f53-6aa46a6a4b52 | windows10 active --->same images I can see through horizon dashboard,but i am not able to launch windows VM from horizon dashboard .Balance all images launching from dashboard. Getting below error; Error: Failed to perform requested operation on instance "windows", the instance has an error status: Please try again later [Error: Build of instance 15a3047d-dfca-4615-9d34-c46764b703ff aborted: Volume 5ca83635-a2fe-458c-9053-cb7a9804156b did not finish being created even after we waited 190 seconds or 61 attempts. And its status is downloading.]. ---> I am able to launch windows VM from openstack CLI What is the issue ?why I am not able to launch from dashboard?please help me. Thanks & Regards Midhunlal N B +918921245637 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bilgem.jpg Type: image/jpeg Size: 3031 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu Mar 11 14:47:09 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 11 Mar 2021 08:47:09 -0600 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations March 2021 End In-Reply-To: <164d5ea5.6b38.17821a7ffd3.Coremail.kira034@163.com> References: <164d5ea5.6b38.17821a7ffd3.Coremail.kira034@163.com> Message-ID: <17821c1743d.f17233d0360467.502278629083295333@ghanshyammann.com> ---- On Thu, 11 Mar 2021 08:19:21 -0600 Hongbin Lu wrote ---- > Hi, > FYI. Zun has a candidancy for Xena: http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020998.html . Shengqin will continue to serve as Zun PTL. Thanks, We noted that in etherpad[1] and as the next step TC will discuss on appointing Shengqin as Zun PTL. [1] https://etherpad.opendev.org/p/xena-leaderless -gmann > > Best regards,Hongbin > > > > > > At 2021-03-10 07:52:01, "Kendall Nelson" wrote: > Hello! > The PTL and TC Nomination period is now over. The official candidate lists > for PTLs [0] and TC seats [1] are available on the election website. > > -- PTL Election Details -- > There are 8 projects without candidates, so according to this > resolution[2], the TC will have to decide how the following > projects will proceed: Barbican, Cyborg, Keystone, Mistral, Monasca, Senlin, Zaqar, Zun > > -- TC Election Details -- > There are 0 projects that will have elections. > > Now begins the campaigning period where candidates and electorate may debate their statements. > > Polling will start Mar 12, 2021 23:45 UTC. 
> > Thank you, > -Kendall Nelson (diablo_rojo) & the Election Officials > > [0] https://governance.openstack.org/election/#xena-ptl-candidates > [1] https://governance.openstack.org/election/#xena-tc-candidates > [2] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html > > > > From hberaud at redhat.com Thu Mar 11 14:57:05 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 15:57:05 +0100 Subject: [release][OpenstackSDK][masakari] request FFE on OpenstackSDK for python-masakariclient In-Reply-To: <327D3543-7F8B-4EEB-9D72-FF2EC66C72E1@gmail.com> References: <327D3543-7F8B-4EEB-9D72-FF2EC66C72E1@gmail.com> Message-ID: Le jeu. 11 mars 2021 à 14:22, Artem Goncharov a écrit : > Hi all, > > I have no problems with releasing another SDK version now. Currently 2 > another mandatory changes landed in OSC, so it can be released if necessary > as well. > Thanks, then I'll proceed. > > As a food for the discussion - SDK is not only client lib, but also a lib > used in the core components (afaik Nova). So just changing label is not > necessarily helping (correct me if I misinterpret the meanings of > lib/client-lib) > Good point, I added this topic to our meeting agenda to discuss more deeply about that. > > Regards, > Artem > > On 11. Mar 2021, at 13:17, Herve Beraud wrote: > > Hello folks, > > As discussed this morning on #openstack-release the masakari team needs > OpenstackSDK changes that haven't been released. They have been merged > after the final release [1]. Those aren't landed in OpenstackSDK version > 0.54.0 [2]. > > Here is the delta of changes merged between 0.54.0 and 736f3aa1 ( > https://review.opendev.org/c/openstack/openstacksdk/+/777299): > > $ git log --no-merges --online > 0.54.0..736f3aa16c7ee95eb5a42f28062305c83cfd05e1 > 736f3aa1 add masakari enabled to segment > > Only these changes are in the delta. > > Do you mind if we start releasing a new version 0.55.0 to land these > changes and unlock the path of the masakari team? > > Client library deadline is today. > > Also this topic opens the door for another discussion. Indeed it's > questionable that openstacksdk should follow the early library deadline, > for that precise reason. We release python-*client the same time as parent > projects so that late features can be taken into account. It seems > appropriate that openstacksdk is in the same bucket > > All those safeguards are a lot less needed now that we move slower and > break less things. > > That could be translated by moving OpenstackSDK from `type: library` [3] > to `type: client-library` [4]. > > Let us know what you think about the FFE. > Let's open the discussions about the type shifting. 
> > [1] https://review.opendev.org/c/openstack/openstacksdk/+/777299 > [2] > https://opendev.org/openstack/releases/commit/b87ad4371321d5b0dde7ed0b236238585d5b74b3 > [3] > https://releases.openstack.org/reference/deliverable_types.html#library > [4] > https://releases.openstack.org/reference/deliverable_types.html#client-library > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Mar 11 15:00:24 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 16:00:24 +0100 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: <4fb951fa-0b94-dac2-97db-6f799c3986ce@est.tech> References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> <4fb951fa-0b94-dac2-97db-6f799c3986ce@est.tech> Message-ID: 2) can be done easily. My 2 cents on it for now Le jeu. 11 mars 2021 à 14:24, Előd Illés a écrit : > For the meeting, in advance, my opinion: > > What Jeremy wrote looks promising. I mean it might be that the root > cause is the wrong hostname/hostkey. > > I think we have two options: > > 1. fix the hostkey in the known_hosts file (though I don't know how we > could do this o:)) > 2. technically the tagging has not happened yet (as far as I see), so we > ask Marios (:)) to update the .gitreview and then we give another round > for the 'remove' and 'readd' rocky-eol tag in tripleo-ipsec.yaml. > > We can discuss this further @ the meeting today. > > Cheers, > > Előd > > > On 2021. 03. 11. 
12:42, Thierry Carrez wrote: > > Marios Andreou wrote: > >> > >> > >> On Tue, Mar 9, 2021 at 7:20 PM Jeremy Stanley >> > wrote: > >> > >> On 2021-03-09 12:39:57 +0100 (+0100), Thierry Carrez wrote: > >> > We got two similar release processing failures: > >> [...] > >> > Since this "host key verification failed" error hit tripleo-ipsec > >> > three times in a row, I suspect we have something stuck (or we > >> > always hit the same cache/worker). > >> > >> The errors happened when run on nodes in different providers. > >> Looking at the build logs, I also notice that they fail for the > >> stable/rocky branch but succeeded for other branches of the same > >> repository. They're specifically erroring when trying to reach the > >> Gerrit server over SSH, the connection details for which are encoded > >> on the .gitreview file in each branch. This leads me to wonder > >> whether there's something about the fact that the stable/rocky > >> branch of tripleo-ipsec is still using the old review.openstack.org > >> > >> hostname to push, and maybe we're not pre-seeding an appropriate > >> hostkey entry for that in the known_hosts file? > >> > >> > >> Hi Thierry, Jeremy > >> > >> is there something tripleo can do to help here? > >> > >> Should I update that > >> > https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 > >> < > https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2> > > >> to use opendev.org ? Though I know Elod would > >> have words with me if I did ;) as we declared it EOL @ > >> https://review.opendev.org/c/openstack/releases/+/779218 > >> > > > > I'm not sure... We'll discuss it during the release meeting today. > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Mar 11 15:11:17 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 11 Mar 2021 16:11:17 +0100 Subject: [release][OpenstackSDK][masakari] request FFE on OpenstackSDK for python-masakariclient In-Reply-To: References: <327D3543-7F8B-4EEB-9D72-FF2EC66C72E1@gmail.com> Message-ID: Please have a look at the patch and let us know if that works for you. https://review.opendev.org/c/openstack/releases/+/780013 Thanks & Regards Le jeu. 11 mars 2021 à 15:57, Herve Beraud a écrit : > > > Le jeu. 11 mars 2021 à 14:22, Artem Goncharov > a écrit : > >> Hi all, >> >> I have no problems with releasing another SDK version now. Currently 2 >> another mandatory changes landed in OSC, so it can be released if necessary >> as well. 
>> > > Thanks, then I'll proceed. > > >> >> As a food for the discussion - SDK is not only client lib, but also a lib >> used in the core components (afaik Nova). So just changing label is not >> necessarily helping (correct me if I misinterpret the meanings of >> lib/client-lib) >> > > Good point, I added this topic to our meeting agenda to discuss more > deeply about that. > > >> >> Regards, >> Artem >> >> On 11. Mar 2021, at 13:17, Herve Beraud wrote: >> >> Hello folks, >> >> As discussed this morning on #openstack-release the masakari team needs >> OpenstackSDK changes that haven't been released. They have been merged >> after the final release [1]. Those aren't landed in OpenstackSDK version >> 0.54.0 [2]. >> >> Here is the delta of changes merged between 0.54.0 and 736f3aa1 ( >> https://review.opendev.org/c/openstack/openstacksdk/+/777299): >> >> $ git log --no-merges --online >> 0.54.0..736f3aa16c7ee95eb5a42f28062305c83cfd05e1 >> 736f3aa1 add masakari enabled to segment >> >> Only these changes are in the delta. >> >> Do you mind if we start releasing a new version 0.55.0 to land these >> changes and unlock the path of the masakari team? >> >> Client library deadline is today. >> >> Also this topic opens the door for another discussion. Indeed it's >> questionable that openstacksdk should follow the early library deadline, >> for that precise reason. We release python-*client the same time as parent >> projects so that late features can be taken into account. It seems >> appropriate that openstacksdk is in the same bucket >> >> All those safeguards are a lot less needed now that we move slower and >> break less things. >> >> That could be translated by moving OpenstackSDK from `type: library` [3] >> to `type: client-library` [4]. >> >> Let us know what you think about the FFE. >> Let's open the discussions about the type shifting. 
>> >> [1] https://review.opendev.org/c/openstack/openstacksdk/+/777299 >> [2] >> https://opendev.org/openstack/releases/commit/b87ad4371321d5b0dde7ed0b236238585d5b74b3 >> [3] >> https://releases.openstack.org/reference/deliverable_types.html#library >> [4] >> https://releases.openstack.org/reference/deliverable_types.html#client-library >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpodivin at redhat.com Thu Mar 11 15:12:08 2021 From: jpodivin at redhat.com (Jiri Podivin) Date: Thu, 11 Mar 2021 16:12:08 +0100 Subject: [tripleo] Nominate David J. 
Peacock (dpeacock) for Validation Framework Core In-Reply-To: References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> Message-ID: Hi, I'm not core, just a member of the VF squad. I want to say that I would certainly like David to be VF Core. On Thu, Mar 11, 2021 at 3:44 PM Mathieu Bultel wrote: > Hey, > > Thank you Gael, > > +2 obviously :) > > Mathieu > > On Tue, Mar 9, 2021 at 5:06 PM Marios Andreou wrote: > >> >> >> On Tue, Mar 9, 2021 at 5:52 PM Gaël Chamoulaud >> wrote: >> >>> On 09/Mar/2021 17:46, Marios Andreou wrote: >>> > >>> > >>> > On Tue, Mar 9, 2021 at 4:54 PM Gaël Chamoulaud >>> wrote: >>> > >>> > Hi TripleO Devs, >>> > >>> > David is already a key member of our team since a long time now, he >>> > provided all the needed ansible roles for the Validation Framework >>> into >>> > tripleo-ansible-operator. He continuously provides excellent code >>> reviews >>> > and he >>> > is a source of great ideas for the future of the Validation >>> Framework. >>> > That's >>> > why we would highly benefit from his addition to the core reviewer >>> team. >>> > >>> > Assuming that there are no objections, we will add David to the >>> core team >>> > next >>> > week. >>> > >>> > >>> > o/ Gael >>> > >>> > so it is clear and fair for everyone (e.g. I've been approached by >>> others about >>> > candidates for tripleo-core) >>> > >>> > I'd like to be clear on your proposal here because I don't think we >>> have a >>> > 'validation framework core' group in gerrit - do we? >>> > >>> > Is your proposal that David is added to the tripleo-core group [1] >>> with the >>> > understanding that voting rights will be exercised only in the >>> following repos: >>> > tripleo-validations, validations-common and validations-libs? >>> >>> Yes exactly! Sorry for the confusion. >>> >> >> >> ACK no problem ;) As I said we need to be transparent and fair towards >> everyone. >> >> +1 from me to your proposal. >> >> Being obligated to do so at PTL ;) I did a quick review of activities. I >> can see that David has been particularly active in Wallaby [1] but has made >> tripleo contributions going back to 2017 [2] - I cannot see some reason to >> object to the proposal! >> >> regards, marios >> >> [1] >> https://www.stackalytics.io/?module=tripleo-group&project_type=openstack&user_id=davidjpeacock&metric=marks&release=wallaby >> [2] https://review.opendev.org/q/owner:davidjpeacock >> >> >> >>> >>> > thanks, marios >>> > >>> > [1] https://review.opendev.org/admin/groups/ >>> > 0319cee8020840a3016f46359b076fa6b6ea831a >>> > >>> > >>> > >>> > >>> > >>> > Thanks, David, for your excellent work! >>> > >>> > -- >>> > Gaël Chamoulaud - (He/Him/His) >>> > .::. Red Hat .::. OpenStack .::. >>> > .::. DFG:DF Squad:VF .::. >>> > >>> >>> -- >>> Gaël Chamoulaud - (He/Him/His) >>> .::. Red Hat .::. OpenStack .::. >>> .::. DFG:DF Squad:VF .::. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Mar 11 16:04:59 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 11 Mar 2021 21:34:59 +0530 Subject: [ops][glance][security] looking for metadefs users Message-ID: Hello operators and other people interested in metadefs, The Glance team will be giving the metadefs some love in the Xena development cycle in order to address OSSN-0088 [0]. 
The people who designed and implemented metadefs are long gone, and in determining how to fix OSSN-0088, we would like to understand how people are actually using them in the wild so we don't restrict them so much as to make them useless. We are looking for an operator who uses metadefs to give us a walkthrough on how you are using them at the Xena (virtual) PTG. We are planning to have this session on 23rd April around 1400 UTC. You can find more details about the same in PTG planning etherpad [1]. We are also willing to meet outside the PTG schedule in case the current scheduled time might be blocking the people. I will also reply to this as a reminder mail once our PTG schedule is final. If you do not use the metadef API for some reason related to its inability to solve a problem, lack of flexibility, or other reasons (but wish you could), we would also like to hear about that. We need to know if the feature is worth fixing and maintaining going forward. And when we say "an operator", we don't mean just one ... ideally, we'd like to have a few real-life use cases to consider. If this is affecting you (as an operator) then you can reach us either by mail or #openstack-glance IRC channel or glance weekly meeting [2] which will be held every Thursday around 1400 UTC. [0] https://wiki.openstack.org/wiki/OSSN/OSSN-0088 [1] https://etherpad.opendev.org/p/xena-ptg-glance-planning [2] https://etherpad.opendev.org/p/glance-team-meeting-agenda Thank you and Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Mar 11 16:23:34 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 16:23:34 +0000 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> Message-ID: <20210311162334.iyy7lzjdkumveuia@yuggoth.org> On 2021-03-11 12:12:52 +0200 (+0200), Marios Andreou wrote: [...] > is there something tripleo can do to help here? > > Should I update that > https://opendev.org/openstack/tripleo-ipsec/src/commit/ec8aec1d42b9085ed9152ff767eb6744095d9e16/.gitreview#L2 > to use opendev.org? Though I know Elod would have words with me if I did ;) > as we declared it EOL @ > https://review.opendev.org/c/openstack/releases/+/779218 Yeah, it's interesting that the job wants to push anything for that branch to start with. But when we finish the work to delete those EOL branches I suppose the problem will disappear anyway. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Thu Mar 11 16:26:55 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 16:26:55 +0000 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: <20210311162334.iyy7lzjdkumveuia@yuggoth.org> References: <47ea6656-bbbc-2827-6a9b-86237a552a70@openstack.org> <20210309171916.rjn5va55lp5ccgmw@yuggoth.org> <20210311162334.iyy7lzjdkumveuia@yuggoth.org> Message-ID: <20210311162655.wyao6mywy3ejllew@yuggoth.org> On 2021-03-11 16:23:34 +0000 (+0000), Jeremy Stanley wrote: [...] > Yeah, it's interesting that the job wants to push anything for that > branch to start with. 
But when we finish the work to delete those > EOL branches I suppose the problem will disappear anyway. Oh, right, the branch is not *yet* EOL, and it's breaking when trying to push the rocky-eol tag I guess? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Thu Mar 11 16:33:18 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 11 Mar 2021 18:33:18 +0200 Subject: [TripleO] next irc meeting Tuesday Mar 16 @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 16 March at 1400 UTC in #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to highlight at https://etherpad.opendev.org/p/tripleo-meeting-items This can be recently completed things, ongoing review requests, blocking issues, or anything else tripleo you want to share. Our last meeting was on Mar 02nd - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-03-02-14.00.html Hope you can make it on Tuesday, thanks, marios From smooney at redhat.com Thu Mar 11 17:24:26 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 11 Mar 2021 17:24:26 +0000 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: References: Message-ID: On Thu, 2021-03-11 at 21:34 +0530, Abhishek Kekane wrote: > Hello operators and other people interested in metadefs, > > The Glance team will be giving the metadefs some love in the Xena > development cycle in order to address OSSN-0088 [0]. > > The people who designed and implemented metadefs are long gone, and in > determining how to fix OSSN-0088, we would like to understand how people > are actually using them in the wild so we don't restrict them so much as to > make them useless. the metadef api was orginally created as a centralised catalog for defineing all teh tuneable that can be defiend via metadata,extra specs or as attibutes on vairous resouces across multipel porjects. https://docs.openstack.org/glance/latest/user/metadefs-concepts.html#background has a table covering most of them it was intended to provide a programitc way for clients to discover what option are valid and is use by horizon and heat to generate uis and validate input. https://pasteboard.co/JS99sgU.png the list of available extra specs for the flavor metadta api is generated dirctly form the metadefs api including the desciption we se for "hw:mem_page_size". wehere a validator is specifd such as for hw:cpu_policy a drop down list in the case of enums or other validation can be applied by horizon to the parmaters. https://github.com/openstack/glance/blob/45749c30c1c02375a85eb17be0ccd983c695953f/etc/metadefs/compute-cpu-pinning.json#L23-L31 "cpu_policy": { "title": "CPU Pinning policy", "description": "Type of CPU pinning policy.", "type": "string", "enum": [ "shared", "dedicated" ] }, this same information used to be used in heat whan you a heat template to generate and validat the allowed values for some parmater although i dont knwo if that is still used. heat certely uses info form nova to get the list of flavor exctra but i belive it also used this informate when generation the ui for templates that defiled new flavors. 
while this was never integratein into the unified openstack client to enabel validation of flavors,images ectra that was part of the eventual design goal. at present this is the only openstack api i am currently aware of that allows you to programaticaly diccover this informateion. > > We are looking for an operator who uses metadefs to give us a walkthrough > on how you are using them at the Xena (virtual) PTG. We are planning to > have this session on 23rd April around 1400 UTC. You can find more details > about the same in PTG planning etherpad [1]. We are also willing to meet > outside the PTG schedule in case the current scheduled time might be > blocking the people. I will also reply to this as a reminder mail once our > PTG schedule is final. > > If you do not use the metadef API for some reason related to its inability > to solve a problem, lack of flexibility, or other reasons (but wish you > could), we would also like to hear about that. We need to know if the > feature is worth fixing and maintaining going forward. i still think this is a valueable feature that i which was used more often it may seam odd now that galce was choosen as the central registry for storing this information but if this api was removed i think it would be important for all project that have this type of metadta to have an alternitve metond to advertise this info. > > And when we say "an operator", we don't mean just one ... ideally, we'd > like to have a few real-life use cases to consider. looking at the OSSN https://wiki.openstack.org/wiki/OSSN/OSSN-0088 i am rather suprised that writing to this api was not admin only. i had alway tought that it was in the past and that readign form it was the only thing that was globally accessable as a normal user. i would suggest that one possible fix would be to alter the policy so that writing to this api is admin only. at the ptg we coudl discuss shoudl that be extended to user too but i dont personally see a good usecase for normally users to be able to create new metadefs. disableing it would break the current functionaliyt in horizon so i do not think that would be a good ensuer experince. > > If this is affecting you (as an operator) then you can reach us either by > mail or #openstack-glance IRC channel or glance weekly meeting [2] which > will be held every Thursday around 1400 UTC. > > [0] https://wiki.openstack.org/wiki/OSSN/OSSN-0088 > [1] https://etherpad.opendev.org/p/xena-ptg-glance-planning > [2] https://etherpad.opendev.org/p/glance-team-meeting-agenda > > > Thank you and Best Regards, > > Abhishek Kekane From yasufum.o at gmail.com Thu Mar 11 17:25:30 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Fri, 12 Mar 2021 02:25:30 +0900 Subject: [tacker] Xena PTG Message-ID: <025d0b80-5571-882a-a93f-331e4a25813f@gmail.com> Hi, Next vPTG is going to be held 19-23 April. I've prepared etherpad for the PTG[1] and reserved same timeslots as previous, 6-8 am UTC from 20th Apr[2]. Please register from [3] and fill "Attendees" and "Topics" in the etherpad for your proposals. 
[1] https://etherpad.opendev.org/p/tacker-xena-ptg [2] https://ethercalc.net/oz7q0gds9zfi [3] https://april2021-ptg.eventbrite.com/ Thanks, Yasufumi From dms at danplanet.com Thu Mar 11 17:46:47 2021 From: dms at danplanet.com (Dan Smith) Date: Thu, 11 Mar 2021 09:46:47 -0800 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: (Sean Mooney's message of "Thu, 11 Mar 2021 17:24:26 +0000") References: Message-ID: > it was intended to provide a programitc way for clients to discover > what option are valid and is use by horizon and heat to generate uis > and validate input. https://pasteboard.co/JS99sgU.png Ah, okay, good to know that Horizon uses this, thanks. That closes the loop quite a bit, and I assume explains why some people submit updates to the static definitions now and then. > this same information used to be used in heat whan you a heat template > to generate and validat the allowed values for some parmater although > i dont knwo if that is still used. heat certely uses info form nova > to get the list of flavor exctra but i belive it also used this > informate when generation the ui for templates that defiled new > flavors. Okay, it would be good to know if Heat uses this as well. > looking at the OSSN https://wiki.openstack.org/wiki/OSSN/OSSN-0088 i > am rather suprised that writing to this api was not admin only. i had > alway tought that it was in the past and that readign form it was the > only thing that was globally accessable as a normal user. Yeah, the API seems to clearly allow for public and private namespaces (at least) and loosely ties the structures in the database to a namespace for ownership. I think that means it was expected to be usable by regular users and providing some amount of isolation between them. It does not, however, seem to be very good at keeping private things private :) A lot of things in glance from the same time period did admin-only policy enforcement in code instead of policy, which is probably another indication that it was _expected_ to be usable by regular users. > i would suggest that one possible fix would be to alter the policy so > that writing to this api is admin only. at the ptg we coudl discuss > shoudl that be extended to user too but i dont personally see a good > usecase for normally users to be able to create new metadefs. > > disableing it would break the current functionaliyt in horizon so i do > not think that would be a good ensuer experince. Yeah, if Horizon uses this to make things pick-and-choose'able to the user, it would be nice to not just fully disable it. Right now, regular users can create these resources, and may not be aware that the names they choose are leaked to other users. Further, the creation of those things are not constrained by any limit, which is also a problem. If the main usage pattern of this is for admins to define these things (or just take all the defaults) and users just need to be able to see the largely-public lists of things they can choose from, then limiting creation to admins by default instead of fully disabling everything seems like a good course of action. I guess the remaining thing I'd like to know is: does anyone want or expect unprivileged users to be able to create these resources? Admins should still be aware that the naming is potentially leaky, and especially if they create these resources for special customers. 
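(For concreteness, a rough sketch of what "limiting creation to admins" could look like as operator overrides in glance's policy file -- the names below are taken from glance's metadef policies, but the exact list varies by release, so treat this as an illustration and check oslopolicy-policy-generator --namespace glance against your own deployment rather than copying it verbatim:

  "add_metadef_namespace": "role:admin"
  "modify_metadef_namespace": "role:admin"
  "delete_metadef_namespace": "role:admin"
  "add_metadef_object": "role:admin"
  "modify_metadef_object": "role:admin"
  "add_metadef_property": "role:admin"
  "modify_metadef_property": "role:admin"
  "add_metadef_tag": "role:admin"
  "modify_metadef_tag": "role:admin"
  "add_metadef_resource_type_association": "role:admin"

while leaving the get_*/list_* metadef policies at their defaults so Horizon can keep reading the public definitions.)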
They also may want to audit their systems for any resources that users have created up to this point, which may expose something if they keep read access enabled for everyone. It might be good if we can amend the recommendation to explain the impact of disabling everything on Horizon, along with the recommendation to restrict creation to admin-only and audit. Not sure what the procedure is for that. Thanks Sean! --Dan From fungi at yuggoth.org Thu Mar 11 18:12:12 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 18:12:12 +0000 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: References: Message-ID: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> On 2021-03-11 09:46:47 -0800 (-0800), Dan Smith wrote: [...] > It might be good if we can amend the recommendation to explain the > impact of disabling everything on Horizon, along with the recommendation > to restrict creation to admin-only and audit. Not sure what the > procedure is for that. The recommendation is in a wiki article[1] (in the form of an OSSN document), so can be freely edited. But if someone makes significant updates to the recommendation then we should probably also send an errata announcement to the openstack-announce and openstack-discuss mailing lists detailing what's changed since initial publication. The OSSN process[2] doesn't mandate any particular errata steps, but we can use our own judgement to determine what may be additionally worth announcing/updating for it. [1] https://wiki.openstack.org/wiki/OSSN/OSSN-0088 [2] https://wiki.openstack.org/wiki/Security/Security_Note_Process -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tobias.urdin at binero.com Thu Mar 11 16:18:40 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Thu, 11 Mar 2021 16:18:40 +0000 Subject: [stein][neutron] gratuitous arp In-Reply-To: References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> , Message-ID: <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> Hello, Not sure if you are having the same issue as us, but we are following https://bugs.launchpad.net/neutron/+bug/1901707 but are patching it with something similar to https://review.opendev.org/c/openstack/nova/+/741529 to workaround the issue until it's completely solved. Best regards ________________________________ From: Ignazio Cassano Sent: Wednesday, March 10, 2021 7:57:21 AM To: Sean Mooney Cc: openstack-discuss; Slawek Kaplonski Subject: Re: [stein][neutron] gratuitous arp Hello All, please, are there news about bug 1815989 ? On stein I modified code as suggested in the patches. I am worried when I will upgrade to train: wil this bug persist ? On which openstack version this bug is resolved ? Ignazio Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano > ha scritto: Hello, I tried to update to last stein packages on yum and seems this bug still exists. Before the yum update I patched some files as suggested and and ping to vm worked fine. After yum update the issue returns. Please, let me know If I must patch files by hand or some new parameters in configuration can solve and/or the issue is solved in newer openstack versions. Thanks Ignazio Il Mer 29 Apr 2020, 19:49 Sean Mooney > ha scritto: On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > Many thanks. > Please keep in touch. here are the two patches. 
the first https://review.opendev.org/#/c/724386/ is the actual change to add the new config opition this needs a release note and some tests but it shoudl be functional hence the [WIP] i have not enable the workaround in any job in this patch so the ci run will assert this does not break anything in the default case the second patch is https://review.opendev.org/#/c/724387/ which enables the workaround in the multi node ci jobs and is testing that live migration exctra works when the workaround is enabled. this should work as it is what we expect to happen if you are using a moderne nova with an old neutron. its is marked [DNM] as i dont intend that patch to merge but if the workaround is useful we migth consider enableing it for one of the jobs to get ci coverage but not all of the jobs. i have not had time to deploy a 2 node env today but ill try and test this locally tomorow. > Ignazio > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney > > ha scritto: > > > so bing pragmatic i think the simplest path forward given my other patches > > have not laned > > in almost 2 years is to quickly add a workaround config option to disable > > mulitple port bindign > > which we can backport and then we can try and work on the actual fix after. > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that shoudl > > serve as a workaround > > for thos that hav this issue but its a regression in functionality. > > > > i can create a patch that will do that in an hour or so and submit a > > followup DNM patch to enabel the > > workaound in one of the gate jobs that tests live migration. > > i have a meeting in 10 mins and need to finish the pacht im currently > > updating but ill submit a poc once that is done. > > > > im not sure if i will be able to spend time on the actul fix which i > > proposed last year but ill see what i can do. > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > PS > > > I have testing environment on queens,rocky and stein and I can make test > > > as you need. 
> > > Ignazio > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > ignaziocassano at gmail.com> ha scritto: > > > > > > > Hello Sean, > > > > the following is the configuration on my compute nodes: > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > As far as firewall driver > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > testing > > > > environment and the > > > > same firewall driver. > > > > Live migration on provider network on queens works fine. > > > > It does not work fine on rocky and stein (vm lost connection after it > > > > is > > > > migrated and start to respond only when the vm send a network packet , > > > > for > > > > example when chrony pools the time server). > > > > > > > > Ignazio > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > smooney at redhat.com> > > > > ha scritto: > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > Hello, some updated about this issue. > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > If you read the discussion, someone tells that the garp must be > > > > sent by > > > > > > qemu during live miration. > > > > > > If this is true, this means on rocky/stein the qemu/libvirt are > > > > bugged. > > > > > > > > > > it is not correct. 
> > > > > qemu/libvir thas alsway used RARP which predates GARP to serve as > > > > its mac > > > > > learning frames > > > > > instead > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > but was fixed by > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > can you confirm you are not using the broken 2.6.0 release and are > > > > using > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > libvirt/qemu > > > > > > packages I installed on queens (I updated compute and controllers > > > > node > > > > > > > > > > on > > > > > > queens for obtaining same libvirt/qemu version deployed on rocky > > > > and > > > > > > > > > > stein). > > > > > > > > > > > > On queens live migration on provider network continues to work > > > > fine. > > > > > > On rocky and stein not, so I think the issue is related to > > > > openstack > > > > > > components . > > > > > > > > > > on queens we have only a singel prot binding and nova blindly assumes > > > > > that the port binding details wont > > > > > change when it does a live migration and does not update the xml for > > > > the > > > > > netwrok interfaces. > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > post_livemigration > > > > > in rocky+ neutron optionally uses the multiple port bindings flow to > > > > > prebind the port to the destiatnion > > > > > so it can update the xml if needed and if post copy live migration is > > > > > enable it will asyconsly activate teh dest port > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > if you are using the iptables firewall os-vif will have precreated > > > > the > > > > > ovs port and intermediate linux bridge before the > > > > > migration started which will allow neutron to wire it up (put it on > > > > the > > > > > correct vlan and install security groups) before > > > > > the vm completes the migraton. > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh ovs > > > > port > > > > > but libvirt deletes it and recreats it too. > > > > > as a result there is a race when using openvswitch firewall that can > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > Best Regards > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > smooney at redhat.com> > > > > > > ha scritto: > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > Hello, I have this problem with rocky or newer with > > > > iptables_hybrid > > > > > > > > firewall. > > > > > > > > So, can I solve using post copy live migration ??? 
> > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky the > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > spec intoduced teh ablity to shorten the outage by pre biding the > > > > > > > > > > port and > > > > > > > activating it when > > > > > > > the vm is resumed on the destiation host before we get to pos > > > > live > > > > > > > > > > migrate. > > > > > > > > > > > > > > this reduces the outage time although i cant be fully elimiated > > > > as > > > > > > > > > > some > > > > > > > level of packet loss is > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be aware > > > > that > > > > > > > > > > if a > > > > > > > network partion happens > > > > > > > during a post copy live migration the vm will crash and need to > > > > be > > > > > > > restarted. > > > > > > > it is generally safe to use and will imporve the migration > > > > performace > > > > > > > > > > but > > > > > > > unlike pre copy migration if > > > > > > > the guess resumes on the dest and the mempry page has not been > > > > copied > > > > > > > > > > yet > > > > > > > then it must wait for it to be copied > > > > > > > and retrive it form the souce host. if the connection too the > > > > souce > > > > > > > > > > host > > > > > > > is intrupted then the vm cant > > > > > > > do that and the migration will fail and the instance will crash. > > > > if > > > > > > > > > > you > > > > > > > are using precopy migration > > > > > > > if there is a network partaion during the migration the > > > > migration will > > > > > > > fail but the instance will continue > > > > > > > to run on the source host. > > > > > > > > > > > > > > so while i would still recommend using it, i it just good to be > > > > aware > > > > > > > > > > of > > > > > > > that behavior change. > > > > > > > > > > > > > > > Thanks > > > > > > > > Ignazio > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney > ha > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano wrote: > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm migrate > > > > > > > > > > from one > > > > > > > > > > > > > > node > > > > > > > > > > to another I cannot ping it for several minutes. If in the > > > > vm I > > > > > > > > > > put a > > > > > > > > > > script that ping the gateway continously, the live > > > > migration > > > > > > > > > > works > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > and > > > > > > > > > > I can ping it. Why this happens ? I read something about > > > > > > > > > > gratuitous > > > > > > > > > > > > > > arp. > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an older > > > > > > > > > > protocal > > > > > > > > > > > > > > called > > > > > > > > > RARP > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you using > > > > > > > > > > iptables > > > > > > > > > firewall of openvswitch firewall. > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we can do > > > > until > > > > > > > > > > we > > > > > > > > > finally delegate vif pluging to os-vif. 
> > > > > > > > > currently libvirt handels interface plugging for kernel ovs > > > > when > > > > > > > > > > using > > > > > > > > > > > > > > the > > > > > > > > > openvswitch firewall driver > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress that > > > > but it > > > > > > > > > > and > > > > > > > > > > > > > > the > > > > > > > > > neutron patch are > > > > > > > > > https://review.opendev.org/#/c/640258 rather out dated. > > > > while > > > > > > > > > > libvirt > > > > > > > > > > > > > > is > > > > > > > > > pluging the vif there will always be > > > > > > > > > a race condition where the RARP packets sent by qemu and > > > > then mac > > > > > > > > > > > > > > learning > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have opnestack > > > > > > > > > > rock or > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > it should reduce the downtime. in this conficution we do not > > > > have > > > > > > > > > > the > > > > > > > > > > > > > > race > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Thu Mar 11 18:50:18 2021 From: dms at danplanet.com (Dan Smith) Date: Thu, 11 Mar 2021 10:50:18 -0800 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> (Jeremy Stanley's message of "Thu, 11 Mar 2021 18:12:12 +0000") References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> Message-ID: > The recommendation is in a wiki article[1] (in the form of an OSSN > document), so can be freely edited. But if someone makes significant > updates to the recommendation then we should probably also send an > errata announcement to the openstack-announce and openstack-discuss > mailing lists detailing what's changed since initial publication. > The OSSN process[2] doesn't mandate any particular errata steps, but > we can use our own judgement to determine what may be additionally > worth announcing/updating for it. Cool, thanks. I think the recommendation is "review and reconsider the default policy for this feature" and "here is what we think is a good default if you don't otherwise know". Changing our recommended default to be a more generally-applicable doesn't seem to alter the general message to me, so I'd tend think just editing the wiki page is fine. 
--Dan From anlin.kong at gmail.com Thu Mar 11 19:21:23 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 12 Mar 2021 08:21:23 +1300 Subject: [trove] trove-agent can't connect postgresql container In-Reply-To: <565885711.55511185.1615450497419.JavaMail.zimbra@tubitak.gov.tr> References: <565885711.55511185.1615450497419.JavaMail.zimbra@tubitak.gov.tr> Message-ID: We've had a chat on irc, for anyone interested in the discussion, please see http://eavesdrop.openstack.org/irclogs/%23openstack-trove/%23openstack-trove.2021-03-11.log.html --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Thu, Mar 11, 2021 at 9:15 PM Yasemin DEMİRAL (BİLGEM BTE) < yasemin.demiral at tubitak.gov.tr> wrote: > Hi, > > I work on postgresql 12.4 datastore at OpenStack Victoria. Postgresql > can't create user and database automatically with trove, but when i created > database and user manually, i can connect to psql. I think postgresql > container can't communicate the trove agent. How can I fix this? > > > Thank you > > *Yasemin DEMİRAL* > > Araştırmacı > > * Bulut Bilişim ve Büyük Veri Araştırma Lab.* > > *B**ilişim Teknolojileri Enstitüsü* > > TÜBİTAK BİLGEM > > 41470 Gebze, KOCAELİ > > *T* +90 262 675 2417 > > *F* +90 262 646 3187 > > www.bilgem.tubitak.gov.tr > > yasemin.demiral at tubitak.gov.tr > > ................................................................ > > > Sorumluluk Reddi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bilgem.jpg Type: image/jpeg Size: 3031 bytes Desc: not available URL: From fungi at yuggoth.org Thu Mar 11 19:27:31 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 19:27:31 +0000 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> Message-ID: <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> On 2021-03-11 10:50:18 -0800 (-0800), Dan Smith wrote: [...] > I think the recommendation is "review and reconsider the default > policy for this feature" and "here is what we think is a good > default if you don't otherwise know". Changing our recommended > default to be a more generally-applicable doesn't seem to alter > the general message to me, so I'd tend think just editing the wiki > page is fine. Seems reasonable to me. What do other folks think? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From moreira.belmiro.email.lists at gmail.com Thu Mar 11 19:59:02 2021 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Thu, 11 Mar 2021 20:59:02 +0100 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> Message-ID: Hi, we use metadef to validate our custom metadata in Horizon when users create instances. In our use case, metadef should be written only by admin. +1 to review the default policy. Belmiro CERN On Thu, Mar 11, 2021 at 8:34 PM Jeremy Stanley wrote: > On 2021-03-11 10:50:18 -0800 (-0800), Dan Smith wrote: > [...] 
> > I think the recommendation is "review and reconsider the default > > policy for this feature" and "here is what we think is a good > > default if you don't otherwise know". Changing our recommended > > default to be a more generally-applicable doesn't seem to alter > > the general message to me, so I'd tend think just editing the wiki > > page is fine. > > Seems reasonable to me. What do other folks think? > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Mar 11 20:22:21 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 11 Mar 2021 14:22:21 -0600 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> Message-ID: <17822f45716.e9a71caf378190.8285456419800974556@ghanshyammann.com> ---- On Thu, 11 Mar 2021 13:59:02 -0600 Belmiro Moreira wrote ---- > Hi,we use metadef to validate our custom metadata in Horizon when users create instances. > In our use case, metadef should be written only by admin.+1 to review the default policy. In a quick search, interop certification guidelines 1] also does not use these API capabilities so changing to admin should be fine from interop and so does from Tempest test modification point of view. [1] https://opendev.org/osf/interop -gmann > BelmiroCERN > On Thu, Mar 11, 2021 at 8:34 PM Jeremy Stanley wrote: > On 2021-03-11 10:50:18 -0800 (-0800), Dan Smith wrote: > [...] > > I think the recommendation is "review and reconsider the default > > policy for this feature" and "here is what we think is a good > > default if you don't otherwise know". Changing our recommended > > default to be a more generally-applicable doesn't seem to alter > > the general message to me, so I'd tend think just editing the wiki > > page is fine. > > Seems reasonable to me. What do other folks think? > -- > Jeremy Stanley > From fungi at yuggoth.org Thu Mar 11 20:54:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Mar 2021 20:54:17 +0000 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: <17822f45716.e9a71caf378190.8285456419800974556@ghanshyammann.com> References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> <17822f45716.e9a71caf378190.8285456419800974556@ghanshyammann.com> Message-ID: <20210311205417.jjcdudejvo4ejdsr@yuggoth.org> On 2021-03-11 14:22:21 -0600 (-0600), Ghanshyam Mann wrote: [...] > In a quick search, interop certification guidelines 1] also does > not use these API capabilities so changing to admin should be fine > from interop and so does from Tempest test modification point of > view. [...] Yep, if you check out the original bug reports leading up to the OSSN, we did at least confirm these were not part of any trademark program requirement before recommending that access be blocked. That was one of our deciding factors in the disclosure timeline. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From victoria at vmartinezdelacruz.com Thu Mar 11 21:00:22 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Thu, 11 Mar 2021 22:00:22 +0100 Subject: [manila] [FFE] Request for "Update cephfs drivers to use ceph-mgr" and "create share from snapshot support for CephFS" Message-ID: Hi, I would like to ask for an FFE for the RFEs "Update cephfs drivers to use ceph-mgr" [0] and "Create share from snapshot support for CephFS" [1] Both RFEs are related. The first one updates the cephfs drivers to use the ceph-mgr interface for all manila operations. This change is required since the library currently used is already deprecated and it is expected to be removed in the next Ceph release. The second one leverages the previous one and adds a new functionality available through the ceph-mgr interface which is the create share from snapshot support. We have been working on both features for two cycles already and we could use a few more days to finish the testing. Looking forward to a positive response. Thanks, Victoria [0] https://blueprints.launchpad.net/manila/+spec/update-cephfs-drivers [1] https://blueprints.launchpad.net/manila/+spec/create-share-from-snapshot-cephfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Mar 11 21:29:55 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 11 Mar 2021 22:29:55 +0100 Subject: [masakari] The legacy (client) has been dropped... finally! Message-ID: Hello Fellow OpenStackers, The Masakari team is proud to announce that the legacy Masakari client is no more! It has been deprecated since Stein and was less featureful than the OpenStack client plugin that was fully supported for a long time. The OSC plugin is also entirely based on OpenStack SDK. All Masakari client parts are available there. (This applies to Wallaby [version 7.0.0] and later.) -yoctozepto From luke.camilleri at zylacomputing.com Thu Mar 11 21:39:05 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Thu, 11 Mar 2021 22:39:05 +0100 Subject: Keystone enforced policy issues - Victoria Message-ID: Hi there, we have been troubleshooting keystone policies for a couple of days now (Victoria) and would like to reach out to anyone with some experience on the keystone policies. Basically we would like to follow the standard admin, member and reader roles together with the scopes function and have therefore enabled the below two option in keystone.conf enforce_scope = true enforce_new_defaults = true once enabled we started seeing the below error related to token validation in keystone.log: 2021-03-11 19:50:12.009 1047463 WARNING keystone.server.flask.application [req-33cda154-1d54-447e-8563-0676dc5d8471 020bd854741e4ba69c87d4142cad97a5 c584121f9d1a4334a3853c61bb5b9d93 - default default] You are not authorized to perform the requested action: identity:validate_token.: keystone.exception.ForbiddenAction: You are not authorized to perform the requested action: identity:validate_token. The policy was previously setup as: identity:validate_token: rule:service_admin_or_token_subject but has now been implemented with the new scope format to: identity:validate_token: (role:reader and system_scope:all) or rule:service_role or rule:token_subject If we change the policy to the old one, we stop receiving the identity:validate_token exception. 
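As a side note, the practical difference between what the old and the new rule accept comes down to the scope of the token being validated. A quick, purely illustrative way to compare the two scopes from the command line (assuming admin credentials are already loaded in the environment) is:

    # project-scoped token
    openstack token issue

    # system-scoped token, which "(role:reader and system_scope:all)" expects
    unset OS_PROJECT_NAME OS_PROJECT_DOMAIN_NAME OS_PROJECT_ID
    export OS_SYSTEM_SCOPE=all
    openstack token issue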
This only happens in horizon and running the same commands in the CLI (python-openstack-client) does not output any errors. To work around this behavior we place the old policy for validate_token rule in /usr/share/openstack-dashboard/openstack_dashboard/conf/keystone_policy.yaml and in the file /etc/keystone/keystone.conf A second way how we can solve the validate_token exception is to disable the option which has been enabled above "enforce_new_defaults = true" which will obviously allow the deprecated policy rules to become effective and hence has the same behavior as implementing the old policy as we did We would like to know if anyone have had this behavior and how it has been solved maybe someone can point us in the right direction to identify better what is going on. Last but not least, it seems that from Horizon, the admin user is being assigned a project scoped token instead of a system scoped token, while from the CLI the same admin user can successfully issue a system:all token and run commands across all resources. We would be very happy to receive any form of input related to the above issues we are facing Thanks in advance for any assistance From smooney at redhat.com Thu Mar 11 22:04:36 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 11 Mar 2021 22:04:36 +0000 Subject: [Nova][FFE] libvirt vdpa support Message-ID: <4f89537fa9972b37014e24f332e6a22e34f233f3.camel@redhat.com> Hi everyone its that time again where feature are almost ready but not quite. vDPA (vHost data path acceleration) is an option kernel device offload api that allows virtio complient software devices such as a virtio net interface to be offloaded to a software or hardware acclerator such as a nic. the full detail can be found in the spec: https://specs.openstack.org/openstack/nova-specs/specs/wallaby/approved/libvirt-vdpa-support.html The series is up for review and gibi and stephen have been activly reviewing it over the last few day. https://review.opendev.org/q/topic:%22vhost-vdpa%22+(status:open%20OR%20status:merged) while the majority of the code now has +2s it wont be approved before the end of the day so i am formal asking for a feature freeze exception to allow an addtional day or to finalise the feature. the current status of the seriese id that there is one bug related to claiming pci device and marking the parent as unavaiable and claiming the parent and marking the child VF/VDPA devices as unavaiable. i have adressed this locally and will be pushing a patch to adress that later this evening once i add unit tests for thoes edgecaes. there is one patch that is pendeing to block unsupported lifecycle operation which i will be writing tomorrow. that final patch will also document them both in a release note and in the api-guide. in the evnet the full seriese is not deamed to be ready in time i have also written seperate patch to block booting vms with VDPA ports now that neutron supports it but nova does not, https://review.opendev.org/c/openstack/nova/+/780065 if we do not merge the full serise i would like to ask that we merge teh first 3 patches. openstack/nova: master: block vm boot with vdpa ports openstack/nova: master: objects: Add 'VDPA' to 'PciDeviceType' openstack/nova: master: add constants for vnic type vdpa the reason for block patch is that now that neutron support vnic-type vdpa i belive its possibel to create a port of that type and for nova to boot a vm however it will not actully use vdpa. 
i belvie it will take this else branch https://github.com/openstack/nova/blob/master/nova/network/os_vif_util.py#L349-L357 and result in the vm booting with a stansard ovs prot as if it was vnic-type=normal which will be confusing to debug since you were expecting to get hardware offloaed ovs but just got a standard ovs port instead without hardware offloads. operators can also block vnic-type vdpa in the neutron config. https://github.com/openstack/neutron/blob/a9fc746249cd34cb7cc594c0b4d74d8ddf65bd46/neutron/conf/plugins/ml2/drivers/openvswitch/mech_ovs_conf.py#L21-L37 but i think the nova solution is nicer since it does not require them to update there configs if this feature is not complted in Wallaby. i do belive we can complete this feature with only a small amount of addtional work and can continue to add more testing and lifecycle operations next cycle. stephen has submited https://review.opendev.org/c/openstack/nova/+/780112 to start extending the functional test framework to emulate vdpa devices so that we can harden the validation in the futrue via functional tests in addtion to the existing unit test coverage. i have not reviewed it yet so cant say how long that will take but i think that is something we can work and ensure it merged early next cycle if its not completed early next week. regards sean. From gouthampravi at gmail.com Fri Mar 12 00:45:04 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 11 Mar 2021 16:45:04 -0800 Subject: [manila] [FFE] Request for "Update cephfs drivers to use ceph-mgr" and "create share from snapshot support for CephFS" In-Reply-To: References: Message-ID: On Thu, Mar 11, 2021 at 1:05 PM Victoria Martínez de la Cruz wrote: > > Hi, > > I would like to ask for an FFE for the RFEs "Update cephfs drivers to use ceph-mgr" [0] and "Create share from snapshot support for CephFS" [1] > > Both RFEs are related. > > The first one updates the cephfs drivers to use the ceph-mgr interface for all manila operations. This change is required since the library currently used is already deprecated and it is expected to be removed in the next Ceph release. > > The second one leverages the previous one and adds a new functionality available through the ceph-mgr interface which is the create share from snapshot support. > > We have been working on both features for two cycles already and we could use a few more days to finish the testing. Thank you for your work on this for the past two cycles. It is difficult to coordinate changes within CephFS and manila at the same time. The ceph community dropping support for ceph_volume_client does put manila users of CephFS in peril when they have to upgrade ceph in a future release. Since we don't know how long the Wallaby release will be used by our users, I don't mind us taking this extra time to test your refactor. The work here, per [0] and [1] should only affect an optional backend driver in manila, and adds no new requirements; nor does it affect other projects or clients. Is that correct? Since Manila doesn't get translations, we don't have the risk of introducing user facing strings in this driver code. So I'm okay with approving this FFE. Please ensure we can wrap this up early so we have sufficient time to test after these changes are merged. > > Looking forward to a positive response. 
> > Thanks, > > Victoria > > [0] https://blueprints.launchpad.net/manila/+spec/update-cephfs-drivers > [1] https://blueprints.launchpad.net/manila/+spec/create-share-from-snapshot-cephfs From ricolin at ricolky.com Fri Mar 12 04:58:31 2021 From: ricolin at ricolky.com (Rico Lin) Date: Fri, 12 Mar 2021 12:58:31 +0800 Subject: [heat] Xena PTG: same schedule as meeting time (4/21)(2 hours) Message-ID: Hi all As VPTG approaches (PTG will be held April 19th through April 23rd), I reserve a room on 4/21 Wed. from 14:00-16:00 (UTC time). We will use Meetpad. this time. Let me know if you have any questions. Please put your name in the PTG etherpad [1] if you plan to join. Also putting topic suggestions or comments on is more than welcome. Note: Don't forget to register PTG (for free) on [2]. [1] https://etherpad.opendev.org/p/xena-ptg-heat [2] PTG Registration: https://april2021-ptg.eventbrite.com *Rico Lin* OIF Board director, OpenSack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack *Email: ricolin at ricolky.com * *Phone: +886-963-612-021* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricolin at ricolky.com Fri Mar 12 05:05:22 2021 From: ricolin at ricolky.com (Rico Lin) Date: Fri, 12 Mar 2021 13:05:22 +0800 Subject: [Multi-arch SIG] Xena PTG Message-ID: Hi all As VPTG approaches (PTG will be held April 19th through April 23rd), I reserve a room on 4/20 Tuesday from 07:00-08:00 and 15:00-16:00 (UTC time). Exactly same as meeting schedule. We will use Meetpad this time. Let me know if you have any questions. You can find us on irc #openstack-multi-arch Please put your name in the PTG etherpad [1] if you plan to join. Also putting topic suggestions or comments on is more than welcome. Note: Don't forget to register PTG (for free) on [2]. [1] https://etherpad.opendev.org/p/xena-ptg-multi-arch-sig [2] PTG Registration: https://april2021-ptg.eventbrite.com *Rico Lin* OIF Board director, OpenSack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack *Email: ricolin at ricolky.com * *Phone: +886-963-612-021* -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Fri Mar 12 05:28:57 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 12 Mar 2021 10:58:57 +0530 Subject: [ops][glance][security] looking for metadefs users In-Reply-To: <20210311205417.jjcdudejvo4ejdsr@yuggoth.org> References: <20210311181212.y2bgwbcqmbad3fdp@yuggoth.org> <20210311192731.gx3otn2w7kg7vi3q@yuggoth.org> <17822f45716.e9a71caf378190.8285456419800974556@ghanshyammann.com> <20210311205417.jjcdudejvo4ejdsr@yuggoth.org> Message-ID: On Fri, Mar 12, 2021 at 2:27 AM Jeremy Stanley wrote: > On 2021-03-11 14:22:21 -0600 (-0600), Ghanshyam Mann wrote: > [...] > > In a quick search, interop certification guidelines 1] also does > > not use these API capabilities so changing to admin should be fine > > from interop and so does from Tempest test modification point of > > view. > [...] > > Yep, if you check out the original bug reports leading up to the > OSSN, we did at least confirm these were not part of any trademark > program requirement before recommending that access be blocked. That > was one of our deciding factors in the disclosure timeline. > -- > Jeremy Stanley > Thanks to Sean and Belmiro for confirming how and where metadefs are used. I think it makes more sense now to keep these metadef create/update/delete APIs admin-only and grant read-only access to normal users. 
In the advisory we should also specify that there is still a possibility of information leak in this case. Thanks and Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongli.he at intel.com Fri Mar 12 07:18:58 2021 From: yongli.he at intel.com (yonglihe) Date: Fri, 12 Mar 2021 15:18:58 +0800 Subject: [Nova][FFE] Smart-nic support Message-ID: <83cf823f-04ab-4f3a-a6c3-3bca365303df@intel.com> Hi,  Everyone Smart nics management involved Nova, Neutron and Cyborg. After 2 releases of discussing, coding and reviewing, we have merged Cyborg and Neutron support. Nova patches also got lots of attention and had many rounds of review, plus several +2. I hope we could merge Nova patches also in this release. The following is the Nova patches topic, individual patches links, and related resources links Now we blocked by lots of comments when reaching to the end of the feature freeze. Those comments could be addressed in 2-3 days, then we could get 3 more rounds review, very likely we could make it by RC1. Those comments include: • functional test -- released already. • minor typo, text changes, and needs more code comments • several codes flow refactor request • how to store arq-uuid ==nova patch topic link: https://review.opendev.org/q/topic:%22bp%252Fsriov-smartnic-support%22+(status:open%20OR%20status:merged) ==nova individual patch link: 1) smartnic support - cyborg drive: two +2 https://review.opendev.org/c/openstack/nova/+/771362 2) smartnic support - new vnic type: rounds review https://review.opendev.org/c/openstack/nova/+/771363 3) smartnic support: main patch rounds review, one +2 https://review.opendev.org/c/openstack/nova/+/758944 4) smartnic support - reject server move and suspend, one +2 https://review.opendev.org/c/openstack/nova/+/779913 5) smartnic support - functional tests https://review.opendev.org/c/openstack/nova/+/780147 == resource: 1) blueprint sriov-smartnic-support https://blueprints.launchpad.net/nova/+spec/sriov-smartnic-support 2) Approved spec: merged https://review.opendev.org/c/openstack/nova-specs/+/742785 3) Cyborg NIC driver: merged https://review.opendev.org/c/openstack/cyborg/+/758942 4) neutron patch sets: merged neutron patch set: merged https://review.opendev.org/q/topic:%22bug%252F1906602%22+ 5) neutron-lib: merged Add new VNIC types for Cyborg provisioned ports https://review.opendev.org/c/openstack/neutron-lib/+/768324 6) neutron ml2 plugin support: merged [SR-IOV] Add support for ACCELERATOR_DIRECT VNIC type https://review.opendev.org/c/openstack/neutron/+/779292 Yongli He Regards From hberaud at redhat.com Fri Mar 12 08:49:11 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 12 Mar 2021 09:49:11 +0100 Subject: [release] Release countdown for week R-5 Mar 15 - Mar 19 Message-ID: Development Focus ----------------- We just passed feature freeze! Until release branches are cut, you should stop accepting featureful changes to deliverables following the cycle-with-rc release model, or to libraries. Exceptions should be discussed on separate threads on the mailing-list, and feature freeze exceptions approved by the team's PTL. Focus should be on finding and fixing release-critical bugs, so that release candidates and final versions of the Wallaby deliverables can be proposed, well ahead of the final Wallaby release date (14 April, 2021). General Information ------------------- We are still finishing up processing a few release requests, but the Wallaby release requirements are now frozen. 
If new library releases are needed to fix release-critical bugs in Wallaby, you must request a Requirements Freeze Exception (RFE) from the requirements team before we can do a new release to avoid having something released in Wallaby that is not actually usable. This is done by posting to the openstack-discuss mailing list with a subject line similar to: [$PROJECT][requirements] RFE requested for $PROJECT_LIB Include justification/reasoning for why a RFE is needed for this lib. If/when the requirements team OKs the post-freeze update, we can then process a new release. A soft String freeze is now in effect, in order to let the I18N team do the translation work in good conditions. In Horizon and the various dashboard plugins, you should stop accepting changes that modify user-visible strings. Exceptions should be discussed on the mailing-list. By 5 April, 2021 this will become a hard string freeze, with no changes in user-visible strings allowed. Actions ------- stable/wallaby branches should be created soon for all not-already-branched libraries. You should expect 2-3 changes to be proposed for each: a .gitreview update, a reno update (skipped for projects not using reno), and a tox.ini constraints URL update. Please review those in priority so that the branch can be functional ASAP. The Prelude section of reno release notes is rendered as the top level overview for the release. Any important overall messaging for Wallaby changes should be added there to make sure the consumers of your release notes see them. Finally, if you haven't proposed Wallaby cycle-highlights yet, you are already late to the party. Please see http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020714.html for details. Upcoming Deadlines & Dates -------------------------- RC1 deadline: 22 March, 2021 (R-3 week) Final RC deadline: 5 April, 2021 (R-1 week) Final Wallaby release: 14 April, 2021 Xena PTG: 19 - 23 April, 2021 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Mar 12 09:31:24 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 12 Mar 2021 10:31:24 +0100 Subject: [Nova][FFE] libvirt vdpa support In-Reply-To: <4f89537fa9972b37014e24f332e6a22e34f233f3.camel@redhat.com> References: <4f89537fa9972b37014e24f332e6a22e34f233f3.camel@redhat.com> Message-ID: On Thu, Mar 11, 2021 at 22:04, Sean Mooney wrote: > Hi everyone > its that time again where feature are almost ready but not quite. 
> vDPA (vHost data path acceleration) is an option kernel device > offload api > that allows virtio complient software devices such as a virtio net > interface to be > offloaded to a software or hardware acclerator such as a nic. > the full detail can be found in the spec: > https://specs.openstack.org/openstack/nova-specs/specs/wallaby/approved/libvirt-vdpa-support.html > > The series is up for review and gibi and stephen have been activly > reviewing it over the last few day. > https://review.opendev.org/q/topic:%22vhost-vdpa%22+(status:open%20OR%20status:merged) > > while the majority of the code now has +2s it wont be approved before > the end of the day so i am formal asking for > a feature freeze exception to allow an addtional day or to finalise > the feature. > > the current status of the seriese id that there is one bug related to > claiming pci device and marking the parent as unavaiable > and claiming the parent and marking the child VF/VDPA devices as > unavaiable. > i have adressed this locally and will be pushing a patch to adress > that later this evening once i add unit tests for thoes > edgecaes. > > there is one patch that is pendeing to block unsupported lifecycle > operation which i will be writing tomorrow. > that final patch will also document them both in a release note and > in the api-guide. > > in the evnet the full seriese is not deamed to be ready in time i > have also written seperate patch to block booting vms with VDPA ports > now that neutron supports it but nova does not, > https://review.opendev.org/c/openstack/nova/+/780065 if we do not > merge the full serise > i would like to ask that we merge teh first 3 patches. > > openstack/nova: master: block vm boot with vdpa ports > openstack/nova: master: objects: Add 'VDPA' to 'PciDeviceType' > openstack/nova: master: add constants for vnic type vdpa > > the reason for block patch is that now that neutron support vnic-type > vdpa i belive its possibel to create a port of that type > and for nova to boot a vm however it will not actully use vdpa. i > belvie it will take this else branch > https://github.com/openstack/nova/blob/master/nova/network/os_vif_util.py#L349-L357 > and result in the vm booting with a stansard ovs > prot as if it was vnic-type=normal which will be confusing to debug > since you were expecting to get hardware offloaed > ovs but just got a standard ovs port instead without hardware > offloads. operators can also block vnic-type vdpa in the neutron > config. > https://github.com/openstack/neutron/blob/a9fc746249cd34cb7cc594c0b4d74d8ddf65bd46/neutron/conf/plugins/ml2/drivers/openvswitch/mech_ovs_conf.py#L21-L37 > but i think the nova solution is nicer since it does not require them > to update there configs if this feature is not complted in Wallaby. > > i do belive we can complete this feature with only a small amount of > addtional work > and can continue to add more testing and lifecycle operations next > cycle. > stephen has submited > https://review.opendev.org/c/openstack/nova/+/780112 to start > extending the functional > test framework to emulate vdpa devices so that we can harden the > validation in the futrue via functional tests > in addtion to the existing unit test coverage. i have not reviewed it > yet so cant say how long that will take > but i think that is something we can work and ensure it merged early > next cycle if its not completed early next week. Both Stephen and I on board to continue reviewing the VDPA series further. 
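(For anyone following along, the user-facing flow under discussion is roughly the following; the names are made up, and if your client does not yet accept the new vnic type the binding:vnic_type attribute can be set through the Neutron API instead:)

    openstack port create --network offload-net --vnic-type vdpa vdpa-port0
    openstack server create --flavor m1.small --image focal-server \
      --port vdpa-port0 vdpa-vm0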
I just did my review round this morning and the implementation looks good to me, the bug we see yesterday has been fixed. I have some minor comments, mostly cosmetic. I can review the ops blocking patch today and that will be the last functional patch in the series that needs to land. The reno patch can be merged later without risk. The functional tests also can be landed next week without risk. I know that Sean did testing with real hardware locally so I'm pretty confident about the series. So I'm OK with accepting the FFE request. If anybody has objection please reply. Cheers, gibi > > regards > sean. > > > > > From balazs.gibizer at est.tech Fri Mar 12 09:51:25 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 12 Mar 2021 10:51:25 +0100 Subject: [Nova][FFE] Smart-nic support In-Reply-To: <83cf823f-04ab-4f3a-a6c3-3bca365303df@intel.com> References: <83cf823f-04ab-4f3a-a6c3-3bca365303df@intel.com> Message-ID: On Fri, Mar 12, 2021 at 15:18, yonglihe wrote: > Hi, Everyone > > Smart nics management involved Nova, Neutron and Cyborg. After 2 > releases of discussing, coding and reviewing, we have merged Cyborg > and Neutron support. Nova patches also got lots of attention and had > many rounds of review, plus several +2. > > I hope we could merge Nova patches also in this release. The > following is the Nova patches topic, individual patches links, and > related resources links > > Now we blocked by lots of comments when reaching to the end of the > feature freeze. Those comments could be addressed in 2-3 days, then > we could get 3 more rounds review, very likely we could make it by > RC1. > > Those comments include: > • functional test -- released already. > • minor typo, text changes, and needs more code comments > • several codes flow refactor request > • how to store arq-uuid > > > ==nova patch topic link: > https://review.opendev.org/q/topic:%22bp%252Fsriov-smartnic-support%22+(status:open%20OR%20status:merged) > > ==nova individual patch link: > 1) smartnic support - cyborg drive: two +2 > https://review.opendev.org/c/openstack/nova/+/771362 > > 2) smartnic support - new vnic type: rounds review > https://review.opendev.org/c/openstack/nova/+/771363 > > 3) smartnic support: main patch rounds review, one +2 > https://review.opendev.org/c/openstack/nova/+/758944 > > 4) smartnic support - reject server move and suspend, one +2 > https://review.opendev.org/c/openstack/nova/+/779913 > > 5) smartnic support - functional tests > https://review.opendev.org/c/openstack/nova/+/780147 > > > == resource: > 1) blueprint sriov-smartnic-support > https://blueprints.launchpad.net/nova/+spec/sriov-smartnic-support > > 2) Approved spec: merged > https://review.opendev.org/c/openstack/nova-specs/+/742785 > > 3) Cyborg NIC driver: merged > https://review.opendev.org/c/openstack/cyborg/+/758942 > > 4) neutron patch sets: merged > neutron patch set: merged > https://review.opendev.org/q/topic:%22bug%252F1906602%22+ > > 5) neutron-lib: merged > Add new VNIC types for Cyborg provisioned ports > https://review.opendev.org/c/openstack/neutron-lib/+/768324 > > 6) neutron ml2 plugin support: merged > [SR-IOV] Add support for ACCELERATOR_DIRECT VNIC type > https://review.opendev.org/c/openstack/neutron/+/779292 We talked about this on IRC[1] this morning. In summary there are couple of sizable changes needed in the series which pushes the expected readiness of the patches to mid next week. As the series has OVO changes we feel this is too risky to merge close to RC1. 
So we agreed with Yongli to defer this feature to Xena. Cheers, gibi [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2021-03-12.log.html#t2021-03-12T09:12:23 > > Yongli He > Regards > > From balazs.gibizer at est.tech Fri Mar 12 10:26:37 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 12 Mar 2021 11:26:37 +0100 Subject: [nova][placement] Wallaby release In-Reply-To: <8KLAPQ.YM5K5P44TRVX2@est.tech> References: <8KLAPQ.YM5K5P44TRVX2@est.tech> Message-ID: Hi, So we hit Feature Freeze yesterday. I've update launchpad blueprint statuses to reflect reality. There are couple of series that was approved before the freeze but haven't landed yet. We are pushing these through the gate: * pci-socket-affinity https://review.opendev.org/c/openstack/nova/+/772779 * port-scoped-sriov-numa-affinity https://review.opendev.org/c/openstack/nova/+/773792 * https://review.opendev.org/q/topic:bp/compact-db-migrations-wallaby+status:open * https://review.opendev.org/q/topic:bp/allow-secure-boot-for-qemu-kvm-guests+status:open We are finishing up the work on https://review.opendev.org/q/topic:vhost-vdpa+status:open where FFE is being requested. Cheers, gibi On Mon, Mar 1, 2021 at 14:31, Balazs Gibizer wrote: > Hi, > > We are getting close to the Wallaby release. So I create a tracking > etherpad[1] with the schedule and TODOs. > > One thing that I want to highlight is that we will hit Feature Freeze > on 11th of March. As the timeframe between FF and RC1 is short I'm > not plannig with FFEs. Patches that are approved before 11 March EOB > can be rechecked or rebased if needed and then re-approved. If you > have a patch that is really close but not approved before the > deadline, and you think there are two cores that willing to review it > before RC1, then please send a mail to the ML with [nova][FFE] > subject prefix not later than 16th of March EOB. > > Cheers, > gibi > > [1] https://etherpad.opendev.org/p/nova-wallaby-rc-potential > > > From luke.camilleri at zylacomputing.com Fri Mar 12 11:30:15 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Fri, 12 Mar 2021 12:30:15 +0100 Subject: [keystone][ops]Keystone enforced policy issues - Victoria In-Reply-To: References: Message-ID: Updated subject with tags On 11/03/2021 22:39, Luke Camilleri wrote: > Hi there, we have been troubleshooting keystone policies for a couple > of days now (Victoria) and would like to reach out to anyone with some > experience on the keystone policies. Basically we would like to follow > the standard admin, member and reader roles together with the scopes > function and have therefore enabled the below two option in keystone.conf > > enforce_scope = true > enforce_new_defaults = true > > once enabled we started seeing the below error related to token > validation in keystone.log: > > 2021-03-11 19:50:12.009 1047463 WARNING > keystone.server.flask.application > [req-33cda154-1d54-447e-8563-0676dc5d8471 > 020bd854741e4ba69c87d4142cad97a5 c584121f9d1a4334a3853c61bb5b9d93 - > default default] You are not authorized to perform the requested > action: identity:validate_token.: keystone.exception.ForbiddenAction: > You are not authorized to perform the requested action: > identity:validate_token. 
> > The policy was previously setup as: > > identity:validate_token: rule:service_admin_or_token_subject > > but has now been implemented with the new scope format to: > > identity:validate_token: (role:reader and system_scope:all) or > rule:service_role or rule:token_subject > > If we change the policy to the old one, we stop receiving the > identity:validate_token exception. This only happens in horizon and > running the same commands in the CLI (python-openstack-client) does > not output any errors. To work around this behavior we place the old > policy for validate_token rule in > /usr/share/openstack-dashboard/openstack_dashboard/conf/keystone_policy.yaml > and in the file /etc/keystone/keystone.conf > > A second way how we can solve the validate_token exception is to > disable the option which has been enabled above "enforce_new_defaults > = true" which will obviously allow the deprecated policy rules to > become effective and hence has the same behavior as implementing the > old policy as we did > > We would like to know if anyone have had this behavior and how it has > been solved maybe someone can point us in the right direction to > identify better what is going on. > > Last but not least, it seems that from Horizon, the admin user is > being assigned a project scoped token instead of a system scoped > token, while from the CLI the same admin user can successfully issue a > system:all token and run commands across all resources. We would be > very happy to receive any form of input related to the above issues we > are facing > > Thanks in advance for any assistance > From hberaud at redhat.com Fri Mar 12 11:52:10 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 12 Mar 2021 12:52:10 +0100 Subject: [nova][placement] Wallaby release In-Reply-To: References: <8KLAPQ.YM5K5P44TRVX2@est.tech> Message-ID: Ack (from the release team). Le ven. 12 mars 2021 à 11:29, Balazs Gibizer a écrit : > Hi, > > So we hit Feature Freeze yesterday. I've update launchpad blueprint > statuses to reflect reality. > > There are couple of series that was approved before the freeze but > haven't landed yet. We are pushing these through the gate: > > * pci-socket-affinity > https://review.opendev.org/c/openstack/nova/+/772779 > * port-scoped-sriov-numa-affinity > https://review.opendev.org/c/openstack/nova/+/773792 > * > > https://review.opendev.org/q/topic:bp/compact-db-migrations-wallaby+status:open > * > > https://review.opendev.org/q/topic:bp/allow-secure-boot-for-qemu-kvm-guests+status:open > > We are finishing up the work on > https://review.opendev.org/q/topic:vhost-vdpa+status:open where FFE is > being requested. > > Cheers, > gibi > > > On Mon, Mar 1, 2021 at 14:31, Balazs Gibizer > wrote: > > Hi, > > > > We are getting close to the Wallaby release. So I create a tracking > > etherpad[1] with the schedule and TODOs. > > > > One thing that I want to highlight is that we will hit Feature Freeze > > on 11th of March. As the timeframe between FF and RC1 is short I'm > > not plannig with FFEs. Patches that are approved before 11 March EOB > > can be rechecked or rebased if needed and then re-approved. If you > > have a patch that is really close but not approved before the > > deadline, and you think there are two cores that willing to review it > > before RC1, then please send a mail to the ML with [nova][FFE] > > subject prefix not later than 16th of March EOB. 
> > > > Cheers, > > gibi > > > > [1] https://etherpad.opendev.org/p/nova-wallaby-rc-potential > > > > > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Mar 12 15:07:27 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 12 Mar 2021 09:07:27 -0600 Subject: [openstack-community] [victoria][keystone][ops]Keystone enforced policy issues In-Reply-To: <95b2b307-64de-c843-0744-1126718cf24a@zylacomputing.com> References: <95b2b307-64de-c843-0744-1126718cf24a@zylacomputing.com> Message-ID: <51EA38A2-457F-435E-AB96-3C547D393F42@demarco.com> Adding the OpenStack discuss list > On Mar 12, 2021, at 6:34 AM, Luke Camilleri wrote: > > Hi there, we have been troubleshooting keystone policies for a couple of days now (Victoria) and would like to reach out to anyone with some experience on the keystone policies. Basically we would like to follow the standard admin, member and reader roles together with the scopes function and have therefore enabled the below two option in keystone.conf > > enforce_scope = true > enforce_new_defaults = true > > once enabled we started seeing the below error related to token validation in keystone.log: > > 2021-03-11 19:50:12.009 1047463 WARNING keystone.server.flask.application [req-33cda154-1d54-447e-8563-0676dc5d8471 020bd854741e4ba69c87d4142cad97a5 c584121f9d1a4334a3853c61bb5b9d93 - default default] You are not authorized to perform the requested action: identity:validate_token.: keystone.exception.ForbiddenAction: You are not authorized to perform the requested action: identity:validate_token. > > The policy was previously setup as: > > identity:validate_token: rule:service_admin_or_token_subject > > but has now been implemented with the new scope format to: > > identity:validate_token: (role:reader and system_scope:all) or rule:service_role or rule:token_subject > > If we change the policy to the old one, we stop receiving the identity:validate_token exception. This only happens in horizon and running the same commands in the CLI (python-openstack-client) does not output any errors. 
To work around this behavior we place the old policy for validate_token rule in /usr/share/openstack-dashboard/openstack_dashboard/conf/keystone_policy.yaml and in the file /etc/keystone/keystone.conf > > A second way how we can solve the validate_token exception is to disable the option which has been enabled above "enforce_new_defaults = true" which will obviously allow the deprecated policy rules to become effective and hence has the same behavior as implementing the old policy as we did > > We would like to know if anyone have had this behavior and how it has been solved maybe someone can point us in the right direction to identify better what is going on. > > Last but not least, it seems that from Horizon, the admin user is being assigned a project scoped token instead of a system scoped token, while from the CLI the same admin user can successfully issue a system:all token and run commands across all resources. We would be very happy to receive any form of input related to the above issues we are facing > > Thanks in advance for any assistance > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community From kennelson11 at gmail.com Fri Mar 12 19:01:47 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 12 Mar 2021 11:01:47 -0800 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Hello All! I know we are past the deadline, but I wanted to do one final call. If you have highlights you want included in the release marketing for Wallaby, you must have patches pushed to the releases repo by Sunday March 14th at 6:00 UTC. If you can't have it pushed by then but want to be included, please contact me directly. Thanks! -Kendall (diablo_rojo) On Thu, Feb 25, 2021 at 2:59 PM Kendall Nelson wrote: > Hello Everyone! > > It's time to start thinking about calling out 'cycle-highlights' in your > deliverables! I have no idea how we are here AGAIN ALREADY, alas, here we > be. > > As PTLs, you probably get many pings towards the end of every release > cycle by various parties (marketing, management, journalists, etc) asking > for highlights of what is new and what significant changes are coming in > the new release. By putting them all in the same place it makes them easy > to reference because they get compiled into a pretty website like this from > the last few releases: Stein[1], Train[2]. > > We don't need a fully fledged marketing message, just a few highlights > (3-4 ideally), from each project team. Looking through your release notes > might be a good place to start. > > *The deadline for cycle highlights is the end of the R-5 week [3] (next > week) on March 12th.* > > How To Reminder: > ------------------------- > > Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the > openstack/releases repo like this: > > cycle-highlights: > - Introduced new service to use unused host to mine bitcoin. > > The formatting options for this tag are the same as what you are probably > used to with Reno release notes. > > Also, you can check on the formatting of the output by either running > locally: > > tox -e docs > > And then checking the resulting doc/build/html/$RELEASE/highlights.html > file or the output of the build-openstack-sphinx-docs job under > html/$RELEASE/highlights.html. > > Feel free to add me as a reviewer on your patches. > > Can't wait to see you all have accomplished this release! 
> > Thanks :) > > -Kendall Nelson (diablo_rojo) > > [1] https://releases.openstack.org/stein/highlights.html > [2] https://releases.openstack.org/train/highlights.html > [3] htt > > https://releases.openstack.org/wallaby/schedule.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Mar 12 19:05:36 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 12 Mar 2021 12:05:36 -0700 Subject: [tripleo] Update: migrating master from CentOS-8 to CentOS-8-Stream - starting this Sunday (March 07) In-Reply-To: References: Message-ID: On Wed, Mar 10, 2021 at 10:36 AM Wesley Hayutin wrote: > > > On Tue, Mar 9, 2021 at 5:13 PM Wesley Hayutin wrote: > >> >> >> On Mon, Mar 8, 2021 at 12:46 AM Marios Andreou wrote: >> >>> >>> >>> On Mon, Mar 8, 2021 at 1:27 AM Wesley Hayutin >>> wrote: >>> >>>> >>>> >>>> On Fri, Mar 5, 2021 at 10:53 AM Ronelle Landy >>>> wrote: >>>> >>>>> Hello All, >>>>> >>>>> Just a reminder that we will be starting to implement steps to migrate >>>>> from master centos-8 -> centos-8-stream on this Sunday - March 07, 2021. >>>>> >>>>> The plan is outlined in: >>>>> https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw#Check-list-for-dday >>>>> >>>>> In summary, on Sunday, we plan to: >>>>> - Move the master integration line for promotions to build containers >>>>> and images on centos-8 stream nodes >>>>> - Change the release files to bring down centos-8 stream repos for use >>>>> in test jobs (test jobs will still start on centos-8 nodes - changing this >>>>> nodeset will happen later) >>>>> - Image build and container build check jobs will be moved to >>>>> non-voting during this transition. >>>>> >>>> >>>>> We have already run all the test jobs in RDO with centos-8 stream >>>>> content running on centos-8 nodes to prequalify this transition. >>>>> >>>>> We will update this list with status as we go forward with next steps. >>>>> >>>>> Thanks! >>>>> >>>> >>>> OK... status update. >>>> >>>> Thanks to Ronelle, Ananya and Sagi for working this Sunday to ensure >>>> Monday wasn't a disaster upstream. TripleO master jobs have successfully >>>> been migrated to CentOS-8-Stream today. You should see "8-stream" now in >>>> /etc/yum.repos.d/tripleo-centos.* repos. >>>> >>>> >>> >>> \o/ this is fantastic! >>> >>> nice work all thanks to everyone involved for getting this done with >>> minimal disruption >>> >>> tripleo-ci++ >>> >>> >>> >>> >>> >>>> Your CentOS-8-Stream Master hash is: >>>> >>>> edd46672cb9b7a661ecf061942d71a72 >>>> >>>> Your master repos are: >>>> https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo >>>> >>>> Containers, and overcloud images should all be centos-8-stream. >>>> >>>> The tripleo upstream check jobs for container builds and overcloud images are NON-VOTING until all the centos-8 jobs have been migrated. We'll continue to migrate each branch this week. >>>> >>>> Please open launchpad bugs w/ the "alert" tag if you are having any issues. >>>> >>>> Thanks and well done all! >>>> >>>> >>>> >>>>> >>>>> >>>>> >>>>> >>>> >> OK.... stable/victoria will start to migrate this evening to >> centos-8-stream >> >> We are looking to promote the following [1]. Again if you hit any >> issues, please just file a launchpad bug w/ the "alert" tag. >> >> Thanks >> >> >> [1] >> https://trunk.rdoproject.org/api-centos8-victoria/api/civotes_agg_detail.html?ref_hash=457ea897ac3b7552b82c532adcea63f0 >> >> > > OK... stable/victoria is now on centos-8-stream. > Holler via launchpad if you hit something... 
> now we're working on stable/ussuri :) > OK.. stable/ussuri and stable/train will be converted to use centos-8-stream shortly just waiting on two patches in the gate [1]. Once they are merged we'll be updating zuul to start with a centos-8-stream node as well. The hard part is just about done. Again, thanks to Ananya and Ronelle! [1] https://review.opendev.org/c/openstack/tripleo-quickstart/+/779803/ https://review.opendev.org/c/openstack/tripleo-quickstart/+/779799/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ntt7 at psu.edu Fri Mar 12 19:53:33 2021 From: ntt7 at psu.edu (Tallman, Nathan) Date: Fri, 12 Mar 2021 19:53:33 +0000 Subject: Status of Qinling Message-ID: <79C191AF-0BE2-486C-9D21-1F158575A991@psu.edu> Hello OpenStack community, I'm trying to find out if Qinling is alive or dead. The GitHub repo says it's archived and no longer maintained, but the project page on openstack.org doesn't mention anything about Qinling being abandoned. Can anyone shed light on this for me? Thanks, Nathan -- Nathan Tallman Digital Preservation Librarian Penn State University Libraries (814) 865-0860 ntt7 at psu.edu Schedule a Meeting Chat with me on Teams [Microsoft Teams Logo] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 360 bytes Desc: image001.png URL: From fungi at yuggoth.org Fri Mar 12 20:34:47 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 12 Mar 2021 20:34:47 +0000 Subject: Status of Qinling In-Reply-To: <79C191AF-0BE2-486C-9D21-1F158575A991@psu.edu> References: <79C191AF-0BE2-486C-9D21-1F158575A991@psu.edu> Message-ID: <20210312203446.wekaefvvqwotf3y3@yuggoth.org> On 2021-03-12 19:53:33 +0000 (+0000), Tallman, Nathan wrote: > I'm trying to find out if Qinling is alive or dead. The GitHub > repo https://opendev.org/openstack/qinling/src/branch/master/README.rst > says it's archived and no longer maintained, but the project > page https://www.openstack.org/software/releases/rocky/components/qinling > on openstack.org doesn't mention anything about Qinling being > abandoned. Can anyone shed light on this for me? "Dead" (unless someone resurrects it). Rocky was roughly 2.5 years ago, which is the information you're looking at on that software page. The most recent release was Victoria and the upcoming one in a few weeks is Wallaby (release names go in English alphabetical order). The Qinling project was officially retired when https://review.opendev.org/764523 merged on 2020-12-09, so Victoria was the last official OpenStack release to include it: https://releases.openstack.org/victoria/ It will not be included in the Wallaby release. If there is interest in reviving Qinling, the OpenStack Technical Committee might consider allowing its inclusion in future releases. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ignaziocassano at gmail.com Fri Mar 12 06:43:22 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 12 Mar 2021 07:43:22 +0100 Subject: [stein][neutron] gratuitous arp In-Reply-To: <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> Message-ID: Hello Tobias, the result is the same as your. I do not know what happens in depth to evaluate if the behavior is the same. I solved on stein with patch suggested by Sean : force_legacy_port_bind workaround. So I am asking if the problem exists also on train. Ignazio Il Gio 11 Mar 2021, 19:27 Tobias Urdin ha scritto: > Hello, > > > Not sure if you are having the same issue as us, but we are following > https://bugs.launchpad.net/neutron/+bug/1901707 but > > are patching it with something similar to > https://review.opendev.org/c/openstack/nova/+/741529 to workaround the > issue until it's completely solved. > > > Best regards > > ------------------------------ > *From:* Ignazio Cassano > *Sent:* Wednesday, March 10, 2021 7:57:21 AM > *To:* Sean Mooney > *Cc:* openstack-discuss; Slawek Kaplonski > *Subject:* Re: [stein][neutron] gratuitous arp > > Hello All, > please, are there news about bug 1815989 ? > On stein I modified code as suggested in the patches. > I am worried when I will upgrade to train: wil this bug persist ? > On which openstack version this bug is resolved ? > Ignazio > > > > Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > >> Hello, I tried to update to last stein packages on yum and seems this bug >> still exists. >> Before the yum update I patched some files as suggested and and ping to >> vm worked fine. >> After yum update the issue returns. >> Please, let me know If I must patch files by hand or some new parameters >> in configuration can solve and/or the issue is solved in newer openstack >> versions. >> Thanks >> Ignazio >> >> >> Il Mer 29 Apr 2020, 19:49 Sean Mooney ha scritto: >> >>> On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: >>> > Many thanks. >>> > Please keep in touch. >>> here are the two patches. >>> the first https://review.opendev.org/#/c/724386/ is the actual change >>> to add the new config opition >>> this needs a release note and some tests but it shoudl be functional >>> hence the [WIP] >>> i have not enable the workaround in any job in this patch so the ci run >>> will assert this does not break >>> anything in the default case >>> >>> the second patch is https://review.opendev.org/#/c/724387/ which >>> enables the workaround in the multi node ci jobs >>> and is testing that live migration exctra works when the workaround is >>> enabled. >>> >>> this should work as it is what we expect to happen if you are using a >>> moderne nova with an old neutron. >>> its is marked [DNM] as i dont intend that patch to merge but if the >>> workaround is useful we migth consider enableing >>> it for one of the jobs to get ci coverage but not all of the jobs. >>> >>> i have not had time to deploy a 2 node env today but ill try and test >>> this locally tomorow. 
>>> >>> >>> >>> > Ignazio >>> > >>> > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney < >>> smooney at redhat.com> >>> > ha scritto: >>> > >>> > > so bing pragmatic i think the simplest path forward given my other >>> patches >>> > > have not laned >>> > > in almost 2 years is to quickly add a workaround config option to >>> disable >>> > > mulitple port bindign >>> > > which we can backport and then we can try and work on the actual fix >>> after. >>> > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that >>> shoudl >>> > > serve as a workaround >>> > > for thos that hav this issue but its a regression in functionality. >>> > > >>> > > i can create a patch that will do that in an hour or so and submit a >>> > > followup DNM patch to enabel the >>> > > workaound in one of the gate jobs that tests live migration. >>> > > i have a meeting in 10 mins and need to finish the pacht im >>> currently >>> > > updating but ill submit a poc once that is done. >>> > > >>> > > im not sure if i will be able to spend time on the actul fix which i >>> > > proposed last year but ill see what i can do. >>> > > >>> > > >>> > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: >>> > > > PS >>> > > > I have testing environment on queens,rocky and stein and I can >>> make test >>> > > > as you need. >>> > > > Ignazio >>> > > > >>> > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < >>> > > > ignaziocassano at gmail.com> ha scritto: >>> > > > >>> > > > > Hello Sean, >>> > > > > the following is the configuration on my compute nodes: >>> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt >>> > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 >>> > > > > libvirt-libs-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 >>> > > > > libvirt-client-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 >>> > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 >>> > > > > libvirt-python-4.5.0-1.el7.x86_64 >>> > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 >>> > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 >>> > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu >>> > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 >>> > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 >>> > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 >>> > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch >>> > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch >>> > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 >>> > > > > >>> > > > > >>> > > > > As far as firewall driver >>> > > >>> > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: >>> > > > > >>> > > > > 
firewall_driver = iptables_hybrid >>> > > > > >>> > > > > I have same libvirt/qemu version on queens, on rocky and on stein >>> > > >>> > > testing >>> > > > > environment and the >>> > > > > same firewall driver. >>> > > > > Live migration on provider network on queens works fine. >>> > > > > It does not work fine on rocky and stein (vm lost connection >>> after it >>> > > >>> > > is >>> > > > > migrated and start to respond only when the vm send a network >>> packet , >>> > > >>> > > for >>> > > > > example when chrony pools the time server). >>> > > > > >>> > > > > Ignazio >>> > > > > >>> > > > > >>> > > > > >>> > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < >>> > > >>> > > smooney at redhat.com> >>> > > > > ha scritto: >>> > > > > >>> > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: >>> > > > > > > Hello, some updated about this issue. >>> > > > > > > I read someone has got same issue as reported here: >>> > > > > > > >>> > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 >>> > > > > > > >>> > > > > > > If you read the discussion, someone tells that the garp must >>> be >>> > > >>> > > sent by >>> > > > > > > qemu during live miration. >>> > > > > > > If this is true, this means on rocky/stein the qemu/libvirt >>> are >>> > > >>> > > bugged. >>> > > > > > >>> > > > > > it is not correct. >>> > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve >>> as >>> > > >>> > > its mac >>> > > > > > learning frames >>> > > > > > instead >>> > > >>> > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol >>> > > > > > >>> https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html >>> > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 >>> > > > > > >>> https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html >>> > > > > > but was fixed by >>> > > > > > >>> > > >>> > > >>> https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b >>> > > > > > can you confirm you are not using the broken 2.6.0 release and >>> are >>> > > >>> > > using >>> > > > > > 2.7 or newer or 2.4 and older. >>> > > > > > >>> > > > > > >>> > > > > > > So I tried to use stein and rocky with the same version of >>> > > >>> > > libvirt/qemu >>> > > > > > > packages I installed on queens (I updated compute and >>> controllers >>> > > >>> > > node >>> > > > > > >>> > > > > > on >>> > > > > > > queens for obtaining same libvirt/qemu version deployed on >>> rocky >>> > > >>> > > and >>> > > > > > >>> > > > > > stein). >>> > > > > > > >>> > > > > > > On queens live migration on provider network continues to >>> work >>> > > >>> > > fine. >>> > > > > > > On rocky and stein not, so I think the issue is related to >>> > > >>> > > openstack >>> > > > > > > components . >>> > > > > > >>> > > > > > on queens we have only a singel prot binding and nova blindly >>> assumes >>> > > > > > that the port binding details wont >>> > > > > > change when it does a live migration and does not update the >>> xml for >>> > > >>> > > the >>> > > > > > netwrok interfaces. 
>>> > > > > > >>> > > > > > the port binding is updated after the migration is complete in >>> > > > > > post_livemigration >>> > > > > > in rocky+ neutron optionally uses the multiple port bindings >>> flow to >>> > > > > > prebind the port to the destiatnion >>> > > > > > so it can update the xml if needed and if post copy live >>> migration is >>> > > > > > enable it will asyconsly activate teh dest port >>> > > > > > binding before post_livemigration shortenting the downtime. >>> > > > > > >>> > > > > > if you are using the iptables firewall os-vif will have >>> precreated >>> > > >>> > > the >>> > > > > > ovs port and intermediate linux bridge before the >>> > > > > > migration started which will allow neutron to wire it up (put >>> it on >>> > > >>> > > the >>> > > > > > correct vlan and install security groups) before >>> > > > > > the vm completes the migraton. >>> > > > > > >>> > > > > > if you are using the ovs firewall os-vif still precreates teh >>> ovs >>> > > >>> > > port >>> > > > > > but libvirt deletes it and recreats it too. >>> > > > > > as a result there is a race when using openvswitch firewall >>> that can >>> > > > > > result in the RARP packets being lost. >>> > > > > > >>> > > > > > > >>> > > > > > > Best Regards >>> > > > > > > Ignazio Cassano >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < >>> > > > > > >>> > > > > > smooney at redhat.com> >>> > > > > > > ha scritto: >>> > > > > > > >>> > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: >>> > > > > > > > > Hello, I have this problem with rocky or newer with >>> > > >>> > > iptables_hybrid >>> > > > > > > > > firewall. >>> > > > > > > > > So, can I solve using post copy live migration ??? >>> > > > > > > > >>> > > > > > > > so this behavior has always been how nova worked but rocky >>> the >>> > > > > > > > >>> > > > > > > > >>> > > > > > >>> > > > > > >>> > > >>> > > >>> https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html >>> > > > > > > > spec intoduced teh ablity to shorten the outage by pre >>> biding the >>> > > > > > >>> > > > > > port and >>> > > > > > > > activating it when >>> > > > > > > > the vm is resumed on the destiation host before we get to >>> pos >>> > > >>> > > live >>> > > > > > >>> > > > > > migrate. >>> > > > > > > > >>> > > > > > > > this reduces the outage time although i cant be fully >>> elimiated >>> > > >>> > > as >>> > > > > > >>> > > > > > some >>> > > > > > > > level of packet loss is >>> > > > > > > > always expected when you live migrate. >>> > > > > > > > >>> > > > > > > > so yes enabliy post copy live migration should help but be >>> aware >>> > > >>> > > that >>> > > > > > >>> > > > > > if a >>> > > > > > > > network partion happens >>> > > > > > > > during a post copy live migration the vm will crash and >>> need to >>> > > >>> > > be >>> > > > > > > > restarted. >>> > > > > > > > it is generally safe to use and will imporve the migration >>> > > >>> > > performace >>> > > > > > >>> > > > > > but >>> > > > > > > > unlike pre copy migration if >>> > > > > > > > the guess resumes on the dest and the mempry page has not >>> been >>> > > >>> > > copied >>> > > > > > >>> > > > > > yet >>> > > > > > > > then it must wait for it to be copied >>> > > > > > > > and retrive it form the souce host. 
if the connection too >>> the >>> > > >>> > > souce >>> > > > > > >>> > > > > > host >>> > > > > > > > is intrupted then the vm cant >>> > > > > > > > do that and the migration will fail and the instance will >>> crash. >>> > > >>> > > if >>> > > > > > >>> > > > > > you >>> > > > > > > > are using precopy migration >>> > > > > > > > if there is a network partaion during the migration the >>> > > >>> > > migration will >>> > > > > > > > fail but the instance will continue >>> > > > > > > > to run on the source host. >>> > > > > > > > >>> > > > > > > > so while i would still recommend using it, i it just good >>> to be >>> > > >>> > > aware >>> > > > > > >>> > > > > > of >>> > > > > > > > that behavior change. >>> > > > > > > > >>> > > > > > > > > Thanks >>> > > > > > > > > Ignazio >>> > > > > > > > > >>> > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney < >>> smooney at redhat.com> ha >>> > > > > > >>> > > > > > scritto: >>> > > > > > > > > >>> > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano >>> wrote: >>> > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm >>> migrate >>> > > > > > >>> > > > > > from one >>> > > > > > > > >>> > > > > > > > node >>> > > > > > > > > > > to another I cannot ping it for several minutes. If >>> in the >>> > > >>> > > vm I >>> > > > > > >>> > > > > > put a >>> > > > > > > > > > > script that ping the gateway continously, the live >>> > > >>> > > migration >>> > > > > > >>> > > > > > works >>> > > > > > > > >>> > > > > > > > fine >>> > > > > > > > > > >>> > > > > > > > > > and >>> > > > > > > > > > > I can ping it. Why this happens ? I read something >>> about >>> > > > > > >>> > > > > > gratuitous >>> > > > > > > > >>> > > > > > > > arp. >>> > > > > > > > > > >>> > > > > > > > > > qemu does not use gratuitous arp but instead uses an >>> older >>> > > > > > >>> > > > > > protocal >>> > > > > > > > >>> > > > > > > > called >>> > > > > > > > > > RARP >>> > > > > > > > > > to do mac address learning. >>> > > > > > > > > > >>> > > > > > > > > > what release of openstack are you using. and are you >>> using >>> > > > > > >>> > > > > > iptables >>> > > > > > > > > > firewall of openvswitch firewall. >>> > > > > > > > > > >>> > > > > > > > > > if you are using openvswtich there is is nothing we >>> can do >>> > > >>> > > until >>> > > > > > >>> > > > > > we >>> > > > > > > > > > finally delegate vif pluging to os-vif. >>> > > > > > > > > > currently libvirt handels interface plugging for >>> kernel ovs >>> > > >>> > > when >>> > > > > > >>> > > > > > using >>> > > > > > > > >>> > > > > > > > the >>> > > > > > > > > > openvswitch firewall driver >>> > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress >>> that >>> > > >>> > > but it >>> > > > > > >>> > > > > > and >>> > > > > > > > >>> > > > > > > > the >>> > > > > > > > > > neutron patch are >>> > > > > > > > > > https://review.opendev.org/#/c/640258 rather out >>> dated. >>> > > >>> > > while >>> > > > > > >>> > > > > > libvirt >>> > > > > > > > >>> > > > > > > > is >>> > > > > > > > > > pluging the vif there will always be >>> > > > > > > > > > a race condition where the RARP packets sent by qemu >>> and >>> > > >>> > > then mac >>> > > > > > > > >>> > > > > > > > learning >>> > > > > > > > > > packets will be lost. 
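For anyone who wants to try the post-copy route discussed above: on the nova side it is opt-in, and libvirt/qemu on both hosts must support post-copy as well. A minimal sketch using the standard nova option (this is not something added by the patches in this thread):

    # /etc/nova/nova.conf on compute nodes -- illustrative
    [libvirt]
    # allow nova to switch a non-converging live migration to post-copy
    # instead of pausing the guest when it is forced to complete
    live_migration_permit_post_copy = True

As noted above, the trade-off is that a network partition during a post-copy migration crashes the guest instead of leaving it running on the source host.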
>>> > > > > > > > > > >>> > > > > > > > > > if you are using the iptables firewall and you have >>> opnestack >>> > > > > > >>> > > > > > rock or >>> > > > > > > > > > later then if you enable post copy live migration >>> > > > > > > > > > it should reduce the downtime. in this conficution we >>> do not >>> > > >>> > > have >>> > > > > > >>> > > > > > the >>> > > > > > > > >>> > > > > > > > race >>> > > > > > > > > > betwen neutron and libvirt so the rarp >>> > > > > > > > > > packets should not be lost. >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > > > Please, help me ? >>> > > > > > > > > > > Any workaround , please ? >>> > > > > > > > > > > >>> > > > > > > > > > > Best Regards >>> > > > > > > > > > > Ignazio >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > >>> > > > > > >>> > > >>> > > >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Fri Mar 12 08:13:29 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Fri, 12 Mar 2021 08:13:29 +0000 Subject: [stein][neutron] gratuitous arp In-Reply-To: References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com>, Message-ID: <95ccfc366d4b497c8af232f38d07559f@binero.com> Hello, If it's the same as us, then yes, the issue occurs on Train and is not completely solved yet. Best regards ________________________________ From: Ignazio Cassano Sent: Friday, March 12, 2021 7:43:22 AM To: Tobias Urdin Cc: openstack-discuss Subject: Re: [stein][neutron] gratuitous arp Hello Tobias, the result is the same as your. I do not know what happens in depth to evaluate if the behavior is the same. I solved on stein with patch suggested by Sean : force_legacy_port_bind workaround. So I am asking if the problem exists also on train. Ignazio Il Gio 11 Mar 2021, 19:27 Tobias Urdin > ha scritto: Hello, Not sure if you are having the same issue as us, but we are following https://bugs.launchpad.net/neutron/+bug/1901707 but are patching it with something similar to https://review.opendev.org/c/openstack/nova/+/741529 to workaround the issue until it's completely solved. Best regards ________________________________ From: Ignazio Cassano > Sent: Wednesday, March 10, 2021 7:57:21 AM To: Sean Mooney Cc: openstack-discuss; Slawek Kaplonski Subject: Re: [stein][neutron] gratuitous arp Hello All, please, are there news about bug 1815989 ? On stein I modified code as suggested in the patches. I am worried when I will upgrade to train: wil this bug persist ? On which openstack version this bug is resolved ? Ignazio Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano > ha scritto: Hello, I tried to update to last stein packages on yum and seems this bug still exists. Before the yum update I patched some files as suggested and and ping to vm worked fine. After yum update the issue returns. Please, let me know If I must patch files by hand or some new parameters in configuration can solve and/or the issue is solved in newer openstack versions. Thanks Ignazio Il Mer 29 Apr 2020, 19:49 Sean Mooney > ha scritto: On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > Many thanks. > Please keep in touch. here are the two patches. 
the first https://review.opendev.org/#/c/724386/ is the actual change to add the new config opition this needs a release note and some tests but it shoudl be functional hence the [WIP] i have not enable the workaround in any job in this patch so the ci run will assert this does not break anything in the default case the second patch is https://review.opendev.org/#/c/724387/ which enables the workaround in the multi node ci jobs and is testing that live migration exctra works when the workaround is enabled. this should work as it is what we expect to happen if you are using a moderne nova with an old neutron. its is marked [DNM] as i dont intend that patch to merge but if the workaround is useful we migth consider enableing it for one of the jobs to get ci coverage but not all of the jobs. i have not had time to deploy a 2 node env today but ill try and test this locally tomorow. > Ignazio > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney > > ha scritto: > > > so bing pragmatic i think the simplest path forward given my other patches > > have not laned > > in almost 2 years is to quickly add a workaround config option to disable > > mulitple port bindign > > which we can backport and then we can try and work on the actual fix after. > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that shoudl > > serve as a workaround > > for thos that hav this issue but its a regression in functionality. > > > > i can create a patch that will do that in an hour or so and submit a > > followup DNM patch to enabel the > > workaound in one of the gate jobs that tests live migration. > > i have a meeting in 10 mins and need to finish the pacht im currently > > updating but ill submit a poc once that is done. > > > > im not sure if i will be able to spend time on the actul fix which i > > proposed last year but ill see what i can do. > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > PS > > > I have testing environment on queens,rocky and stein and I can make test > > > as you need. 
> > > Ignazio > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > ignaziocassano at gmail.com> ha scritto: > > > > > > > Hello Sean, > > > > the following is the configuration on my compute nodes: > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > As far as firewall driver > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > testing > > > > environment and the > > > > same firewall driver. > > > > Live migration on provider network on queens works fine. > > > > It does not work fine on rocky and stein (vm lost connection after it > > > > is > > > > migrated and start to respond only when the vm send a network packet , > > > > for > > > > example when chrony pools the time server). > > > > > > > > Ignazio > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > smooney at redhat.com> > > > > ha scritto: > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > Hello, some updated about this issue. > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > If you read the discussion, someone tells that the garp must be > > > > sent by > > > > > > qemu during live miration. > > > > > > If this is true, this means on rocky/stein the qemu/libvirt are > > > > bugged. > > > > > > > > > > it is not correct. 
> > > > > qemu/libvir thas alsway used RARP which predates GARP to serve as > > > > its mac > > > > > learning frames > > > > > instead > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > but was fixed by > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > can you confirm you are not using the broken 2.6.0 release and are > > > > using > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > libvirt/qemu > > > > > > packages I installed on queens (I updated compute and controllers > > > > node > > > > > > > > > > on > > > > > > queens for obtaining same libvirt/qemu version deployed on rocky > > > > and > > > > > > > > > > stein). > > > > > > > > > > > > On queens live migration on provider network continues to work > > > > fine. > > > > > > On rocky and stein not, so I think the issue is related to > > > > openstack > > > > > > components . > > > > > > > > > > on queens we have only a singel prot binding and nova blindly assumes > > > > > that the port binding details wont > > > > > change when it does a live migration and does not update the xml for > > > > the > > > > > netwrok interfaces. > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > post_livemigration > > > > > in rocky+ neutron optionally uses the multiple port bindings flow to > > > > > prebind the port to the destiatnion > > > > > so it can update the xml if needed and if post copy live migration is > > > > > enable it will asyconsly activate teh dest port > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > if you are using the iptables firewall os-vif will have precreated > > > > the > > > > > ovs port and intermediate linux bridge before the > > > > > migration started which will allow neutron to wire it up (put it on > > > > the > > > > > correct vlan and install security groups) before > > > > > the vm completes the migraton. > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh ovs > > > > port > > > > > but libvirt deletes it and recreats it too. > > > > > as a result there is a race when using openvswitch firewall that can > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > Best Regards > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > smooney at redhat.com> > > > > > > ha scritto: > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > Hello, I have this problem with rocky or newer with > > > > iptables_hybrid > > > > > > > > firewall. > > > > > > > > So, can I solve using post copy live migration ??? 
> > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky the > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > spec intoduced teh ablity to shorten the outage by pre biding the > > > > > > > > > > port and > > > > > > > activating it when > > > > > > > the vm is resumed on the destiation host before we get to pos > > > > live > > > > > > > > > > migrate. > > > > > > > > > > > > > > this reduces the outage time although i cant be fully elimiated > > > > as > > > > > > > > > > some > > > > > > > level of packet loss is > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be aware > > > > that > > > > > > > > > > if a > > > > > > > network partion happens > > > > > > > during a post copy live migration the vm will crash and need to > > > > be > > > > > > > restarted. > > > > > > > it is generally safe to use and will imporve the migration > > > > performace > > > > > > > > > > but > > > > > > > unlike pre copy migration if > > > > > > > the guess resumes on the dest and the mempry page has not been > > > > copied > > > > > > > > > > yet > > > > > > > then it must wait for it to be copied > > > > > > > and retrive it form the souce host. if the connection too the > > > > souce > > > > > > > > > > host > > > > > > > is intrupted then the vm cant > > > > > > > do that and the migration will fail and the instance will crash. > > > > if > > > > > > > > > > you > > > > > > > are using precopy migration > > > > > > > if there is a network partaion during the migration the > > > > migration will > > > > > > > fail but the instance will continue > > > > > > > to run on the source host. > > > > > > > > > > > > > > so while i would still recommend using it, i it just good to be > > > > aware > > > > > > > > > > of > > > > > > > that behavior change. > > > > > > > > > > > > > > > Thanks > > > > > > > > Ignazio > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney > ha > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano wrote: > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm migrate > > > > > > > > > > from one > > > > > > > > > > > > > > node > > > > > > > > > > to another I cannot ping it for several minutes. If in the > > > > vm I > > > > > > > > > > put a > > > > > > > > > > script that ping the gateway continously, the live > > > > migration > > > > > > > > > > works > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > and > > > > > > > > > > I can ping it. Why this happens ? I read something about > > > > > > > > > > gratuitous > > > > > > > > > > > > > > arp. > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an older > > > > > > > > > > protocal > > > > > > > > > > > > > > called > > > > > > > > > RARP > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you using > > > > > > > > > > iptables > > > > > > > > > firewall of openvswitch firewall. > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we can do > > > > until > > > > > > > > > > we > > > > > > > > > finally delegate vif pluging to os-vif. 
> > > > > > > > > currently libvirt handels interface plugging for kernel ovs > > > > when > > > > > > > > > > using > > > > > > > > > > > > > > the > > > > > > > > > openvswitch firewall driver > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress that > > > > but it > > > > > > > > > > and > > > > > > > > > > > > > > the > > > > > > > > > neutron patch are > > > > > > > > > https://review.opendev.org/#/c/640258 rather out dated. > > > > while > > > > > > > > > > libvirt > > > > > > > > > > > > > > is > > > > > > > > > pluging the vif there will always be > > > > > > > > > a race condition where the RARP packets sent by qemu and > > > > then mac > > > > > > > > > > > > > > learning > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have opnestack > > > > > > > > > > rock or > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > it should reduce the downtime. in this conficution we do not > > > > have > > > > > > > > > > the > > > > > > > > > > > > > > race > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Fri Mar 12 21:55:23 2021 From: mthode at mthode.org (Matthew Thode) Date: Fri, 12 Mar 2021 15:55:23 -0600 Subject: [requirements][all] Openstack Requirements is now frozen Message-ID: <20210312215523.wnyast62an3gg2wo@mthode.org> As the title states, requirements is now frozen, any updates to the master branch not already approved from now until requirements branches the stable/wallaby branch will require a FFE email to be submitted to this (openstack-discuss) mailing list. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Fri Mar 12 22:31:43 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 12 Mar 2021 17:31:43 -0500 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: <921d1e19-02ff-d26d-1b00-902da6b07e95@inovex.de> References: <721b4405-b19f-5433-feff-d595442ce6e4@gmail.com> <1924f447-2199-dfa4-4a70-575cda438107@inovex.de> <63f1d37c-f7fd-15b1-5f71-74e26c44ea94@inovex.de> <921d1e19-02ff-d26d-1b00-902da6b07e95@inovex.de> Message-ID: <764a5932-f9b2-a882-7992-e8c3711d8ad8@gmail.com> On 3/11/21 8:21 AM, Christian Rohmann wrote: > Hey Brian, > > On 25/02/2021 19:05, Brian Rosmaita wrote: >>> Please kindly let me know if there is anything required to get this >>> merged. >> >> We have to release wallaby os-brick next week, so highest priority >> right now are os-brick reviews, but we'll get you some feedback on >> your patch as soon as we can. > > There is one +1 from Sofia on the change now. > Let me know if there is anything else missing or needs changing. Commit message needs an update to reflect the current direction of the patch. > Is this something that still could go into Wallaby BTW? Yes indeed! Thanks for your patience. 
> Regards > > Christian > From smooney at redhat.com Fri Mar 12 22:36:51 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 12 Mar 2021 22:36:51 +0000 Subject: Re: [stein][neutron] gratuitous arp In-Reply-To: <95ccfc366d4b497c8af232f38d07559f@binero.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> , <95ccfc366d4b497c8af232f38d07559f@binero.com> Message-ID: <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> On Fri, 2021-03-12 at 08:13 +0000, Tobias Urdin wrote: > Hello, > > If it's the same as us, then yes, the issue occurs on Train and is not completely solved yet. there is a downstream bug tracker for this https://bugzilla.redhat.com/show_bug.cgi?id=1917675 it's fixed by a combination of 3 neutron patches and i think 1 nova one https://review.opendev.org/c/openstack/neutron/+/766277/ https://review.opendev.org/c/openstack/neutron/+/753314/ https://review.opendev.org/c/openstack/neutron/+/640258/ and https://review.opendev.org/c/openstack/nova/+/770745 the first three neutron patches would fix the evacuate case but break live migration. the nova patch means live migration will work too, although to fully fix the related live migration packet loss issues you need https://review.opendev.org/c/openstack/nova/+/747454/4 https://review.opendev.org/c/openstack/nova/+/742180/12 to fix live migration with network backends that don't support multiple port binding, and https://review.opendev.org/c/openstack/nova/+/602432 (the only one not merged yet) for live migration with ovs and hybrid plug=false (e.g. ovs firewall driver, noop or ovn instead of ml2/ovs). multiple port binding was not actually the reason for this: there was a race in neutron itself, between the dhcp agent and l2 agent, that would have happened even without multiple port binding. some of those patches have been backported already and all should eventually make it to train. they could be brought to stein potentially if people are open to backport/review them. > > > Best regards > > ________________________________ > From: Ignazio Cassano > Sent: Friday, March 12, 2021 7:43:22 AM > To: Tobias Urdin > Cc: openstack-discuss > Subject: Re: [stein][neutron] gratuitous arp > > Hello Tobias, the result is the same as your. > I do not know what happens in depth to evaluate if the behavior is the same. > I solved on stein with patch suggested by Sean : force_legacy_port_bind workaround. > So I am asking if the problem exists also on train. > Ignazio > > Il Gio 11 Mar 2021, 19:27 Tobias Urdin > ha scritto: > > Hello, > > > Not sure if you are having the same issue as us, but we are following https://bugs.launchpad.net/neutron/+bug/1901707 but > > are patching it with something similar to https://review.opendev.org/c/openstack/nova/+/741529 to workaround the issue until it's completely solved. > > > Best regards > > ________________________________ > From: Ignazio Cassano > > Sent: Wednesday, March 10, 2021 7:57:21 AM > To: Sean Mooney > Cc: openstack-discuss; Slawek Kaplonski > Subject: Re: [stein][neutron] gratuitous arp > > Hello All, > please, are there news about bug 1815989 ? > On stein I modified code as suggested in the patches. > I am worried when I will upgrade to train: wil this bug persist ? > On which openstack version this bug is resolved ?
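As a quick reference for the hybrid plug distinction Sean makes above: with ml2/ovs it is driven by the agent's firewall driver, roughly:

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini -- illustrative
    [securitygroup]
    # iptables_hybrid -> hybrid plug: tap + intermediate linux bridge, which
    #                    os-vif pre-creates before the migration starts
    # openvswitch     -> native OVS firewall, i.e. hybrid_plug=false, the case
    #                    hit by the RARP race described earlier in the thread
    firewall_driver = iptables_hybrid

(ovn deployments do not run the ovs agent at all and fall into the hybrid_plug=false case Sean mentions.)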
> Ignazio > > > > Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano > ha scritto: > Hello, I tried to update to last stein packages on yum and seems this bug still exists. > Before the yum update I patched some files as suggested and and ping to vm worked fine. > After yum update the issue returns. > Please, let me know If I must patch files by hand or some new parameters in configuration can solve and/or the issue is solved in newer openstack versions. > Thanks > Ignazio > > > Il Mer 29 Apr 2020, 19:49 Sean Mooney > ha scritto: > On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > > Many thanks. > > Please keep in touch. > here are the two patches. > the first https://review.opendev.org/#/c/724386/ is the actual change to add the new config opition > this needs a release note and some tests but it shoudl be functional hence the [WIP] > i have not enable the workaround in any job in this patch so the ci run will assert this does not break > anything in the default case > > the second patch is https://review.opendev.org/#/c/724387/ which enables the workaround in the multi node ci jobs > and is testing that live migration exctra works when the workaround is enabled. > > this should work as it is what we expect to happen if you are using a moderne nova with an old neutron. > its is marked [DNM] as i dont intend that patch to merge but if the workaround is useful we migth consider enableing > it for one of the jobs to get ci coverage but not all of the jobs. > > i have not had time to deploy a 2 node env today but ill try and test this locally tomorow. > > > > > Ignazio > > > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney > > > ha scritto: > > > > > so bing pragmatic i think the simplest path forward given my other patches > > > have not laned > > > in almost 2 years is to quickly add a workaround config option to disable > > > mulitple port bindign > > > which we can backport and then we can try and work on the actual fix after. > > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that shoudl > > > serve as a workaround > > > for thos that hav this issue but its a regression in functionality. > > > > > > i can create a patch that will do that in an hour or so and submit a > > > followup DNM patch to enabel the > > > workaound in one of the gate jobs that tests live migration. > > > i have a meeting in 10 mins and need to finish the pacht im currently > > > updating but ill submit a poc once that is done. > > > > > > im not sure if i will be able to spend time on the actul fix which i > > > proposed last year but ill see what i can do. > > > > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > > PS > > > >  I have testing environment on queens,rocky and stein and I can make test > > > > as you need. 
> > > > Ignazio > > > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > > ignaziocassano at gmail.com> ha scritto: > > > > > > > > > Hello Sean, > > > > > the following is the configuration on my compute nodes: > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > > > > As far as firewall driver > > > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > > > testing > > > > > environment and the > > > > > same firewall driver. > > > > > Live migration on provider network on queens works fine. > > > > > It does not work fine on rocky and stein (vm lost connection after it > > > > > > is > > > > > migrated and start to respond only when the vm send a network packet , > > > > > > for > > > > > example when chrony pools the time server). > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > > > smooney at redhat.com> > > > > > ha scritto: > > > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > > Hello, some updated about this issue. > > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > > > If you read the discussion, someone tells that the garp must be > > > > > > sent by > > > > > > > qemu during live miration. > > > > > > > If this is true, this means on rocky/stein the qemu/libvirt are > > > > > > bugged. > > > > > > > > > > > > it is not correct. 
> > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve as > > > > > > its mac > > > > > > learning frames > > > > > > instead > > > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > > but was fixed by > > > > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > > can you confirm you are not using the broken 2.6.0 release and are > > > > > > using > > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > > > libvirt/qemu > > > > > > > packages I installed on queens (I updated compute and controllers > > > > > > node > > > > > > > > > > > > on > > > > > > > queens for obtaining same libvirt/qemu version deployed on rocky > > > > > > and > > > > > > > > > > > > stein). > > > > > > > > > > > > > > On queens live migration on provider network continues to work > > > > > > fine. > > > > > > > On rocky and stein not, so I think the issue is related to > > > > > > openstack > > > > > > > components . > > > > > > > > > > > > on queens we have only a singel prot binding and nova blindly assumes > > > > > > that the port binding details wont > > > > > > change when it does a live migration and does not update the xml for > > > > > > the > > > > > > netwrok interfaces. > > > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > > post_livemigration > > > > > > in rocky+ neutron optionally uses the multiple port bindings flow to > > > > > > prebind the port to the destiatnion > > > > > > so it can update the xml if needed and if post copy live migration is > > > > > > enable it will asyconsly activate teh dest port > > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > > > if you are using the iptables firewall os-vif will have precreated > > > > > > the > > > > > > ovs port and intermediate linux bridge before the > > > > > > migration started which will allow neutron to wire it up (put it on > > > > > > the > > > > > > correct vlan and install security groups) before > > > > > > the vm completes the migraton. > > > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh ovs > > > > > > port > > > > > > but libvirt deletes it and recreats it too. > > > > > > as a result there is a race when using openvswitch firewall that can > > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > > > smooney at redhat.com> > > > > > > > ha scritto: > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > > Hello, I have this problem with rocky or newer with > > > > > > iptables_hybrid > > > > > > > > > firewall. > > > > > > > > > So, can I solve using post copy live migration ??? 
> > > > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky the > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > > spec intoduced teh ablity to shorten the outage by pre biding the > > > > > > > > > > > > port and > > > > > > > > activating it when > > > > > > > > the vm is resumed on the destiation host before we get to pos > > > > > > live > > > > > > > > > > > > migrate. > > > > > > > > > > > > > > > > this reduces the outage time although i cant be fully elimiated > > > > > > as > > > > > > > > > > > > some > > > > > > > > level of packet loss is > > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be aware > > > > > > that > > > > > > > > > > > > if a > > > > > > > > network partion happens > > > > > > > > during a post copy live migration the vm will crash and need to > > > > > > be > > > > > > > > restarted. > > > > > > > > it is generally safe to use and will imporve the migration > > > > > > performace > > > > > > > > > > > > but > > > > > > > > unlike pre copy migration if > > > > > > > > the guess resumes on the dest and the mempry page has not been > > > > > > copied > > > > > > > > > > > > yet > > > > > > > > then it must wait for it to be copied > > > > > > > > and retrive it form the souce host. if the connection too the > > > > > > souce > > > > > > > > > > > > host > > > > > > > > is intrupted then the vm cant > > > > > > > > do that and the migration will fail and the instance will crash. > > > > > > if > > > > > > > > > > > > you > > > > > > > > are using precopy migration > > > > > > > > if there is a network partaion during the migration the > > > > > > migration will > > > > > > > > fail but the instance will continue > > > > > > > > to run on the source host. > > > > > > > > > > > > > > > > so while i would still recommend using it, i it just good to be > > > > > > aware > > > > > > > > > > > > of > > > > > > > > that behavior change. > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney > ha > > > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano wrote: > > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm migrate > > > > > > > > > > > > from one > > > > > > > > > > > > > > > > node > > > > > > > > > > > to another I cannot ping it for several minutes. If in the > > > > > > vm I > > > > > > > > > > > > put a > > > > > > > > > > > script that ping the gateway continously, the live > > > > > > migration > > > > > > > > > > > > works > > > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > > > and > > > > > > > > > > > I can ping it. Why this happens ? I read something about > > > > > > > > > > > > gratuitous > > > > > > > > > > > > > > > > arp. > > > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an older > > > > > > > > > > > > protocal > > > > > > > > > > > > > > > > called > > > > > > > > > > RARP > > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you using > > > > > > > > > > > > iptables > > > > > > > > > > firewall of openvswitch firewall. 
> > > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we can do > > > > > > until > > > > > > > > > > > > we > > > > > > > > > > finally delegate vif pluging to os-vif. > > > > > > > > > > currently libvirt handels interface plugging for kernel ovs > > > > > > when > > > > > > > > > > > > using > > > > > > > > > > > > > > > > the > > > > > > > > > > openvswitch firewall driver > > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress that > > > > > > but it > > > > > > > > > > > > and > > > > > > > > > > > > > > > > the > > > > > > > > > > neutron patch are > > > > > > > > > > https://review.opendev.org/#/c/640258 rather out dated. > > > > > > while > > > > > > > > > > > > libvirt > > > > > > > > > > > > > > > > is > > > > > > > > > > pluging the vif there will always be > > > > > > > > > > a race condition where the RARP packets sent by qemu and > > > > > > then mac > > > > > > > > > > > > > > > > learning > > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have opnestack > > > > > > > > > > > > rock or > > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > > it should reduce the downtime. in this conficution we do not > > > > > > have > > > > > > > > > > > > the > > > > > > > > > > > > > > > > race > > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From rosmaita.fossdev at gmail.com Sat Mar 13 05:00:37 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Sat, 13 Mar 2021 00:00:37 -0500 Subject: [cinder] wallaby feature freeze Message-ID: <39b4a13a-7862-b484-285a-47f00f27190a@gmail.com> The Wallaby feature freeze is now in effect. The following cinder patches have already been approved, but have not yet merged, and are granted a Feature Freeze Exception: - Add support to store volume format info in Cinder https://review.opendev.org/c/openstack/cinder/+/761152 - Add Consistency Groups support in PowerStore driver https://review.opendev.org/c/openstack/cinder/+/775090 The following patch had been approved, but ran into merge conflicts and had to be resubmitted, and requires re-review and approval. It also has been granted a FFE: - NetApp ONTAP: Implement FlexGroup pool https://review.opendev.org/c/openstack/cinder/+/776713 The third party CI is not reporting on the following patch, which otherwise would have been approved at this point. It has an FFE while this gets worked out: - Revert to snapshot for volumes on DS8000 https://review.opendev.org/c/openstack/cinder/+/773937 Any other feature patch (including driver features) must apply for an FFE by responding to this email before Tuesday, 16 March at 16:00 UTC. Note that application for an FFE is not a guarantee that an FFE will be granted. Grant of an FFE is not a guarantee that the code will be merged for Wallaby. It is expected that patches granted an FFE will be merged by 21:00 UTC on Friday 19 March. Bugfixes do not require an FFE. To be included in Wallaby, they must be merged before the first Release Candidate is cut on Thursday 25 March. 
After that point, only release-critical bugs will be allowed into the Wallaby release. From openinfradn at gmail.com Sat Mar 13 09:22:53 2021 From: openinfradn at gmail.com (open infra) Date: Sat, 13 Mar 2021 14:52:53 +0530 Subject: How to enable GPU in k8s environment Message-ID: Hi, What are the config files to deal with when enabling GPU (both nvidia and itel)? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sat Mar 13 09:28:37 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 13 Mar 2021 10:28:37 +0100 Subject: [all][tc][legal] Possible GPL violation in several places Message-ID: Dear fellow OpenStackers, it has been brought to my attention that having any (Python) imports against a GPL lib (e.g Ansible) *might be* considered linking with all the repercussions of that (copyleft anyone?). Please see the original thread in Launchpad: https://bugs.launchpad.net/kolla-ansible/+bug/1918663 Not only the projects that have "Ansible" in name might be affected, e.g. Ironic also imports Ansible parts. Do note *I am not a lawyer* so I have no idea whether the Python importing is analogous to linking in terms of GPL. I am only forwarding the concerns reported to me in Launchpad. A quick Google search was inconclusive. I have done some analysis in the original Launchpad thread mentioned above. I am copying it here for ease of reference. """ Hmm, the reasoning is interesting. OpenStack Ansible and TripleO would probably also be interested in knowing whether GPL violation is happening or not. I am not in a position to answer this question. I will propagate this matter to the mailing list: http://lists.openstack.org/pipermail/openstack-discuss/ FWIW, TripleO is by Red Hat, just like Ansible, so one would assume they know what they are doing. OTOH, there is always room for a mistake. All in all, end users must use Ansible so they must agree to GPL anyhow, so the license switch would be mostly cosmetic: +/- the OpenStack licensing requirements [1]. That said, Ansible Collections for OpenStack are already licensed under GPL [2]. And a related (and very relevant) project using Ansible - Zuul - includes a note about partial GPL [3]. A quick search [4] reveals a lot of places that could be violating GPL in OpenStack (e.g. in Ironic, base CI jobs) if we followed this linking logic. [1] https://governance.openstack.org/tc/reference/licensing.html [2] https://opendev.org/openstack/ansible-collections-openstack [3] https://opendev.org/zuul/zuul [4] https://codesearch.opendev.org/?q=(from%7Cimport)%5Cs%2B%5Cbansible%5Cb&i=nope&files=&excludeFiles=&repos= """ -yoctozepto From kira034 at 163.com Sat Mar 13 09:57:22 2021 From: kira034 at 163.com (Hongbin Lu) Date: Sat, 13 Mar 2021 17:57:22 +0800 (CST) Subject: [neutron] Bug deputy report for the week of Mar-8-2021 Message-ID: <5a38316f.209b.1782b04dece.Coremail.kira034@163.com> Hi, I was Neutron bug deputy last week. Below is a short summary about reported bugs. 
High * Functional test test_gateway_chassis_rebalance failing due to "failed to bind logical router": https://bugs.launchpad.net/neutron/+bug/1918266 * Neutron doesn't honor system-scope: https://bugs.launchpad.net/neutron/+bug/1918506 * quota: invalid JSON for reservation value when positive: https://bugs.launchpad.net/neutron/+bug/1918565 Medium * Slownesses on neutron API with many RBAC rules: https://bugs.launchpad.net/neutron/+bug/1918145 * [QoS][SR-IOV] Minimum BW dataplane enforcement fails if NIC does not support min_tx_rate: https://bugs.launchpad.net/neutron/+bug/1918464 * [OVN] BW limit QoS rules assigned to SR-IOV ports are created on NBDB: https://bugs.launchpad.net/neutron/+bug/1918702 * Toggling dhcp on and off in a subnet causes new instances to be unreachable: https://bugs.launchpad.net/neutron/+bug/1918914 Low * Driver VLAN do not create the VlanAllocation registers if "network_vlan_ranges" only specify the network: https://bugs.launchpad.net/neutron/+bug/1918274 RFE * [RFE] RFC 2317 support in Neutron with designate provider: https://bugs.launchpad.net/neutron/+bug/1918424 Won't Fix * Can't create m2 interface on the 15.3.2: https://bugs.launchpad.net/neutron/+bug/1918155 Invalid * Designate DNS – create TLD using valid Unicode string: https://bugs.launchpad.net/neutron/+bug/1918653 Incomplete * Race between sriov_config service and openib service: https://bugs.launchpad.net/neutron/+bug/1918255 * failed to name sriov vfs because the names reserved for representors: https://bugs.launchpad.net/neutron/+bug/1918397 Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Sat Mar 13 11:07:20 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Sat, 13 Mar 2021 12:07:20 +0100 Subject: [all][tc][legal] Possible GPL violation in several places In-Reply-To: References: Message-ID: Hi, Thanks for raising this. A note regarding ironic inline. On Sat, Mar 13, 2021 at 10:32 AM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > Dear fellow OpenStackers, > > it has been brought to my attention that having any (Python) imports > against a GPL lib (e.g Ansible) *might be* considered linking with all > the repercussions of that (copyleft anyone?). > > Please see the original thread in Launchpad: > https://bugs.launchpad.net/kolla-ansible/+bug/1918663 > > Not only the projects that have "Ansible" in name might be affected, > e.g. Ironic also imports Ansible parts. > Only in our ansible modules and only the BSD parts, namely https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/basic.py Dmitry > > Do note *I am not a lawyer* so I have no idea whether the Python > importing is analogous to linking in terms of GPL. I am only > forwarding the concerns reported to me in Launchpad. A quick Google > search was inconclusive. > > I have done some analysis in the original Launchpad thread mentioned > above. I am copying it here for ease of reference. > > """ > Hmm, the reasoning is interesting. OpenStack Ansible and TripleO would > probably also be interested in knowing whether GPL violation is > happening or not. I am not in a position to answer this question. I > will propagate this matter to the mailing list: > http://lists.openstack.org/pipermail/openstack-discuss/ > FWIW, TripleO is by Red Hat, just like Ansible, so one would assume > they know what they are doing. OTOH, there is always room for a > mistake. 
> All in all, end users must use Ansible so they must agree to GPL > anyhow, so the license switch would be mostly cosmetic: +/- the > OpenStack licensing requirements [1]. > That said, Ansible Collections for OpenStack are already licensed under > GPL [2]. > And a related (and very relevant) project using Ansible - Zuul - > includes a note about partial GPL [3]. > A quick search [4] reveals a lot of places that could be violating GPL > in OpenStack (e.g. in Ironic, base CI jobs) if we followed this > linking logic. > > [1] https://governance.openstack.org/tc/reference/licensing.html > [2] https://opendev.org/openstack/ansible-collections-openstack > [3] https://opendev.org/zuul/zuul > [4] > https://codesearch.opendev.org/?q=(from%7Cimport)%5Cs%2B%5Cbansible%5Cb&i=nope&files=&excludeFiles=&repos= > """ > > -yoctozepto > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Sat Mar 13 11:52:29 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Sat, 13 Mar 2021 12:52:29 +0100 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: <764a5932-f9b2-a882-7992-e8c3711d8ad8@gmail.com> References: <721b4405-b19f-5433-feff-d595442ce6e4@gmail.com> <1924f447-2199-dfa4-4a70-575cda438107@inovex.de> <63f1d37c-f7fd-15b1-5f71-74e26c44ea94@inovex.de> <921d1e19-02ff-d26d-1b00-902da6b07e95@inovex.de> <764a5932-f9b2-a882-7992-e8c3711d8ad8@gmail.com> Message-ID: <3e265fa8-e402-97f6-0244-201c67416dd0@inovex.de> On 12/03/2021 23:31, Brian Rosmaita wrote: >> There is one +1 from Sofia on the change now. >> Let me know if there is anything else missing or needs changing. > > Commit message needs an update to reflect the current direction of the > patch. Argh, stupid mistake - I fixed it. Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Mar 13 13:02:50 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 13 Mar 2021 13:02:50 +0000 Subject: [all][tc][legal] Possible GPL violation in several places In-Reply-To: References: Message-ID: <20210313130250.5oomkcnf2zs7rkhg@yuggoth.org> On 2021-03-13 10:28:37 +0100 (+0100), Radosław Piliszek wrote: [...] > And a related (and very relevant) project using Ansible - Zuul - > includes a note about partial GPL [3]. A quick search [4] reveals > a lot of places that could be violating GPL in OpenStack (e.g. in > Ironic, base CI jobs) if we followed this linking logic. [...] The reason parts of Zuul are GPL is that they actually include forks of some components of Ansible itself (for example, to be able to thoroughly redirect command outputs to a console stream). Zuul isolates the execution of those parts of its source in order to avoid causing the entire service to be GPL. Are you suggesting that files shipped in Ironic's deliverable repos directly import (in a Python sense) these GPL files? Nothing in the https://opendev.org/zuul/zuul-jobs repository is GPL. I've never seen any credible argument that merely executing a GPL program makes the calling program a derivative work. 
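For anyone who would rather check than speculate, the import surface is easy to inspect; something along these lines (paths are illustrative and assume an ironic source checkout plus an installed ansible):

    # list what the in-tree ansible deploy interface imports from Ansible
    grep -rn "from ansible" ironic/drivers/modules/ansible/
    # locate the imported module and read its license header;
    # module_utils/basic.py carries a BSD header rather than GPL
    python3 -c 'import ansible.module_utils.basic as m; print(m.__file__)'

which is consistent with Dmitry's note above that only the BSD-licensed module_utils parts are imported.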
Also, if the Zuul jobs in project repositories really were GPL, that should still be fine according to our governance: Projects run as part of the OpenStack Infrastructure (in order to produce OpenStack software) may be licensed under any OSI-approved license. This includes tools that are run with or on OpenStack projects only during validation or testing phases of development (e.g., a source code linter). https://governance.openstack.org/tc/reference/licensing.html Anyway, we have a separate mailing list for such topics: http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss Let's please not jump to conclusions when it comes to software licenses. It's not as cut and dried as you might expect. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Sat Mar 13 14:14:13 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 13 Mar 2021 08:14:13 -0600 Subject: [tc][ptl][mistral] Leadership option for Mistral for Xena cycle Message-ID: <1782bf00412.12841034c449175.1405164895866307177@ghanshyammann.com> Hello Mistral Team, As Renat already sent an email about Mistral maintenance and help on PTL role[1], and as per TC discussion with the Mistral team, the DPL model[2] is one of the options we can try. As the Xena cycle election is completed now and Mistral is on the leaderless list[3], it is time to apply the DPL model. For this model, we need three mandatory liaisons 1. Release, 2. TACT SIG, 3. Security and these can be a single person or multiple[4]. Step to Apply: You need to propose the patch in governance with the three liaisons info; here is the example for Oslo project moving to DPL model - https://review.opendev.org/c/openstack/governance/+/757906 [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020137.html [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html [3] https://etherpad.opendev.org/p/xena-leaderless [4] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html#required-roles -gmann From gmann at ghanshyammann.com Sat Mar 13 14:31:44 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 13 Mar 2021 08:31:44 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-5 Update Message-ID: <1782c000efd.c88e04e5449483.1996158145376826099@ghanshyammann.com> Hello Everyone, Please find the week's R-5 updates on 'Migrate RBAC Policy Format from JSON to YAML' wallaby community-wide goals. Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml Updates: ======= * 8 more projects merged the patches last week, thanks to everyone for active review and merging these. * Sahara and Heat are failing on config object initialization, I am debugging those and will fix them soon. * Puppet Openstack left with the last patch which is on hold due to gnoochi upgrade in RDO[1]. * Panko patch is blocked due to failed gate[2], it is all good to merge (has +A) once the gate is green. 
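For teams still finishing the remaining patches, the conversion itself is
mostly mechanical. A minimal sketch of the operator-side step, assuming a
recent oslo.policy that ships the converter tool and using heat purely as an
example namespace (adjust names and paths to your own deployment):

# Convert an existing JSON policy file to YAML; rules that match the
# defaults should come out commented, leaving only real overrides active.
oslopolicy-convert-json-to-yaml --namespace heat \
  --policy-file /etc/heat/policy.json \
  --output-file /etc/heat/policy.yaml

After that, point the service at the new file (or keep the default
[oslo_policy]/policy_file name used by your release) and drop the old JSON
file.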
Progress Summary: =============== Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml * Projects completed: 29 * Projects required to merge the patches: 5 (Openstackm Ansible , Sahara, Heat, Puppet Openstack, Telemetry ) * Projects do not need any work: 16 [1] https://review.opendev.org/c/openstack/puppet-gnocchi/+/768690 [2] https://review.opendev.org/c/openstack/panko/+/768498 -gmann From radoslaw.piliszek at gmail.com Sat Mar 13 17:03:49 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 13 Mar 2021 18:03:49 +0100 Subject: [all][tc][legal] Possible GPL violation in several places In-Reply-To: <20210313130250.5oomkcnf2zs7rkhg@yuggoth.org> References: <20210313130250.5oomkcnf2zs7rkhg@yuggoth.org> Message-ID: Thanks Dmitry and Jeremy for your quick response. Also thanks Jeremy for replying in Launchpad as well. My answers inline. On Sat, Mar 13, 2021 at 12:07 PM Dmitry Tantsur wrote: >> Not only the projects that have "Ansible" in name might be affected, >> e.g. Ironic also imports Ansible parts. > > > Only in our ansible modules and only the BSD parts, namely https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/basic.py Yes, that's something I have missed. The main README suggests everything is GPL in there so I was confused. Perhaps this is what also confused the OP from Launchpad. On Sat, Mar 13, 2021 at 2:07 PM Jeremy Stanley wrote: > > On 2021-03-13 10:28:37 +0100 (+0100), Radosław Piliszek wrote: > [...] > > And a related (and very relevant) project using Ansible - Zuul - > > includes a note about partial GPL [3]. A quick search [4] reveals > > a lot of places that could be violating GPL in OpenStack (e.g. in > > Ironic, base CI jobs) if we followed this linking logic. > [...] > > The reason parts of Zuul are GPL is that they actually include forks > of some components of Ansible itself (for example, to be able to > thoroughly redirect command outputs to a console stream). Zuul > isolates the execution of those parts of its source in order to > avoid causing the entire service to be GPL. That's good to know and good that Zuul does it this way. Zuul was given as *the good* example. (if it was not clear from my message) > Are you suggesting that files shipped in Ironic's deliverable repos > directly import (in a Python sense) these GPL files? Nothing in the > https://opendev.org/zuul/zuul-jobs repository is GPL. I've never > seen any credible argument that merely executing a GPL program makes > the calling program a derivative work. Also, if the Zuul jobs in > project repositories really were GPL, that should still be fine > according to our governance: I'm not suggesting anything. Dmitry gave a good example that parts of Ansible are actually BSD-licensed (even though it's only obvious after inspecting each file) so it has to be analysed file-per-file then. Also, a purist may say that Ansible still "curses" such usage by GPL because, when you import in Python, you are actually executing the __init__s in the context of your software and those are licensed under GPL, especially the root one. > Anyway, we have a separate mailing list for such topics: > > http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss I have missed it. Should we post there? It looks pretty abandoned (perhaps for a good reason ;-) ). > Let's please not jump to conclusions when it comes to software > licenses. It's not as cut and dried as you might expect. 
I know, and thus expect, it to be a complex topic and that's the role of the word "possible" in the very subject. I am seeking advice. Also, for clarity, I am just propagating the original report with my extra findings. I was not aware that there could be any issue whatsoever with the licenses we have. -yoctozepto From ignaziocassano at gmail.com Sat Mar 13 17:20:37 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 13 Mar 2021 18:20:37 +0100 Subject: [Quuens][nova] lost config.drive file Message-ID: Hello, probably because an human error, we lost the file disk.config drive file for some instances under /var/lib/nova/instances/uuid directories. Please, is there something we can do for rebuild it for revert instances without rebuild ? Anycase, does the instance rebuilding regenerate the file? Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Mar 13 19:49:05 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 13 Mar 2021 19:49:05 +0000 Subject: [all][tc][legal] Possible GPL violation in several places In-Reply-To: References: <20210313130250.5oomkcnf2zs7rkhg@yuggoth.org> Message-ID: <20210313194905.fq2sh6u5bt7vb2d4@yuggoth.org> On 2021-03-13 18:03:49 +0100 (+0100), Radosław Piliszek wrote: [...] > The main README suggests everything is GPL in there so I was > confused. Perhaps this is what also confused the OP from > Launchpad. [...] I suspect (though do not know for sure) that this is why the Ansible maintainers have moved those files all into a separate directory tree. I would not be surprised if they have plans to move them into a separate repository in the future so as to provide even clearer separation. Some of the copyright license situation in there was murky a few years back, so I expect they're still working to improve things in that regard. > a purist may say that Ansible still "curses" such usage by GPL > because, when you import in Python, you are actually executing the > __init__s in the context of your software and those are licensed under > GPL, especially the root one. The way this is usually done is via a line like: from ansible.module_utils.basic import AnsibleModule I don't believe that actually loads the GPL-licensed ansible/__init__.py file, but I get a bit lost in the nuances of the several different kinds of Python package namespaces. However (and to reiterate, I'm no lawyer, this is not legal advice) what's generally important to look at in these sorts of situations is intent. Law is not like a computer program, and so strict literal interpretations are quite frequently off-base. It's fairly clear the Ansible authors intend for you to be able to import those scripts from ansible.module_utils in more permissively-licensed programs, so by doing that we're not acting counter to their wishes. > I have missed it. Should we post there? It looks pretty abandoned > (perhaps for a good reason ;-) ). [...] It's infrequently-used because such questions arise infrequently (thankfully). If anyone feels we need to start the process of soliciting an actual legal opinion on these matters though, we should re-raise the topic there initially. But before doing that, check the list archives to make sure we haven't already had this discussion in prior years, and also do a bit of research to see whether the Ansible project has already published documentation about the intended license situation for the files you're concerned about. 
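To make the import pattern above concrete, a custom module in one of our
repos normally only touches the module_utils tree. A minimal sketch
(hypothetical module, not taken from any real repository):

#!/usr/bin/python
# Imports only ansible.module_utils.basic, i.e. the permissively-licensed
# part of the Ansible tree discussed in this thread.
from ansible.module_utils.basic import AnsibleModule


def main():
    # Declare the parameters this hypothetical module accepts.
    module = AnsibleModule(argument_spec=dict(
        name=dict(type='str', required=True),
    ))
    # No real work here; just report back without changing anything.
    module.exit_json(changed=False, msg='hello %s' % module.params['name'])


if __name__ == '__main__':
    main()

Whether that import alone triggers any copyleft obligation is exactly the
open question, so treat the sketch as an illustration rather than a legal
statement.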
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From masayuki.igawa at gmail.com Sun Mar 14 00:21:52 2021 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Sun, 14 Mar 2021 09:21:52 +0900 Subject: [qa][ptg] Xena PTG Message-ID: <07a5dab0-6b3e-490d-b8a9-fcde1f762c22@www.fastmail.com> Hi, If you are planning to attend QA sessions during the Xena PTG, please fill in the doodle[1] with time slots which are good for you before 23rd March so that we can book the best time slots for most of us. And, please add any topic which you want to discuss on the etherpad[2]. [1] https://doodle.com/poll/5ixvdcbm488kc2ic [2] https://etherpad.opendev.org/p/qa-xena-ptg Best Regards, -- Masayuki Igawa From ignaziocassano at gmail.com Sat Mar 13 20:56:13 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 13 Mar 2021 21:56:13 +0100 Subject: [stein][neutron] gratuitous arp In-Reply-To: <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> <95ccfc366d4b497c8af232f38d07559f@binero.com> <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> Message-ID: Many thanks for your explanation, Sean. Ignazio Il Ven 12 Mar 2021, 23:44 Sean Mooney ha scritto: > On Fri, 2021-03-12 at 08:13 +0000, Tobias Urdin wrote: > > Hello, > > > > If it's the same as us, then yes, the issue occurs on Train and is not > completely solved yet. > there is a downstream bug trackker for this > > https://bugzilla.redhat.com/show_bug.cgi?id=1917675 > > its fixed by a combination of 3 enturon patches and i think 1 nova one > > https://review.opendev.org/c/openstack/neutron/+/766277/ > https://review.opendev.org/c/openstack/neutron/+/753314/ > https://review.opendev.org/c/openstack/neutron/+/640258/ > > and > https://review.opendev.org/c/openstack/nova/+/770745 > > the first tree neutron patches would fix the evauate case but break live > migration > the nova patch means live migration will work too although to fully fix > the related > live migration packet loss issues you need > > https://review.opendev.org/c/openstack/nova/+/747454/4 > https://review.opendev.org/c/openstack/nova/+/742180/12 > to fix live migration with network abckend that dont suppor tmultiple port > binding > and > https://review.opendev.org/c/openstack/nova/+/602432 (the only one not > merged yet.) > for live migrateon with ovs and hybridg plug=false (e.g. ovs firewall > driver, noop or ovn instead of ml2/ovs. > > multiple port binding was not actully the reason for this there was a race > in neutorn itslef that would have haapend > even without multiple port binding between the dhcp agent and l2 agent. > > some of those patches have been backported already and all shoudl > eventually make ti to train the could be brought to stine potentially > if peopel are open to backport/review them. > > > > > > > > Best regards > > > > ________________________________ > > From: Ignazio Cassano > > Sent: Friday, March 12, 2021 7:43:22 AM > > To: Tobias Urdin > > Cc: openstack-discuss > > Subject: Re: [stein][neutron] gratuitous arp > > > > Hello Tobias, the result is the same as your. > > I do not know what happens in depth to evaluate if the behavior is the > same. 
> > I solved on stein with patch suggested by Sean : force_legacy_port_bind > workaround. > > So I am asking if the problem exists also on train. > > Ignazio > > > > Il Gio 11 Mar 2021, 19:27 Tobias Urdin tobias.urdin at binero.com>> ha scritto: > > > > Hello, > > > > > > Not sure if you are having the same issue as us, but we are following > https://bugs.launchpad.net/neutron/+bug/1901707 but > > > > are patching it with something similar to > https://review.opendev.org/c/openstack/nova/+/741529 to workaround the > issue until it's completely solved. > > > > > > Best regards > > > > ________________________________ > > From: Ignazio Cassano ignaziocassano at gmail.com>> > > Sent: Wednesday, March 10, 2021 7:57:21 AM > > To: Sean Mooney > > Cc: openstack-discuss; Slawek Kaplonski > > Subject: Re: [stein][neutron] gratuitous arp > > > > Hello All, > > please, are there news about bug 1815989 ? > > On stein I modified code as suggested in the patches. > > I am worried when I will upgrade to train: wil this bug persist ? > > On which openstack version this bug is resolved ? > > Ignazio > > > > > > > > Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > > Hello, I tried to update to last stein packages on yum and seems this > bug still exists. > > Before the yum update I patched some files as suggested and and ping to > vm worked fine. > > After yum update the issue returns. > > Please, let me know If I must patch files by hand or some new parameters > in configuration can solve and/or the issue is solved in newer openstack > versions. > > Thanks > > Ignazio > > > > > > Il Mer 29 Apr 2020, 19:49 Sean Mooney smooney at redhat.com>> ha scritto: > > On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > > > Many thanks. > > > Please keep in touch. > > here are the two patches. > > the first https://review.opendev.org/#/c/724386/ is the actual change > to add the new config opition > > this needs a release note and some tests but it shoudl be functional > hence the [WIP] > > i have not enable the workaround in any job in this patch so the ci run > will assert this does not break > > anything in the default case > > > > the second patch is https://review.opendev.org/#/c/724387/ which > enables the workaround in the multi node ci jobs > > and is testing that live migration exctra works when the workaround is > enabled. > > > > this should work as it is what we expect to happen if you are using a > moderne nova with an old neutron. > > its is marked [DNM] as i dont intend that patch to merge but if the > workaround is useful we migth consider enableing > > it for one of the jobs to get ci coverage but not all of the jobs. > > > > i have not had time to deploy a 2 node env today but ill try and test > this locally tomorow. > > > > > > > > > Ignazio > > > > > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney < > smooney at redhat.com> > > > ha scritto: > > > > > > > so bing pragmatic i think the simplest path forward given my other > patches > > > > have not laned > > > > in almost 2 years is to quickly add a workaround config option to > disable > > > > mulitple port bindign > > > > which we can backport and then we can try and work on the actual fix > after. > > > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that > shoudl > > > > serve as a workaround > > > > for thos that hav this issue but its a regression in functionality. 
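(For anyone who just wants to try the post-copy approach discussed in this
thread, the usual way to allow it in nova is a single nova.conf toggle; a
minimal sketch, to be checked against the options actually available in
your release:

[libvirt]
# Let nova switch a live migration to post-copy mode when the hosts
# support it; this is the usual knob meant by "enabling post-copy".
live_migration_permit_post_copy = True

Keep the caveat from this thread in mind: if the network between source and
destination breaks during a post-copy migration, the instance has to be
restarted.)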
> > > > > > > > i can create a patch that will do that in an hour or so and submit a > > > > followup DNM patch to enabel the > > > > workaound in one of the gate jobs that tests live migration. > > > > i have a meeting in 10 mins and need to finish the pacht im > currently > > > > updating but ill submit a poc once that is done. > > > > > > > > im not sure if i will be able to spend time on the actul fix which i > > > > proposed last year but ill see what i can do. > > > > > > > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > > > PS > > > > > I have testing environment on queens,rocky and stein and I can > make test > > > > > as you need. > > > > > Ignazio > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > > > ignaziocassano at gmail.com> ha > scritto: > > > > > > > > > > > Hello Sean, > > > > > > the following is the configuration on my compute nodes: > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > > > > > > > As far as firewall driver > > > > > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > > > > > testing > > > > > > environment and the > > > > > > same firewall driver. > > > > > > Live migration on provider network on queens works fine. > > > > > > It does not work fine on rocky and stein (vm lost connection > after it > > > > > > > > is > > > > > > migrated and start to respond only when the vm send a network > packet , > > > > > > > > for > > > > > > example when chrony pools the time server). 
> > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > > > > > smooney at redhat.com> > > > > > > ha scritto: > > > > > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > > > Hello, some updated about this issue. > > > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > > > > > If you read the discussion, someone tells that the garp must > be > > > > > > > > sent by > > > > > > > > qemu during live miration. > > > > > > > > If this is true, this means on rocky/stein the qemu/libvirt > are > > > > > > > > bugged. > > > > > > > > > > > > > > it is not correct. > > > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve > as > > > > > > > > its mac > > > > > > > learning frames > > > > > > > instead > > > > > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > > > but was fixed by > > > > > > > > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > > > can you confirm you are not using the broken 2.6.0 release and > are > > > > > > > > using > > > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > > > > > libvirt/qemu > > > > > > > > packages I installed on queens (I updated compute and > controllers > > > > > > > > node > > > > > > > > > > > > > > on > > > > > > > > queens for obtaining same libvirt/qemu version deployed on > rocky > > > > > > > > and > > > > > > > > > > > > > > stein). > > > > > > > > > > > > > > > > On queens live migration on provider network continues to > work > > > > > > > > fine. > > > > > > > > On rocky and stein not, so I think the issue is related to > > > > > > > > openstack > > > > > > > > components . > > > > > > > > > > > > > > on queens we have only a singel prot binding and nova blindly > assumes > > > > > > > that the port binding details wont > > > > > > > change when it does a live migration and does not update the > xml for > > > > > > > > the > > > > > > > netwrok interfaces. > > > > > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > > > post_livemigration > > > > > > > in rocky+ neutron optionally uses the multiple port bindings > flow to > > > > > > > prebind the port to the destiatnion > > > > > > > so it can update the xml if needed and if post copy live > migration is > > > > > > > enable it will asyconsly activate teh dest port > > > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > > > > > if you are using the iptables firewall os-vif will have > precreated > > > > > > > > the > > > > > > > ovs port and intermediate linux bridge before the > > > > > > > migration started which will allow neutron to wire it up (put > it on > > > > > > > > the > > > > > > > correct vlan and install security groups) before > > > > > > > the vm completes the migraton. 
> > > > > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh > ovs > > > > > > > > port > > > > > > > but libvirt deletes it and recreats it too. > > > > > > > as a result there is a race when using openvswitch firewall > that can > > > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > > > > > smooney at redhat.com> > > > > > > > > ha scritto: > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > > > Hello, I have this problem with rocky or newer with > > > > > > > > iptables_hybrid > > > > > > > > > > firewall. > > > > > > > > > > So, can I solve using post copy live migration ??? > > > > > > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky > the > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > > > spec intoduced teh ablity to shorten the outage by pre > biding the > > > > > > > > > > > > > > port and > > > > > > > > > activating it when > > > > > > > > > the vm is resumed on the destiation host before we get to > pos > > > > > > > > live > > > > > > > > > > > > > > migrate. > > > > > > > > > > > > > > > > > > this reduces the outage time although i cant be fully > elimiated > > > > > > > > as > > > > > > > > > > > > > > some > > > > > > > > > level of packet loss is > > > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be > aware > > > > > > > > that > > > > > > > > > > > > > > if a > > > > > > > > > network partion happens > > > > > > > > > during a post copy live migration the vm will crash and > need to > > > > > > > > be > > > > > > > > > restarted. > > > > > > > > > it is generally safe to use and will imporve the migration > > > > > > > > performace > > > > > > > > > > > > > > but > > > > > > > > > unlike pre copy migration if > > > > > > > > > the guess resumes on the dest and the mempry page has not > been > > > > > > > > copied > > > > > > > > > > > > > > yet > > > > > > > > > then it must wait for it to be copied > > > > > > > > > and retrive it form the souce host. if the connection too > the > > > > > > > > souce > > > > > > > > > > > > > > host > > > > > > > > > is intrupted then the vm cant > > > > > > > > > do that and the migration will fail and the instance will > crash. > > > > > > > > if > > > > > > > > > > > > > > you > > > > > > > > > are using precopy migration > > > > > > > > > if there is a network partaion during the migration the > > > > > > > > migration will > > > > > > > > > fail but the instance will continue > > > > > > > > > to run on the source host. > > > > > > > > > > > > > > > > > > so while i would still recommend using it, i it just good > to be > > > > > > > > aware > > > > > > > > > > > > > > of > > > > > > > > > that behavior change. 
> > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney < > smooney at redhat.com> ha > > > > > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano > wrote: > > > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm > migrate > > > > > > > > > > > > > > from one > > > > > > > > > > > > > > > > > > node > > > > > > > > > > > > to another I cannot ping it for several minutes. If > in the > > > > > > > > vm I > > > > > > > > > > > > > > put a > > > > > > > > > > > > script that ping the gateway continously, the live > > > > > > > > migration > > > > > > > > > > > > > > works > > > > > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > > > > > and > > > > > > > > > > > > I can ping it. Why this happens ? I read something > about > > > > > > > > > > > > > > gratuitous > > > > > > > > > > > > > > > > > > arp. > > > > > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an > older > > > > > > > > > > > > > > protocal > > > > > > > > > > > > > > > > > > called > > > > > > > > > > > RARP > > > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you > using > > > > > > > > > > > > > > iptables > > > > > > > > > > > firewall of openvswitch firewall. > > > > > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we > can do > > > > > > > > until > > > > > > > > > > > > > > we > > > > > > > > > > > finally delegate vif pluging to os-vif. > > > > > > > > > > > currently libvirt handels interface plugging for > kernel ovs > > > > > > > > when > > > > > > > > > > > > > > using > > > > > > > > > > > > > > > > > > the > > > > > > > > > > > openvswitch firewall driver > > > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress > that > > > > > > > > but it > > > > > > > > > > > > > > and > > > > > > > > > > > > > > > > > > the > > > > > > > > > > > neutron patch are > > > > > > > > > > > https://review.opendev.org/#/c/640258 rather out > dated. > > > > > > > > while > > > > > > > > > > > > > > libvirt > > > > > > > > > > > > > > > > > > is > > > > > > > > > > > pluging the vif there will always be > > > > > > > > > > > a race condition where the RARP packets sent by qemu > and > > > > > > > > then mac > > > > > > > > > > > > > > > > > > learning > > > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have > opnestack > > > > > > > > > > > > > > rock or > > > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > > > it should reduce the downtime. in this conficution we > do not > > > > > > > > have > > > > > > > > > > > > > > the > > > > > > > > > > > > > > > > > > race > > > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tcr1br24 at gmail.com Sat Mar 13 04:10:14 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Sat, 13 Mar 2021 12:10:14 +0800 Subject: [Consultation] SFC proxy Message-ID: Dear Sir, (We made some edits from our previous mail.) We are testing a simple SFC in OpenStack (Stein) + OpenDaylight (Neon) + Open vSwitch (v2.11.1). It's an all-in-one deployment (OPNFV-9.0.0). We have read the document: https://readthedocs.org/projects/odl-sfc/downloads/pdf/latest/ and https://github.com/opnfv/sfc/blob/master/docs/release/scenarios/os-odl-sfc-noha/scenario.description.rst The driver we use is odl_v2: [image: image.png] [image: image.png] Our SFC topology: [image: enter image description here] All on the same compute node. Build SFC through API: 1. openstack sfc flow classifier create --source-ip-prefix 10.20.0.0/24 --logical-source-port p0 FC1 2. openstack sfc port pair create --description "Firewall SF instance 1" --ingress p1 --egress p1 --service-function-parameters correlation=None PP1 3. openstack sfc port pair group create --port-pair PP1 PPG1 4. openstack sfc port chain create --port-pair-group PPG1 --flow-classifier FC1 --chain-parameters correlation=nsh PC1 we expect that the encapsulation/decapsulation will be performed by the back-end driver, so that the data packet passes through the nsh-unaware VNF like a regular data packet. However, we can see that the NSH packet reaches the VNF interface (screenshot attached). [image: 123.jpg] Flow table: [image: 螢幕擷取畫面 2021-02-06 153311.png] It seems NSH header not decapsulate before entering NSH-unaware VNF. Could you give us some advice? We really appreciate it. Regards, Zack [image: Mailtrack] Sender notified by Mailtrack 03/13/21, 12:07:12 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 3606 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 29563 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 螢幕擷取畫面 2021-02-06 153311.png Type: image/png Size: 181834 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 123.jpg Type: image/jpeg Size: 566882 bytes Desc: not available URL: From hberaud at redhat.com Mon Mar 15 09:09:56 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 15 Mar 2021 10:09:56 +0100 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Hello Kendall, I assigned the topic `wallaby-cycle-highlights` to all highlights not yet merged. Let me know if it's not too late for those and if you want to continue with them. https://review.opendev.org/q/topic:%22wallaby-cycle-highlight%22+(status:open%20OR%20status:merged) Thanks Le ven. 12 mars 2021 à 20:08, Kendall Nelson a écrit : > Hello All! > > I know we are past the deadline, but I wanted to do one final call. If you > have highlights you want included in the release marketing for Wallaby, you > must have patches pushed to the releases repo by Sunday March 14th at 6:00 > UTC. > > If you can't have it pushed by then but want to be included, please > contact me directly. > > Thanks! > > -Kendall (diablo_rojo) > > On Thu, Feb 25, 2021 at 2:59 PM Kendall Nelson > wrote: > >> Hello Everyone! 
>> >> It's time to start thinking about calling out 'cycle-highlights' in your >> deliverables! I have no idea how we are here AGAIN ALREADY, alas, here we >> be. >> >> As PTLs, you probably get many pings towards the end of every release >> cycle by various parties (marketing, management, journalists, etc) asking >> for highlights of what is new and what significant changes are coming in >> the new release. By putting them all in the same place it makes them easy >> to reference because they get compiled into a pretty website like this from >> the last few releases: Stein[1], Train[2]. >> >> We don't need a fully fledged marketing message, just a few highlights >> (3-4 ideally), from each project team. Looking through your release >> notes might be a good place to start. >> >> *The deadline for cycle highlights is the end of the R-5 week [3] (next >> week) on March 12th.* >> >> How To Reminder: >> ------------------------- >> >> Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the >> openstack/releases repo like this: >> >> cycle-highlights: >> - Introduced new service to use unused host to mine bitcoin. >> >> The formatting options for this tag are the same as what you are probably >> used to with Reno release notes. >> >> Also, you can check on the formatting of the output by either running >> locally: >> >> tox -e docs >> >> And then checking the resulting doc/build/html/$RELEASE/highlights.html >> file or the output of the build-openstack-sphinx-docs job under >> html/$RELEASE/highlights.html. >> >> Feel free to add me as a reviewer on your patches. >> >> Can't wait to see you all have accomplished this release! >> >> Thanks :) >> >> -Kendall Nelson (diablo_rojo) >> >> [1] https://releases.openstack.org/stein/highlights.html >> [2] https://releases.openstack.org/train/highlights.html >> [3] htt >> >> https://releases.openstack.org/wallaby/schedule.html >> >> > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Mar 15 11:37:16 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 15 Mar 2021 12:37:16 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib Message-ID: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> Hi, I'm raising this RFE to ask if we can release, and include new version of the neutron-lib in the Wallaby release. Neutron already migrated all our policy rules to use new personas like system-reader or system-admin. Recently Lance found bug [1] with those new personas. Fix for that is now merged in neutron-lib [2]. 
This fix required bumping the oslo_context dependency in lower-constraints,
and that led us to bump many other dependencies' versions. But in fact all
those new versions are now aligned with what we already have in Neutron's
requirements, so neutron-lib was effectively tested with those new versions
of the packages every time it was run with the Neutron master branch.

I already proposed the release patch for neutron-lib [3].
Please let me know if you need anything else regarding this RFE.

[1] https://launchpad.net/bugs/1918506
[2] https://review.opendev.org/c/openstack/neutron-lib/+/780204
[3] https://review.opendev.org/c/openstack/releases/+/780550

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From laurentfdumont at gmail.com Mon Mar 15 13:31:43 2021
From: laurentfdumont at gmail.com (Laurent Dumont)
Date: Mon, 15 Mar 2021 09:31:43 -0400
Subject: OSP13 - Queens - Inconsistent data between in DB between ipamallocation and ipallocation
Message-ID: 

Hey everyone,

We just troubleshooted (and fixed) a weird issue with ipamallocation. It
looked like ipallocations were not properly removed from the
ipamallocation table. This caused port creations to fail with an error
about the IP being already allocated (when there wasn't an actual port
using the IP).

We found this old issue https://bugs.launchpad.net/neutron/+bug/1884532
but not a whole lot of details in there.

Has anyone seen this before?

Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From igene at igene.tw Mon Mar 15 13:56:33 2021
From: igene at igene.tw (Gene Kuo)
Date: Mon, 15 Mar 2021 13:56:33 +0000
Subject: [largescale-sig] Next meeting: March 10, 15utc
In-Reply-To: 
References: 
Message-ID: 

Hi,

I confirmed that I'm able to give a short talk to start off the discussion
for the next video meeting. The topic will be "RabbitMQ Clusters at Large
Scale OpenStack Infrastructure".

Regards,
Gene Kuo

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, March 11, 2021 at 01:06, Belmiro Moreira wrote:

> Hi,
> we had the Large Scale SIG meeting today.
>
> Meeting logs are available at:
> http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-03-10-15.00.log.html
>
> We discussed topics for a new video meeting in 2 weeks.
> Details will be sent later.
>
> regards,
> Belmiro
>
> On Mon, Mar 8, 2021 at 12:43 PM Thierry Carrez wrote:
>
>> Hi everyone,
>>
>> Our next Large Scale SIG meeting will be this Wednesday in
>> #openstack-meeting-3 on IRC, at 15UTC. You can doublecheck how it
>> translates locally at:
>>
>> https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210310T15
>>
>> Belmiro Moreira will chair this meeting. A number of topics have already
>> been added to the agenda, including discussing CentOS Stream, reflecting
>> on last video meeting and pick a topic for the next one.
>>
>> Feel free to add other topics to our agenda at:
>> https://etherpad.openstack.org/p/large-scale-sig-meeting
>>
>> Regards,
>>
>> --
>> Thierry Carrez
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mthode at mthode.org Mon Mar 15 14:15:10 2021 From: mthode at mthode.org (Matthew Thode) Date: Mon, 15 Mar 2021 09:15:10 -0500 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> Message-ID: <20210315141510.fgcbpdd3z2xmncqg@mthode.org> On 21-03-15 12:37:16, Slawek Kaplonski wrote: > Hi, > > I'm raising this RFE to ask if we can release, and include new version of the > neutron-lib in the Wallaby release. > Neutron already migrated all our policy rules to use new personas like > system-reader or system-admin. Recently Lance found bug [1] with those new > personas. > Fix for that is now merged in neutron-lib [2]. > This fix required bump the oslo_context dependency in lower-constraints and > that lead us to bump many other dependencies versions. But in fact all those > new versions are now aligned with what we already have in the Neutron's > requirements so in fact neutron-lib was tested with those new versions of the > packages every time it was run with Neutron master branch. > > I already proposed release patch for neutron-lib [3]. > Please let me know if I You need anything else regrarding this RFE. > > [1] https://launchpad.net/bugs/1918506 > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat It looks like there are a lot of places neutron-lib is still used. My question here is if those projects need to update their requirements and re-release? -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From hberaud at redhat.com Mon Mar 15 15:14:53 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 15 Mar 2021 16:14:53 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20210315141510.fgcbpdd3z2xmncqg@mthode.org> References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> <20210315141510.fgcbpdd3z2xmncqg@mthode.org> Message-ID: If projects that use neutron-lib want this specific version of neutron-lib then I guess that yes they need to update their requirements and then re-release... We already freezed releases bor lib and client-lib few days ago if the update is outside this scope (services or UI) then I think we could accept it, else it will require lot of re-release and I don't think we want to do that, especially now that branching of those is on the rails. Le lun. 15 mars 2021 à 15:22, Matthew Thode a écrit : > On 21-03-15 12:37:16, Slawek Kaplonski wrote: > > Hi, > > > > I'm raising this RFE to ask if we can release, and include new version > of the > > neutron-lib in the Wallaby release. > > Neutron already migrated all our policy rules to use new personas like > > system-reader or system-admin. Recently Lance found bug [1] with those > new > > personas. > > Fix for that is now merged in neutron-lib [2]. > > This fix required bump the oslo_context dependency in lower-constraints > and > > that lead us to bump many other dependencies versions. But in fact all > those > > new versions are now aligned with what we already have in the Neutron's > > requirements so in fact neutron-lib was tested with those new versions > of the > > packages every time it was run with Neutron master branch. 
> > > > I already proposed release patch for neutron-lib [3]. > > Please let me know if I You need anything else regrarding this RFE. > > > > [1] https://launchpad.net/bugs/1918506 > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > It looks like there are a lot of places neutron-lib is still used. My > question here is if those projects need to update their requirements and > re-release? > > -- > Matthew Thode > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From munnaeebd at gmail.com Mon Mar 15 15:33:18 2021 From: munnaeebd at gmail.com (Md. Hejbul Tawhid MUNNA) Date: Mon, 15 Mar 2021 21:33:18 +0600 Subject: How to detach cinder volume Message-ID: Hi, We are using openstack rocky. One of our openstack vm was running with 3 volume , 1 is bootable and another 2 is normal volume . We have deleted 1 normal volume without properly detaching. So volume is deleted but instance is still showing 3 volume is attached. Now we can't snapshot the instance and facing some others issue. Please advise how we can detach the volume(deleted) from the instance Note: We have reset state volume attached status to detached and delete the volume. Regards, Munna -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Mar 15 15:44:41 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 15 Mar 2021 16:44:41 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> <20210315141510.fgcbpdd3z2xmncqg@mthode.org> Message-ID: <20210315154441.swg3ak5h5xuv6hzl@p1.localdomain> Hi, On Mon, Mar 15, 2021 at 04:14:53PM +0100, Herve Beraud wrote: > If projects that use neutron-lib want this specific version of neutron-lib > then I guess that yes they need to update their requirements and then > re-release... We already freezed releases bor lib and client-lib few days > ago if the update is outside this scope (services or UI) then I think we > could accept it, else it will require lot of re-release and I don't think > we want to do that, especially now that branching of those is on the rails. > > Le lun. 15 mars 2021 à 15:22, Matthew Thode a écrit : > > > On 21-03-15 12:37:16, Slawek Kaplonski wrote: > > > Hi, > > > > > > I'm raising this RFE to ask if we can release, and include new version > > of the > > > neutron-lib in the Wallaby release. 
> > > Neutron already migrated all our policy rules to use new personas like > > > system-reader or system-admin. Recently Lance found bug [1] with those > > new > > > personas. > > > Fix for that is now merged in neutron-lib [2]. > > > This fix required bump the oslo_context dependency in lower-constraints > > and > > > that lead us to bump many other dependencies versions. But in fact all > > those > > > new versions are now aligned with what we already have in the Neutron's > > > requirements so in fact neutron-lib was tested with those new versions > > of the > > > packages every time it was run with Neutron master branch. > > > > > > I already proposed release patch for neutron-lib [3]. > > > Please let me know if I You need anything else regrarding this RFE. > > > > > > [1] https://launchpad.net/bugs/1918506 > > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > > > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > It looks like there are a lot of places neutron-lib is still used. My > > question here is if those projects need to update their requirements and > > re-release? Fix included in that new version is relatively small. It is in the part of code which is used already by stadium projects which are using neutron_lib. But all of them also depends on Neutron so if neutron will bump minimum version of the neutron-lib, it will be "automatically" used by those stadium projects as well. And regarding all other lower-constaints changes in that new neutron-lib - as I said in my previous email, all those versions are already set as minimum in neutron so all of that was in fact effectively used, and is tested. If we will not have this fix in neutron-lib in Wallaby, it basically means that we will have broken support for those new personas like "system reader/admin". If that would be problem to make new neutron-lib release now, would it be possible to cut stable/wallaby branch in neutron-lib, backport that fix there and release new bugfix version of neutron-lib just after Wallaby will be released? Will it be possible to bump neutron's required neutron-lib version then? > > > > -- > > Matthew Thode > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mthode at mthode.org Mon Mar 15 15:48:04 2021 From: mthode at mthode.org (Matthew Thode) Date: Mon, 15 Mar 2021 10:48:04 -0500 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20210315154441.swg3ak5h5xuv6hzl@p1.localdomain> References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> <20210315141510.fgcbpdd3z2xmncqg@mthode.org> <20210315154441.swg3ak5h5xuv6hzl@p1.localdomain> Message-ID: <20210315154804.5fsfy6hm47xpielg@mthode.org> On 21-03-15 16:44:41, Slawek Kaplonski wrote: > Hi, > > On Mon, Mar 15, 2021 at 04:14:53PM +0100, Herve Beraud wrote: > > If projects that use neutron-lib want this specific version of neutron-lib > > then I guess that yes they need to update their requirements and then > > re-release... We already freezed releases bor lib and client-lib few days > > ago if the update is outside this scope (services or UI) then I think we > > could accept it, else it will require lot of re-release and I don't think > > we want to do that, especially now that branching of those is on the rails. > > > > Le lun. 15 mars 2021 à 15:22, Matthew Thode a écrit : > > > > > On 21-03-15 12:37:16, Slawek Kaplonski wrote: > > > > Hi, > > > > > > > > I'm raising this RFE to ask if we can release, and include new version > > > of the > > > > neutron-lib in the Wallaby release. > > > > Neutron already migrated all our policy rules to use new personas like > > > > system-reader or system-admin. Recently Lance found bug [1] with those > > > new > > > > personas. > > > > Fix for that is now merged in neutron-lib [2]. > > > > This fix required bump the oslo_context dependency in lower-constraints > > > and > > > > that lead us to bump many other dependencies versions. But in fact all > > > those > > > > new versions are now aligned with what we already have in the Neutron's > > > > requirements so in fact neutron-lib was tested with those new versions > > > of the > > > > packages every time it was run with Neutron master branch. > > > > > > > > I already proposed release patch for neutron-lib [3]. > > > > Please let me know if I You need anything else regrarding this RFE. > > > > > > > > [1] https://launchpad.net/bugs/1918506 > > > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > > > > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > It looks like there are a lot of places neutron-lib is still used. My > > > question here is if those projects need to update their requirements and > > > re-release? > > Fix included in that new version is relatively small. It is in the part of code > which is used already by stadium projects which are using neutron_lib. But all > of them also depends on Neutron so if neutron will bump minimum version of the > neutron-lib, it will be "automatically" used by those stadium projects as well. > And regarding all other lower-constaints changes in that new neutron-lib - as I > said in my previous email, all those versions are already set as minimum in > neutron so all of that was in fact effectively used, and is tested. > > If we will not have this fix in neutron-lib in Wallaby, it basically means that > we will have broken support for those new personas like "system reader/admin". 
> > If that would be problem to make new neutron-lib release now, would it be > possible to cut stable/wallaby branch in neutron-lib, backport that fix there > and release new bugfix version of neutron-lib just after Wallaby will be > released? Will it be possible to bump neutron's required neutron-lib version > then? > > > > > > > -- > > > Matthew Thode > > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat It sounds like other projects will NOT need a release after this is bumped. If that is the case the release has my signoff as requirements PTL. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From hberaud at redhat.com Mon Mar 15 15:57:21 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 15 Mar 2021 16:57:21 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20210315154804.5fsfy6hm47xpielg@mthode.org> References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> <20210315141510.fgcbpdd3z2xmncqg@mthode.org> <20210315154441.swg3ak5h5xuv6hzl@p1.localdomain> <20210315154804.5fsfy6hm47xpielg@mthode.org> Message-ID: Ok thanks. Then I think we can continue Le lun. 15 mars 2021 à 16:50, Matthew Thode a écrit : > On 21-03-15 16:44:41, Slawek Kaplonski wrote: > > Hi, > > > > On Mon, Mar 15, 2021 at 04:14:53PM +0100, Herve Beraud wrote: > > > If projects that use neutron-lib want this specific version of > neutron-lib > > > then I guess that yes they need to update their requirements and then > > > re-release... We already freezed releases bor lib and client-lib few > days > > > ago if the update is outside this scope (services or UI) then I think > we > > > could accept it, else it will require lot of re-release and I don't > think > > > we want to do that, especially now that branching of those is on the > rails. > > > > > > Le lun. 15 mars 2021 à 15:22, Matthew Thode a > écrit : > > > > > > > On 21-03-15 12:37:16, Slawek Kaplonski wrote: > > > > > Hi, > > > > > > > > > > I'm raising this RFE to ask if we can release, and include new > version > > > > of the > > > > > neutron-lib in the Wallaby release. > > > > > Neutron already migrated all our policy rules to use new personas > like > > > > > system-reader or system-admin. Recently Lance found bug [1] with > those > > > > new > > > > > personas. > > > > > Fix for that is now merged in neutron-lib [2]. 
> > > > > This fix required bump the oslo_context dependency in > lower-constraints > > > > and > > > > > that lead us to bump many other dependencies versions. But in fact > all > > > > those > > > > > new versions are now aligned with what we already have in the > Neutron's > > > > > requirements so in fact neutron-lib was tested with those new > versions > > > > of the > > > > > packages every time it was run with Neutron master branch. > > > > > > > > > > I already proposed release patch for neutron-lib [3]. > > > > > Please let me know if I You need anything else regrarding this RFE. > > > > > > > > > > [1] https://launchpad.net/bugs/1918506 > > > > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > > > > > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > > > > > > > > > -- > > > > > Slawek Kaplonski > > > > > Principal Software Engineer > > > > > Red Hat > > > > > > > > It looks like there are a lot of places neutron-lib is still used. > My > > > > question here is if those projects need to update their requirements > and > > > > re-release? > > > > Fix included in that new version is relatively small. It is in the part > of code > > which is used already by stadium projects which are using neutron_lib. > But all > > of them also depends on Neutron so if neutron will bump minimum version > of the > > neutron-lib, it will be "automatically" used by those stadium projects > as well. > > And regarding all other lower-constaints changes in that new neutron-lib > - as I > > said in my previous email, all those versions are already set as minimum > in > > neutron so all of that was in fact effectively used, and is tested. > > > > If we will not have this fix in neutron-lib in Wallaby, it basically > means that > > we will have broken support for those new personas like "system > reader/admin". > > > > If that would be problem to make new neutron-lib release now, would it be > > possible to cut stable/wallaby branch in neutron-lib, backport that fix > there > > and release new bugfix version of neutron-lib just after Wallaby will be > > released? Will it be possible to bump neutron's required neutron-lib > version > > then? > > > > > > > > > > -- > > > > Matthew Thode > > > > > > > > > > > > > -- > > > Hervé Beraud > > > Senior Software Engineer at Red Hat > > > irc: hberaud > > > https://github.com/4383/ > > > https://twitter.com/4383hberaud > > > -----BEGIN PGP SIGNATURE----- > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > v6rDpkeNksZ9fFSyoY2o > > > =ECSj > > > -----END PGP SIGNATURE----- > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > It sounds like other projects will NOT need a release after this is > bumped. If that is the case the release has my signoff as requirements > PTL. 
> > -- > Matthew Thode > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Mar 15 16:11:18 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 15 Mar 2021 12:11:18 -0400 Subject: [devstack][cinder] ceph iscsi driver support Message-ID: Hello devstack cores, Cinder added a ceph iscsi backend driver at Wallaby milestone 2, and CI on the original patch [0] was done via a dependency chain. Ongoing CI for this driver depends on this devstack patch: Add support for ceph_iscsi cinder driver https://review.opendev.org/c/openstack/devstack/+/668668 It's got one +2. I just want to raise awareness so it can get another review so we can get the CI running before any changes are proposed for the driver. (All dependencies for the devstack patch have merged.) thanks! brian [0] https://review.opendev.org/c/openstack/cinder/+/662829 From tcr1br24 at gmail.com Mon Mar 15 07:58:53 2021 From: tcr1br24 at gmail.com (Jhen-Hao Yu) Date: Mon, 15 Mar 2021 15:58:53 +0800 Subject: [ISSUE]instances cannot ping on different compute nodes Message-ID: Dear Sir, We are testing in OpenStack (Stein) + OpenDaylight (Neon) + Open vSwitch (v2.11.1). It's an all-in-one deployment (OPNFV-9.0.0). The instances can ping each other on the same compute node, but cannot ping each other if instances are on different compute nodes. 
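For a cross-node VXLAN problem like this, the usual first checks are whether the two tunnel endpoints can reach each other at the frame sizes the tunnels need, whether the controller has actually programmed tunnel flows on br-int, and whether encapsulated traffic leaves one node and arrives at the other. A minimal, hedged set of checks — the tunnel IPs (10.1.0.5 and 10.1.0.6) are taken from the Cmp001/Cmp002 output below, while the UDP ports are the standard VXLAN/VXLAN-GPE ones rather than anything read from this deployment, so names and values may need adjusting:

# from cmp001 (local_ip 10.1.0.5) towards cmp002 (local_ip 10.1.0.6)
ping -c 3 10.1.0.6
ping -M do -s 8972 -c 3 10.1.0.6    # does a 9000-byte frame get through un-fragmented?
ping -M do -s 9022 -c 3 10.1.0.6    # host_mtu=9000 plus ~50 bytes of VXLAN overhead; if this fails, jumbo instance traffic will be dropped on the tunnel

# has OpenDaylight programmed tunnel flows on br-int? (ODL uses OpenFlow 1.3)
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int | grep -i tun

# while pinging between the two instances, watch for encapsulated packets on both nodes
sudo tcpdump -ni any 'udp port 4789 or udp port 4790'    # 4790 is the usual VXLAN-GPE port

If the tunnel flows are missing on br-int, the problem is usually on the controller side (the chassis or tunnel endpoint not registered as expected); if the flows are there but no encapsulated packets arrive on the remote node, it is more likely an underlay routing, firewall or MTU issue between 10.1.0.5 and 10.1.0.6.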
*Cmp001: * ============================================= Manager "tcp:172.16.10.40:6640" is_connected: true Bridge br-floating Port "ens6" Interface "ens6" Port br-floating Interface br-floating type: internal Port br-floating-int-patch Interface br-floating-int-patch type: patch options: {peer=br-floating-pa} Bridge br-int Controller "tcp:172.16.10.40:6653" is_connected: true fail_mode: secure Port "tun9a4f2733bb3" Interface "tun9a4f2733bb3" type: vxlan options: {exts=gpe, key=flow, local_ip="10.1.0.5", remote_ip=flow} Port "tap59535eb9-ef" Interface "tap59535eb9-ef" Port br-floating-pa Interface br-floating-pa type: patch options: {peer=br-floating-int-patch} Port "tapc58e6fd1-cf" Interface "tapc58e6fd1-cf" Port "tap8bb9f6de-a9" Interface "tap8bb9f6de-a9" Port "tap5f490693-e5" Interface "tap5f490693-e5" Port "tund818d69b326" Interface "tund818d69b326" type: vxlan options: {exts=gpe, key=flow, local_ip="10.1.0.5", remote_ip="10.1.0.2"} Port br-int Interface br-int type: internal Port "tapef79fa77-ab" Interface "tapef79fa77-ab" Port "tapa9e63637-8a" Interface "tapa9e63637-8a" ovs_version: "2.11.1" ============================================= *Cmp002:* ============================================= Manager "tcp:172.16.10.40:6640" is_connected: true Manager "ptcp:6640:127.0.0.1" Bridge br-floating Port br-floating-int-patch Interface br-floating-int-patch type: patch options: {peer=br-floating-pa} Port br-floating Interface br-floating type: internal Port "ens6" Interface "ens6" Bridge br-int Controller "tcp:172.16.10.40:6653" is_connected: true fail_mode: secure Port br-floating-pa Interface br-floating-pa type: patch options: {peer=br-floating-int-patch} Port "tap50868440-95" Interface "tap50868440-95" Port "tapa496a81b-0c" Interface "tapa496a81b-0c" Port "tun8d223879efb" Interface "tun8d223879efb" type: vxlan options: {exts=gpe, key=flow, local_ip="10.1.0.6", remote_ip=flow} Port br-int Interface br-int type: internal Port "tun941edf0fa60" Interface "tun941edf0fa60" type: vxlan options: {exts=gpe, key=flow, local_ip="10.1.0.6", remote_ip="10.1.0.2"} ovs_version: "2.11.1" ====================================================== Could you give us some advice? Thanks. Regards, Zack [image: Mailtrack] Sender notified by Mailtrack 03/15/21, 03:56:22 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshewale at redhat.com Mon Mar 15 14:48:04 2021 From: bshewale at redhat.com (Bhagyashri Shewale) Date: Mon, 15 Mar 2021 20:18:04 +0530 Subject: [tripleo] TripleO CI Summary: Unified Sprint 40 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 40** (Feb 04 thru Feb 24 2021). The following is a summary of completed work during this sprint cycle: - Deployed the new (next gen) promoter code - Have a successful promotion of master, victoria, ussuri and train c8 on newly deployed promoter server. - Completated dependency pipeline work as below: - RHEL and CentOS OpenVswitch pipeline - Upstream and downstream container-tools pipeline - Downstream RHELU.next - Add centos8 stream jobs and pipeline - https://hackmd.io/Y6IGnQGXTqCl8TAX0YAeoA?view - monitoring dependency pipeline under cockpit - Created the design document for tripleo-repos - https://hackmd.io/v2jCX9RwSeuP8EEFDHRa8g?view - https://review.opendev.org/c/openstack/tripleo-specs/+/772442 - [tempest skip list] Add addtest command to tempest-skiplist - Tempest scenario manager Many of the scenario manager methods used to be private apis. 
Since these methods Are supposed to be used in tempest plugins they aren’t expected to be private. Following commits are with respect to this idea except - https://review.opendev.org/c/openstack/tempest/+/774085 - https://review.opendev.org/c/openstack/tempest/+/766472 - https://review.opendev.org/c/openstack/tempest/+/773783 - https://review.opendev.org/c/openstack/tempest/+/773019 Implementation of create_subnet() api varies with manila-tempest plugin For the stable implementation of create_subnet() following parameters have been added: 1. Condition to check empty str_cidr 2. More attributes in case of ipv6 3. Usage of default_subnet_pool - https://review.opendev.org/c/openstack/tempest/+/766472 - Ruck/Rover recorded notes [1]. The planned work for the next sprint and leftover previous sprint work as following: - Migrate upstream master from CentOS-8 to CentOS-8-Stream - improving resource usage/ reduce upstream infra footprint, context - Deploy the promoter server for downstream promotions - elastic-recheck containerization - https://hackmd.io/dmxF-brbS-yg7tkFB_kxXQ - Openstack health for tripleo - https://hackmd.io/HQ5hyGAOSuG44Le2x6YzUw - Tripleo-repos spec and implementation - https://hackmd.io/v2jCX9RwSeuP8EEFDHRa8g?view - https://review.opendev.org/c/openstack/tripleo-specs/+/772442 - Leftover dependency pipeline work: - ansible 2.10/11 upstream pipeline - https://hackmd.io/riLaLFcTTpybbyY51xcm9A?both - Tempest skiplist: - https://review.opendev.org/c/openstack/tripleo-quickstart/+/771593/ - https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/778157 - https://review.opendev.org/c/openstack/tripleo-ci/+/778155 - https://review.opendev.org/c/osf/python-tempestconf/+/775195 The Ruck and Rover for this sprint are Soniya Vyas (soniya29) and Bhagyashri Shewale (bhagyashri). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in hackmd [2]. Thanks, Bhagyashri Shewale [1] https://hackmd.io/Ta6fdwi2Sme4map8WEiaQA [2] https://hackmd.io/ii6M2T4RTUSFeTZqkw3uRQ -------------- next part -------------- An HTML attachment was scrubbed... URL: From foundjem at ieee.org Mon Mar 15 15:36:27 2021 From: foundjem at ieee.org (Armstrong Foundjem) Date: Mon, 15 Mar 2021 11:36:27 -0400 Subject: [barbican][ironic][neutron] [QA][OpenstackSDK][swift] References: <2B2B1E99-1139-4A1F-B3F0-229E240A8752.ref@ieee.org> Message-ID: <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> Hello! Quick reminder that for deliverables following the cycle-with-intermediary model, the release team will use the latest Wallaby release available on release week. The following deliverables have done a Wallaby release, but it was not refreshed in the last two months: ansible-role-lunasa-hsm ironic-inspector ironic-prometheus-exporter ironic-ui ovn-octavia-provider patrole python-openstackclient swift You should consider making a new one very soon, so that we don't use an outdated version for the final release. - Armstrong Foundjem (armstrong) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iurygregory at gmail.com Mon Mar 15 16:31:40 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 15 Mar 2021 17:31:40 +0100 Subject: [barbican][ironic][neutron] [QA][OpenstackSDK][swift] In-Reply-To: <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> References: <2B2B1E99-1139-4A1F-B3F0-229E240A8752.ref@ieee.org> <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> Message-ID: Hi Armstrong, We will push a release for ironic-prometheus-exporter this week. Thanks! Em seg., 15 de mar. de 2021 às 17:28, Armstrong Foundjem escreveu: > Hello! > > *Quick reminder that for deliverables following the > cycle-with-intermediary* > *model, the release team will use the latest Wallaby release available on* > *release week.* > > *The following deliverables have done a Wallaby release, but it was not* > *refreshed in the last two months:* > > ansible-role-lunasa-hsm > ironic-inspector > ironic-prometheus-exporter > ironic-ui > ovn-octavia-provider > patrole > python-openstackclient > swift > > > *You should consider making a new one very soon, so that we don't use an* > *outdated version for the final release.* > > - Armstrong Foundjem (armstrong) > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Mar 15 16:36:10 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 15 Mar 2021 17:36:10 +0100 Subject: [queens][nova] live migration error Message-ID: Hello all, we are facing a problem while migrating some virtual machines on centos 7 queens. If we create a new virtual machine it migrates. Old virtual machines got the following: 37eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Increasing downtime to 50 ms after 0 sec elapsed time 2021-03-15 17:26:44.428 82382 INFO nova.virt.libvirt.driver [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Migration running for 0 secs, memory 100% remaining; (bytes processed=0, remaining=0, total=0) 2021-03-15 17:26:57.922 82382 INFO nova.compute.manager [req-83fd3d62-292e-4990-8f7a-47f404b557cc - - - - -] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] VM Paused (Lifecycle Event) 2021-03-15 17:26:58.617 82382 INFO nova.virt.libvirt.driver [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Migration operation has completed 2021-03-15 17:26:58.618 82382 INFO nova.compute.manager [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] _post_live_migration() is started.. 
2021-03-15 17:26:58.658 82382 ERROR nova.virt.libvirt.driver [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Live Migration failure: operation failed: domain is not running: libvirtError: operation failed: domain is not running 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Post live migration at destination podto2-kvmae failed: InstanceNotFound_Remote: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. InstanceNotFound: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Traceback (most recent call last): 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6505, in _post_live_migration 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] instance, block_migration, dest) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 783, in post_live_migration_at_destination 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] instance=instance, block_migration=block_migration) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, in call 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] retry=self.retry) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, in _send 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] timeout=timeout, retry=retry) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] retry=retry) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 550, in _send 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] raise result 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] InstanceNotFound_Remote: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 
2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Traceback (most recent call last): 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, in _process_incoming 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] res = self.dispatcher.dispatch(message) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return self._do_dispatch(endpoint, method, ctxt, args) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] result = func(ctxt, **new_args) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in wrapped 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] function_name, call_dict, binary) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] self.force_reraise() 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] six.reraise(self.type_, self.value, self.tb) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in wrapped 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return f(self, context, *args, **kw) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/utils.py", 
line 1000, in decorated_function 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return function(self, context, *args, **kwargs) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 203, in decorated_function 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return function(self, context, *args, **kwargs) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6658, in post_live_migration_at_destination 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 'destination host.', instance=instance) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] self.force_reraise() 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] six.reraise(self.type_, self.value, self.tb) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6653, in post_live_migration_at_destination 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] block_device_info) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7844, in post_live_migration_at_destination 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] self._host.get_guest(instance) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 526, in get_guest 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] return libvirt_guest.Guest(self._get_domain(instance)) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager 
[instance: 25fbb55c-5991-49b2-885f-26adebaeb572] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 546, in _get_domain 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] raise exception.InstanceNotFound(instance_id=instance.uuid) 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] InstanceNotFound: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] 2021-03-15 17:27:05.179 82382 WARNING nova.compute.resource_tracker [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] Instance not resizing, skipping migration.: InstanceNotFound_Remote: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 2021-03-15 17:27:13.621 82382 INFO nova.compute.manager [-] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] VM Stopped (Lifecycle Event) 2021-03-15 17:27:13.711 82382 INFO nova.compute.manager [req-0dcd030c-d520-4223-91a6-f2fbc9f2444a - - - - -] [instance: 25fbb55c-5991-49b2-885f-26adebaeb572] During the sync_power process the instance has moved from host podto2-kvmae to host podto2-kvm01 The instance was on podto1-kvm01 and we tried to migrate it to podto2-kvmae. It stopped migrating, went into an error state and stopped responding to ping requests. From the logs it seems it returned to podto2-kvm01, but it is in an error state on podto2-kvmae. After a hard reboot it starts on podto2-kvmae and then we can migrate it wherever we want without errors. Please, any help? Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Mar 15 16:44:44 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 15 Mar 2021 11:44:44 -0500 Subject: [barbican][ironic][neutron] [QA][OpenstackSDK][swift] In-Reply-To: <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> References: <2B2B1E99-1139-4A1F-B3F0-229E240A8752.ref@ieee.org> <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> Message-ID: <17836c68af8.101fedd35538416.121854804594138725@ghanshyammann.com> ---- On Mon, 15 Mar 2021 10:36:27 -0500 Armstrong Foundjem wrote ---- > Hello! > Quick reminder that for deliverables following the cycle-with-intermediary model, the release team will use the latest Wallaby release available on release week. > The following deliverables have done a Wallaby release, but it was not refreshed in the last two months: > ansible-role-lunasa-hsm ironic-inspector ironic-prometheus-exporter ironic-ui ovn-octavia-provider patrole python-openstackclient swift Thanks Armstrong for the reminder, QA team will do a "patrole" release in the next week or so. I am working on fixing the current gate failure. -gmann > > You should consider making a new one very soon, so that we don't use an outdated version for the final release. > - Armstrong Foundjem (armstrong) From kennelson11 at gmail.com Mon Mar 15 16:54:43 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 15 Mar 2021 09:54:43 -0700 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Yes!
We should merge them even with the deadline past so they will be on the site to look back at. I'll review them later today. -Kendall (diablo_rojo) On Mon, Mar 15, 2021, 2:10 AM Herve Beraud wrote: > Hello Kendall, > > I assigned the topic `wallaby-cycle-highlights` to all highlights not yet > merged. Let me know if it's not too late for those and if you want to > continue with them. > > > https://review.opendev.org/q/topic:%22wallaby-cycle-highlight%22+(status:open%20OR%20status:merged) > > Thanks > > Le ven. 12 mars 2021 à 20:08, Kendall Nelson a > écrit : > >> Hello All! >> >> I know we are past the deadline, but I wanted to do one final call. If >> you have highlights you want included in the release marketing for Wallaby, >> you must have patches pushed to the releases repo by Sunday March 14th at >> 6:00 UTC. >> >> If you can't have it pushed by then but want to be included, please >> contact me directly. >> >> Thanks! >> >> -Kendall (diablo_rojo) >> >> On Thu, Feb 25, 2021 at 2:59 PM Kendall Nelson >> wrote: >> >>> Hello Everyone! >>> >>> It's time to start thinking about calling out 'cycle-highlights' in >>> your deliverables! I have no idea how we are here AGAIN ALREADY, alas, here >>> we be. >>> >>> As PTLs, you probably get many pings towards the end of every release >>> cycle by various parties (marketing, management, journalists, etc) asking >>> for highlights of what is new and what significant changes are coming in >>> the new release. By putting them all in the same place it makes them easy >>> to reference because they get compiled into a pretty website like this from >>> the last few releases: Stein[1], Train[2]. >>> >>> We don't need a fully fledged marketing message, just a few highlights >>> (3-4 ideally), from each project team. Looking through your release >>> notes might be a good place to start. >>> >>> *The deadline for cycle highlights is the end of the R-5 week [3] (next >>> week) on March 12th.* >>> >>> How To Reminder: >>> ------------------------- >>> >>> Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the >>> openstack/releases repo like this: >>> >>> cycle-highlights: >>> - Introduced new service to use unused host to mine bitcoin. >>> >>> The formatting options for this tag are the same as what you are >>> probably used to with Reno release notes. >>> >>> Also, you can check on the formatting of the output by either running >>> locally: >>> >>> tox -e docs >>> >>> And then checking the resulting doc/build/html/$RELEASE/highlights.html >>> file or the output of the build-openstack-sphinx-docs job under >>> html/$RELEASE/highlights.html. >>> >>> Feel free to add me as a reviewer on your patches. >>> >>> Can't wait to see you all have accomplished this release! 
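For anyone adding this for the first time: the entries live under a single cycle-highlights key in the deliverable file, can span multiple lines, and take the same RST inline markup as Reno notes. A purely illustrative sketch (these are not real Wallaby highlights):

cycle-highlights:
  - Added support for the new ``system-reader`` and ``system-admin``
    personas across the project's default policies.
  - The (hypothetical) ``foo-exporter`` agent can now run in
    active/active mode behind a load balancer.

Running tox -e docs as noted above is enough to confirm the entries render cleanly before the deadline.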
>>> >>> Thanks :) >>> >>> -Kendall Nelson (diablo_rojo) >>> >>> [1] https://releases.openstack.org/stein/highlights.html >>> [2] https://releases.openstack.org/train/highlights.html >>> [3] htt >>> >>> https://releases.openstack.org/wallaby/schedule.html >>> >>> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Mar 15 17:59:07 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 15 Mar 2021 18:59:07 +0100 Subject: [queens][nova] live migration error In-Reply-To: References: Message-ID: Hello, looking at destination kvm host I got the following in instance log under /var/log/libvirt/qemu: 2021-03-15 11:48:31.996+0000: starting up libvirt version: 4.5.0, package: 36.el7_9.3 (CentOS BuildSystem , 2020-11-16-16:25:20, x86-01.bsys.centos.org), qemu version: 2.12.0qemu-kvm-ev-2.12.0-44.1.el7_8.1, kernel: 3.10.0-1160.15.2.el7.x86_64, hostname: podto2-kvmae LC_ALL=C \ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \ QEMU_AUDIO_DRV=none \ /usr/libexec/qemu-kvm \ -name guest=instance-00002a52,debug-threads=on \ -S \ -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-73-instance-00002a52/master-key.aes \ -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off \ -cpu Broadwell-IBRS,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on \ -m 4096 \ -realtime mlock=off \ -smp 2,sockets=2,cores=1,threads=1 \ -uuid c6ea7ed2-e7ce-4df6-a767-6bb95ae8fdc6 \ -smbios 'type=1,manufacturer=RDO,product=OpenStack Compute,version=17.0.11-1.el7,serial=3dec30fe-a31f-4ea6-971f-6f993589ef04,uuid=c6ea7ed2-e7ce-4df6-a767-6bb95ae8fdc6,family=Virtual Machine' \ -no-user-config \ -nodefaults \ -chardev socket,id=charmonitor,fd=139,server,nowait \ -mon chardev=charmonitor,id=monitor,mode=control \ -rtc base=utc,driftfix=slew \ -global kvm-pit.lost_tick_policy=delay \ -no-hpet \ -no-shutdown \ -boot strict=on \ -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \ -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \ -drive file=/var/lib/nova/instances/c6ea7ed2-e7ce-4df6-a767-6bb95ae8fdc6/disk.config,format=raw,if=none,id=drive-ide0-0-0,readonly=on,cache=none \ -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,write-cache=on \ -drive file=/var/lib/nova/mnt/7eb4b0178ee3ec9ad7cbbc20c62b1912/volume-d5c812c5-2c27-4e82-a38d-83fc79ab848e,format=raw,if=none,id=drive-virtio-disk0,serial=d5c812c5-2c27-4e82-a38d-83fc79ab848e,cache=none,aio=native \ -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on \ -netdev tap,fd=141,id=hostnet0,vhost=on,vhostfd=142 \ -device virtio-net-pci,host_mtu=9000,netdev=hostnet0,id=net0,mac=fa:16:3e:3f:e5:32,bus=pci.0,addr=0x3 \ -add-fd set=3,fd=144 \ -chardev pty,id=charserial0,logfile=/dev/fdset/3,logappend=on \ -device isa-serial,chardev=charserial0,id=serial0 \ -chardev socket,id=charchannel0,fd=143,server,nowait \ -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \ -device usb-tablet,id=input0,bus=usb.0,port=1 \ -vnc 0.0.0.0:55 \ -k en-us \ -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \ -incoming defer \ -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \ -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ -msg timestamp=on 2021-03-15 11:48:31.996+0000: Domain id=73 is tainted: high-privileges 2021-03-15T11:48:32.163025Z qemu-kvm: -chardev pty,id=charserial0,logfile=/dev/fdset/3,logappend=on: char device redirected to /dev/pts/57 (label charserial0) 2021-03-15T11:48:32.167206Z qemu-kvm: -drive file=/var/lib/nova/mnt/7eb4b0178ee3ec9ad7cbbc20c62b1912/volume-d5c812c5-2c27-4e82-a38d-83fc79ab848e,format=raw,if=none,id=drive-virtio-disk0,serial=d5c812c5-2c27-4e82-a38d-83fc79ab848e,cache=none,aio=native: 'serial' is deprecated, please use the corresponding option of '-device' instead 2021-03-15T11:48:37.779611Z qemu-kvm: Failed to load virtio_pci/modern_queue_state:desc 2021-03-15T11:48:37.780020Z qemu-kvm: Failed to load virtio_pci/modern_state:vqs 2021-03-15T11:48:37.780042Z qemu-kvm: Failed to load virtio/extra_state:extra_state 2021-03-15T11:48:37.780062Z qemu-kvm: Failed to load virtio-balloon:virtio 2021-03-15T11:48:37.780082Z qemu-kvm: error while loading state for instance 0x0 of device '0000:00:06.0/virtio-balloon' 2021-03-15T11:48:37.781465Z qemu-kvm: load of migration failed: Input/output error 2021-03-15 11:48:38.231+0000: shutting down, reason=crashed "instance-00002a52.log" 102L, 7122C But hard resetting the vm it starts Ignazio Il giorno lun 15 mar 2021 alle ore 17:36 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello all, > we are facing a problem while migrating some virtual machines on centos 7 > queens. > If we create a new virtual machine it migrates. 
> Old virtual machines got the following: > 37eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Increasing downtime to 50 ms after 0 > sec elapsed time > 2021-03-15 17:26:44.428 82382 INFO nova.virt.libvirt.driver > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Migration running for 0 secs, memory > 100% remaining; (bytes processed=0, remaining=0, total=0) > 2021-03-15 17:26:57.922 82382 INFO nova.compute.manager > [req-83fd3d62-292e-4990-8f7a-47f404b557cc - - - - -] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] VM Paused (Lifecycle Event) > 2021-03-15 17:26:58.617 82382 INFO nova.virt.libvirt.driver > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Migration operation has completed > 2021-03-15 17:26:58.618 82382 INFO nova.compute.manager > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] _post_live_migration() is started.. > 2021-03-15 17:26:58.658 82382 ERROR nova.virt.libvirt.driver > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Live Migration failure: operation > failed: domain is not running: libvirtError: operation failed: domain is > not running > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Post live migration at destination > podto2-kvmae failed: InstanceNotFound_Remote: Instance > 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. > InstanceNotFound: Instance 25fbb55c-5991-49b2-885f-26adebaeb572 could not > be found. 
> 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Traceback (most recent call last): > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6505, in > _post_live_migration > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] instance, block_migration, dest) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 783, in > post_live_migration_at_destination > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] instance=instance, > block_migration=block_migration) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, > in call > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] retry=self.retry) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, > in _send > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] timeout=timeout, retry=retry) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 559, in send > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] retry=retry) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 550, in _send > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] raise result > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] InstanceNotFound_Remote: Instance > 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. 
> 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Traceback (most recent call last): > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, > in _process_incoming > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] res = > self.dispatcher.dispatch(message) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line > 220, in dispatch > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] return > self._do_dispatch(endpoint, method, ctxt, args) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line > 190, in _do_dispatch > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] result = func(ctxt, **new_args) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in > wrapped > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] function_name, call_dict, binary) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] self.force_reraise() > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] six.reraise(self.type_, > self.value, self.tb) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in > wrapped > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] return f(self, context, *args, > **kw) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR 
nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 1000, in > decorated_function > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] return function(self, context, > *args, **kwargs) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 203, in > decorated_function > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] return function(self, context, > *args, **kwargs) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6658, in > post_live_migration_at_destination > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] 'destination host.', > instance=instance) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] self.force_reraise() > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] six.reraise(self.type_, > self.value, self.tb) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6653, in > post_live_migration_at_destination > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] block_device_info) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7844, > in post_live_migration_at_destination > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] self._host.get_guest(instance) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 526, in > get_guest > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 
25fbb55c-5991-49b2-885f-26adebaeb572] return > libvirt_guest.Guest(self._get_domain(instance)) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] File > "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 546, in > _get_domain > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] raise > exception.InstanceNotFound(instance_id=instance.uuid) > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] InstanceNotFound: Instance > 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:01.259 82382 ERROR nova.compute.manager [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] > 2021-03-15 17:27:05.179 82382 WARNING nova.compute.resource_tracker > [req-2189a13d-097a-4eb0-9f9b-60767b9b0c68 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] Instance not resizing, skipping > migration.: InstanceNotFound_Remote: Instance > 25fbb55c-5991-49b2-885f-26adebaeb572 could not be found. > 2021-03-15 17:27:13.621 82382 INFO nova.compute.manager [-] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] VM Stopped (Lifecycle Event) > 2021-03-15 17:27:13.711 82382 INFO nova.compute.manager > [req-0dcd030c-d520-4223-91a6-f2fbc9f2444a - - - - -] [instance: > 25fbb55c-5991-49b2-885f-26adebaeb572] During the sync_power process the > instance has moved from host podto2-kvmae to host podto2-kvm01 > > > The istances was on podto1-kvm01 and we try to migrate it on podto2-kvmae. > It stops to migrate and goes in error state and stop to responding to ping > requests. > From logs is seems it returned on podto2-kvm01 but it is in error state on > podto2-kvmae. > After hard rebooting it starts on podto2-kvmae and then we can migrate it > where we want without errors. > Please, any help ? > Thanks > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajitharobert01 at gmail.com Mon Mar 15 17:32:02 2021 From: ajitharobert01 at gmail.com (Ajitha Robert) Date: Mon, 15 Mar 2021 23:02:02 +0530 Subject: [cinder] wallaby feature freeze Message-ID: Hi team, This patch https://review.opendev.org/c/openstack/cinder/+/778886 requires a feature freeze exception. I am working on the the minor revisions and the CI will be running tomorrow. Thank you. -- *Regards,Ajitha R* -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajesh.r at zadarastorage.com Mon Mar 15 17:43:11 2021 From: rajesh.r at zadarastorage.com (Rajesh Ratnakaram) Date: Mon, 15 Mar 2021 23:13:11 +0530 Subject: [cinder] wallaby feature freeze Message-ID: <604f9caf.1c69fb81.760ef.2f02@mx.google.com> Hi, I have added missing features to Zadara cinder driver and posted: https://review.opendev.org/c/openstack/cinder/+/774463 The corresponding bp can be found at: https://blueprints.launchpad.net/cinder/+spec/zadara-wallaby-features I wanted to push the above changes to Wallaby release, hence requesting to add FFE for the above request. Thanks and Regards, Rajesh. 
Sent from Mail for Windows 10 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Mar 15 20:17:55 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 15 Mar 2021 21:17:55 +0100 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Thanks. Only one patch is still open but I saw that you commented on it. Le lun. 15 mars 2021 à 17:55, Kendall Nelson a écrit : > Yes! We should merge them even with the deadline past so they will be on > the site to look back at. > > I'll review them later today. > > -Kendall (diablo_rojo) > > On Mon, Mar 15, 2021, 2:10 AM Herve Beraud wrote: > >> Hello Kendall, >> >> I assigned the topic `wallaby-cycle-highlights` to all highlights not yet >> merged. Let me know if it's not too late for those and if you want to >> continue with them. >> >> >> https://review.opendev.org/q/topic:%22wallaby-cycle-highlight%22+(status:open%20OR%20status:merged) >> >> Thanks >> >> Le ven. 12 mars 2021 à 20:08, Kendall Nelson a >> écrit : >> >>> Hello All! >>> >>> I know we are past the deadline, but I wanted to do one final call. If >>> you have highlights you want included in the release marketing for Wallaby, >>> you must have patches pushed to the releases repo by Sunday March 14th at >>> 6:00 UTC. >>> >>> If you can't have it pushed by then but want to be included, please >>> contact me directly. >>> >>> Thanks! >>> >>> -Kendall (diablo_rojo) >>> >>> On Thu, Feb 25, 2021 at 2:59 PM Kendall Nelson >>> wrote: >>> >>>> Hello Everyone! >>>> >>>> It's time to start thinking about calling out 'cycle-highlights' in >>>> your deliverables! I have no idea how we are here AGAIN ALREADY, alas, here >>>> we be. >>>> >>>> As PTLs, you probably get many pings towards the end of every release >>>> cycle by various parties (marketing, management, journalists, etc) asking >>>> for highlights of what is new and what significant changes are coming in >>>> the new release. By putting them all in the same place it makes them easy >>>> to reference because they get compiled into a pretty website like this from >>>> the last few releases: Stein[1], Train[2]. >>>> >>>> We don't need a fully fledged marketing message, just a few highlights >>>> (3-4 ideally), from each project team. Looking through your release >>>> notes might be a good place to start. >>>> >>>> *The deadline for cycle highlights is the end of the R-5 week [3] (next >>>> week) on March 12th.* >>>> >>>> How To Reminder: >>>> ------------------------- >>>> >>>> Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the >>>> openstack/releases repo like this: >>>> >>>> cycle-highlights: >>>> - Introduced new service to use unused host to mine bitcoin. >>>> >>>> The formatting options for this tag are the same as what you are >>>> probably used to with Reno release notes. >>>> >>>> Also, you can check on the formatting of the output by either running >>>> locally: >>>> >>>> tox -e docs >>>> >>>> And then checking the resulting doc/build/html/$RELEASE/highlights.html >>>> file or the output of the build-openstack-sphinx-docs job under >>>> html/$RELEASE/highlights.html. >>>> >>>> Feel free to add me as a reviewer on your patches. >>>> >>>> Can't wait to see you all have accomplished this release! 
>>>> >>>> Thanks :) >>>> >>>> -Kendall Nelson (diablo_rojo) >>>> >>>> [1] https://releases.openstack.org/stein/highlights.html >>>> [2] https://releases.openstack.org/train/highlights.html >>>> [3] htt >>>> >>>> https://releases.openstack.org/wallaby/schedule.html >>>> >>>> >>> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Mar 15 20:41:14 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 15 Mar 2021 13:41:14 -0700 Subject: [PTLs][release] Wallaby Cycle Highlights In-Reply-To: References: Message-ID: Even if more trickle in as we get closer to the release, I am fine with landing them for later reference, they just might not make it into the press release the marketing folks put together. -Kendall On Mon, Mar 15, 2021 at 1:18 PM Herve Beraud wrote: > Thanks. Only one patch is still open but I saw that you commented on it. > > Le lun. 15 mars 2021 à 17:55, Kendall Nelson a > écrit : > >> Yes! We should merge them even with the deadline past so they will be on >> the site to look back at. >> >> I'll review them later today. >> >> -Kendall (diablo_rojo) >> >> On Mon, Mar 15, 2021, 2:10 AM Herve Beraud wrote: >> >>> Hello Kendall, >>> >>> I assigned the topic `wallaby-cycle-highlights` to all highlights not >>> yet merged. Let me know if it's not too late for those and if you want to >>> continue with them. >>> >>> >>> https://review.opendev.org/q/topic:%22wallaby-cycle-highlight%22+(status:open%20OR%20status:merged) >>> >>> Thanks >>> >>> Le ven. 12 mars 2021 à 20:08, Kendall Nelson a >>> écrit : >>> >>>> Hello All! 
>>>> >>>> I know we are past the deadline, but I wanted to do one final call. If >>>> you have highlights you want included in the release marketing for Wallaby, >>>> you must have patches pushed to the releases repo by Sunday March 14th at >>>> 6:00 UTC. >>>> >>>> If you can't have it pushed by then but want to be included, please >>>> contact me directly. >>>> >>>> Thanks! >>>> >>>> -Kendall (diablo_rojo) >>>> >>>> On Thu, Feb 25, 2021 at 2:59 PM Kendall Nelson >>>> wrote: >>>> >>>>> Hello Everyone! >>>>> >>>>> It's time to start thinking about calling out 'cycle-highlights' in >>>>> your deliverables! I have no idea how we are here AGAIN ALREADY, alas, here >>>>> we be. >>>>> >>>>> As PTLs, you probably get many pings towards the end of every release >>>>> cycle by various parties (marketing, management, journalists, etc) asking >>>>> for highlights of what is new and what significant changes are coming in >>>>> the new release. By putting them all in the same place it makes them easy >>>>> to reference because they get compiled into a pretty website like this from >>>>> the last few releases: Stein[1], Train[2]. >>>>> >>>>> We don't need a fully fledged marketing message, just a few highlights >>>>> (3-4 ideally), from each project team. Looking through your release >>>>> notes might be a good place to start. >>>>> >>>>> *The deadline for cycle highlights is the end of the R-5 week [3] >>>>> (next week) on March 12th.* >>>>> >>>>> How To Reminder: >>>>> ------------------------- >>>>> >>>>> Simply add them to the deliverables/$RELEASE/$PROJECT.yaml in the >>>>> openstack/releases repo like this: >>>>> >>>>> cycle-highlights: >>>>> - Introduced new service to use unused host to mine bitcoin. >>>>> >>>>> The formatting options for this tag are the same as what you are >>>>> probably used to with Reno release notes. >>>>> >>>>> Also, you can check on the formatting of the output by either running >>>>> locally: >>>>> >>>>> tox -e docs >>>>> >>>>> And then checking the resulting doc/build/html/$RELEASE/highlights.html >>>>> file or the output of the build-openstack-sphinx-docs job under >>>>> html/$RELEASE/highlights.html. >>>>> >>>>> Feel free to add me as a reviewer on your patches. >>>>> >>>>> Can't wait to see you all have accomplished this release! 
>>>>> >>>>> Thanks :) >>>>> >>>>> -Kendall Nelson (diablo_rojo) >>>>> >>>>> [1] https://releases.openstack.org/stein/highlights.html >>>>> [2] https://releases.openstack.org/train/highlights.html >>>>> [3] htt >>>>> >>>>> https://releases.openstack.org/wallaby/schedule.html >>>>> >>>>> >>>> >>> >>> -- >>> Hervé Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> https://twitter.com/4383hberaud >>> -----BEGIN PGP SIGNATURE----- >>> >>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>> v6rDpkeNksZ9fFSyoY2o >>> =ECSj >>> -----END PGP SIGNATURE----- >>> >>> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Mar 15 20:50:33 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 15 Mar 2021 13:50:33 -0700 Subject: [community/MLs] Measuring ML Success In-Reply-To: References: Message-ID: I feel like it should redirect? But if the .io site isn't what corresponds to the open source repo that exists in opendev, we need to make that very clear everywhere possible. -Kendall (diablo_rojo) On Tue, Mar 9, 2021 at 1:54 AM Mark Goddard wrote: > On Tue, 9 Mar 2021 at 01:17, Michael Johnson wrote: > > > > Oh, nice. I missed the memo on the URL change. > If stackalytics.com is less well maintained, should it be abandoned? > Or redirect to stackalytics.io? > Mark > > > > Thanks, > > Michael > > > > On Mon, Mar 8, 2021 at 5:08 PM Andrii Ostapenko > wrote: > > > > > > Michael, > > > > > > https://www.stackalytics.io is updated daily and maintained. > > > > > > On Mon, Mar 8, 2021 at 6:55 PM Michael Johnson > wrote: > > > > > > > > Just an FYI, stackalytics hasn't updated since January, so it's not > > > > going to be a good, current, source of information. 
> > > > > > > > Michael > > > > > > > > On Mon, Mar 8, 2021 at 12:00 PM Andrii Ostapenko < > anost1986 at gmail.com> wrote: > > > > > > > > > > Hi Jimmy, > > > > > > > > > > Stackalytics [0] currently tracks emails identifying and grouping > them > > > > > by author, company, module (with some success). > > > > > To answer your challenge we'll need to add a grouping by thread, > but > > > > > still need some criteria to mark thread as possibly unanswered. > E.g. > > > > > threads having a single message or only messages from a single > author, > > > > > or if the last message in a thread contains a question mark. > > > > > > > > > > With no additional information marking thread as closed explicitly, > > > > > all this will still be a guessing, producing candidates for > unanswered > > > > > threads. > > > > > > > > > > Thank you for bringing this up! > > > > > > > > > > [0] https://www.stackalytics.io/?metric=emails > > > > > > > > > > > > > > > On Mon, Mar 8, 2021 at 12:48 PM Jimmy McArthur < > jimmy at openstack.org> wrote: > > > > > > > > > > > > Hi All - > > > > > > > > > > > > Over the last six months or so, we've had feedback from people > that feel their questions die on the ML or that are missing > ask.openstack.org. I don't think we should open up the ask.openstack.org > can of worms, by any means. However, I wanted to find out if there was any > software out there we could use to track metrics on which questions go > unanswered on the ML. Everything I've found is very focused on email > marketing, which is not what we're after. Would love to try to get some > numbers on individuals that are trying to reach out to the ML, but just > aren't getting through to anyone. > > > > > > > > > > > > Assuming we get that, I feel like it would be an easy next step > to do a monthly or bi-monthly check to reach out to these potential new > contributors. I realize our community is busy and people are, by and > large, volunteering their time to answer these questions. But as hard as > that is, it's also tough to pose new questions to a community you're > unfamiliar with and then hear crickets. > > > > > > > > > > > > Open to other ideas/thoughts :) > > > > > > > > > > > > Cheers! > > > > > > Jimmy > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Mar 15 20:52:37 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 15 Mar 2021 21:52:37 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: References: <20210315113716.qn4thhiqheuw2u7o@p1.localdomain> <20210315154804.5fsfy6hm47xpielg@mthode.org> Message-ID: <4248079.RTiLppNkMA@p1> Hi, Dnia poniedziałek, 15 marca 2021 16:57:21 CET Herve Beraud pisze: > Ok thanks. Then I think we can continue > > Le lun. 15 mars 2021 à 16:50, Matthew Thode a écrit : > > On 21-03-15 16:44:41, Slawek Kaplonski wrote: > > > Hi, > > > > > > On Mon, Mar 15, 2021 at 04:14:53PM +0100, Herve Beraud wrote: > > > > If projects that use neutron-lib want this specific version of > > > > neutron-lib > > > > > > then I guess that yes they need to update their requirements and then > > > > re-release... We already freezed releases bor lib and client-lib few > > > > days > > > > > > ago if the update is outside this scope (services or UI) then I think > > > > we > > > > > > could accept it, else it will require lot of re-release and I don't > > > > think > > > > > > we want to do that, especially now that branching of those is on the > > > > rails. 
> > > > > > Le lun. 15 mars 2021 à 15:22, Matthew Thode a > > > > écrit : > > > > > On 21-03-15 12:37:16, Slawek Kaplonski wrote: > > > > > > Hi, > > > > > > > > > > > > I'm raising this RFE to ask if we can release, and include new > > > > version > > > > > > > of the > > > > > > > > > > > neutron-lib in the Wallaby release. > > > > > > Neutron already migrated all our policy rules to use new personas > > > > like > > > > > > > > system-reader or system-admin. Recently Lance found bug [1] with > > > > those > > > > > > > new > > > > > > > > > > > personas. > > > > > > Fix for that is now merged in neutron-lib [2]. > > > > > > This fix required bump the oslo_context dependency in > > > > lower-constraints > > > > > > > and > > > > > > > > > > > that lead us to bump many other dependencies versions. But in fact > > > > all > > > > > > > those > > > > > > > > > > > new versions are now aligned with what we already have in the > > > > Neutron's > > > > > > > > requirements so in fact neutron-lib was tested with those new > > > > versions > > > > > > > of the > > > > > > > > > > > packages every time it was run with Neutron master branch. > > > > > > > > > > > > I already proposed release patch for neutron-lib [3]. > > > > > > Please let me know if I You need anything else regrarding this > > > > > > RFE. > > > > > > > > > > > > [1] https://launchpad.net/bugs/1918506 > > > > > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/780204 > > > > > > [3] https://review.opendev.org/c/openstack/releases/+/780550 > > > > > > > > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > > > > > > > > It looks like there are a lot of places neutron-lib is still used. > > > > My > > > > > > > question here is if those projects need to update their requirements > > > > and > > > > > > > re-release? > > > > > > Fix included in that new version is relatively small. It is in the part > > > > of code > > > > > which is used already by stadium projects which are using neutron_lib. > > > > But all > > > > > of them also depends on Neutron so if neutron will bump minimum version > > > > of the > > > > > neutron-lib, it will be "automatically" used by those stadium projects > > > > as well. > > > > > And regarding all other lower-constaints changes in that new neutron-lib > > > > - as I > > > > > said in my previous email, all those versions are already set as minimum > > > > in > > > > > neutron so all of that was in fact effectively used, and is tested. > > > > > > If we will not have this fix in neutron-lib in Wallaby, it basically > > > > means that > > > > > we will have broken support for those new personas like "system > > > > reader/admin". > > > > > If that would be problem to make new neutron-lib release now, would it > > > be > > > possible to cut stable/wallaby branch in neutron-lib, backport that fix > > > > there > > > > > and release new bugfix version of neutron-lib just after Wallaby will be > > > released? Will it be possible to bump neutron's required neutron-lib > > > > version > > > > > then? 
> > > > > > > > -- > > > > > Matthew Thode > > > > > > > > -- > > > > Hervé Beraud > > > > Senior Software Engineer at Red Hat > > > > irc: hberaud > > > > https://github.com/4383/ > > > > https://twitter.com/4383hberaud > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > v6rDpkeNksZ9fFSyoY2o > > > > =ECSj > > > > -----END PGP SIGNATURE----- > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > It sounds like other projects will NOT need a release after this is > > bumped. If that is the case the release has my signoff as requirements > > PTL. > > > > -- > > Matthew Thode > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud Thank You :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Mon Mar 15 21:02:48 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 15 Mar 2021 22:02:48 +0100 Subject: [barbican][ironic][neutron] [QA][OpenstackSDK][swift] In-Reply-To: <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> References: <2B2B1E99-1139-4A1F-B3F0-229E240A8752.ref@ieee.org> <2B2B1E99-1139-4A1F-B3F0-229E240A8752@ieee.org> Message-ID: <6266934.E0M3N7Pm80@p1> Hi, Dnia poniedziałek, 15 marca 2021 16:36:27 CET Armstrong Foundjem pisze: > Hello! > > Quick reminder that for deliverables following the cycle-with-intermediary > model, the release team will use the latest Wallaby release available on > release week. > > The following deliverables have done a Wallaby release, but it was not > refreshed in the last two months: > > ansible-role-lunasa-hsm > ironic-inspector > ironic-prometheus-exporter > ironic-ui > ovn-octavia-provider > patrole > python-openstackclient > swift > > > You should consider making a new one very soon, so that we don't use an > outdated version for the final release. > > - Armstrong Foundjem (armstrong) Thx for the reminder. Patch for ovn-octavia-provider is proposed https:// review.opendev.org/c/openstack/releases/+/780673 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From gouthampravi at gmail.com Mon Mar 15 21:48:12 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 15 Mar 2021 14:48:12 -0700 Subject: [manila][requirements] FFE requested for python-manilaclient Message-ID: Hello, We were tracking one last feature inclusion into python-manilaclient for the wallaby release: https://review.opendev.org/c/openstack/python-manilaclient/+/775020 The change does not introduce any new requirements, but adds new SDK methods and shell implementations for end users to harness a feature addition in manila to be able to update their security services. The release team was very gracious in allowing us to wrap up code reviews pertaining to the above change over the weekend; however, the gate was uncooperative this morning by pointing us to lingering sporadic failures that are a combination of bad test case logic as well as occasional test environment setup issues. (Zuul is awesome, but it has a way to call out our procrastinations) No projects that depend on python-manilaclient need this change in their current form because it is net new feature functionality (existing SDK/CLI functionality is unaltered). However, I thought i should check here before requesting a re-release of python-manilaclient under the Wallaby cycle. Thanks, Goutham From mthode at mthode.org Mon Mar 15 22:42:26 2021 From: mthode at mthode.org (Matthew Thode) Date: Mon, 15 Mar 2021 17:42:26 -0500 Subject: [manila][requirements] FFE requested for python-manilaclient In-Reply-To: References: Message-ID: <20210315224226.v7uqyt2solsfkhtv@mthode.org> On 21-03-15 14:48:12, Goutham Pacha Ravi wrote: > Hello, > > We were tracking one last feature inclusion into python-manilaclient > for the wallaby release: > https://review.opendev.org/c/openstack/python-manilaclient/+/775020 > > The change does not introduce any new requirements, but adds new SDK > methods and shell implementations for end users to harness a feature > addition in manila to be able to update their security services. > > The release team was very gracious in allowing us to wrap up code > reviews pertaining to the above change over the weekend; however, the > gate was uncooperative this morning by pointing us to lingering > sporadic failures that are a combination of bad test case logic as > well as occasional test environment setup issues. (Zuul is awesome, > but it has a way to call out our procrastinations) > > No projects that depend on python-manilaclient need this change in > their current form because it is net new feature functionality > (existing SDK/CLI functionality is unaltered). However, I thought i > should check here before requesting a re-release of > python-manilaclient under the Wallaby cycle. > > Thanks, > Goutham > Fine by me from a requirements perspective. Thanks. I imagine this will be a version bump on top of https://review.opendev.org/780667 then? -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gouthampravi at gmail.com Mon Mar 15 22:57:13 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 15 Mar 2021 15:57:13 -0700 Subject: [manila][requirements] FFE requested for python-manilaclient In-Reply-To: <20210315224226.v7uqyt2solsfkhtv@mthode.org> References: <20210315224226.v7uqyt2solsfkhtv@mthode.org> Message-ID: On Mon, Mar 15, 2021 at 3:47 PM Matthew Thode wrote: > > On 21-03-15 14:48:12, Goutham Pacha Ravi wrote: > > Hello, > > > > We were tracking one last feature inclusion into python-manilaclient > > for the wallaby release: > > https://review.opendev.org/c/openstack/python-manilaclient/+/775020 > > > > The change does not introduce any new requirements, but adds new SDK > > methods and shell implementations for end users to harness a feature > > addition in manila to be able to update their security services. > > > > The release team was very gracious in allowing us to wrap up code > > reviews pertaining to the above change over the weekend; however, the > > gate was uncooperative this morning by pointing us to lingering > > sporadic failures that are a combination of bad test case logic as > > well as occasional test environment setup issues. (Zuul is awesome, > > but it has a way to call out our procrastinations) > > > > No projects that depend on python-manilaclient need this change in > > their current form because it is net new feature functionality > > (existing SDK/CLI functionality is unaltered). However, I thought i > > should check here before requesting a re-release of > > python-manilaclient under the Wallaby cycle. > > > > Thanks, > > Goutham > > > > Fine by me from a requirements perspective. Thanks. I imagine this > will be a version bump on top of https://review.opendev.org/780667 then? Yes. I worked with Hervé Beraud to get that release in, and pursue this FFE/RFE in parallel to include https://review.opendev.org/c/openstack/python-manilaclient/+/775020 into 2.6.0. > > -- > Matthew Thode From luke.camilleri at zylacomputing.com Mon Mar 15 23:00:51 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Tue, 16 Mar 2021 00:00:51 +0100 Subject: [Horizon][Victoria][ops] - Scope based policies in Horizon not working Message-ID: <2dcfca03-f801-efb2-e311-0d04aa0b6c8c@zylacomputing.com> Hi Everyone, we had sent an email to this mailing list last week on an issue that we are having which we thought was being generated by keystone but on closer inspection it seems that this is coming actually from Horizon policy files. We would like to implement OpenStack using the scope based approach (v3) so that we could keep the main roles (reader,member and Admin) and use the scopes to differentiate the targets. We have implemented the below settings in /etc/keystone/keystone.conf: enforce_scope = true enforce_new_defaults = true All works well with the cli and the scopes except Horizon. We constantly receive errors in /var/log/keystone.log like: 2021-03-15 23:15:55.772 1611435 WARNING keystone.server.flask.application [req-a89277a2-8fb2-4fcb-b337-e76a73bb3af9 1405ecf1ff7f4e4188648dbd1fb84c07 c584121f9d1a4334a3853c61bb5b9d93 - default default] You are not authorized to perform the requested action: identity:validate_token.: keystone.exception.ForbiddenAction: You are not authorized to perform the requested action: identity:validate_token but the same command (for example an "openstack domain list") works well from the CLI. 
We could identify the issue coming from the policy files shipped with the dashboard at /etc/openstack-dashboard/ as the rules within these policies are not scope based and in json format while according to the roadmap it seems that yaml is being used for the policy files and they should be scope based. When we try to convert the files to json to then upgrade them (to use the scope syntax) we receive the following errors: #oslopolicy-convert-json-to-yaml --config-file /etc/keystone/keystone.conf --namespace keystone --output-file keystone_policy.yaml --policy-file keystone_policy.json Traceback (most recent call last):   File "/bin/oslopolicy-convert-json-to-yaml", line 10, in     sys.exit(convert_policy_json_to_yaml())   File "/usr/lib/python3.6/site-packages/oslo_policy/generator.py", line 599, in convert_policy_json_to_yaml     conf.output_file)   File "/usr/lib/python3.6/site-packages/oslo_policy/generator.py", line 451, in _convert_policy_json_to_yaml     scope_types=default_rule.scope_types)   File "/usr/lib/python3.6/site-packages/oslo_policy/policy.py", line 1239, in __init__     scope_types=scope_types   File "/usr/lib/python3.6/site-packages/oslo_policy/policy.py", line 1154, in __init__     self.description = description   File "/usr/lib/python3.6/site-packages/oslo_policy/policy.py", line 1251, in description     raise InvalidRuleDefault('Description is required') oslo_policy.policy.InvalidRuleDefault: Invalid policy rule default: Description is required. We believe that converting such files to yaml and then upgrading them for scope based access (v3) should solve our issues based on the fact that if we disable the enforce _scope and enforce_new_defaults we can have a fully functional setup (but not scope based) which defies the initial objective Thanks in advance for anyone that may be able to assist with this issue From luke.camilleri at zylacomputing.com Mon Mar 15 23:02:45 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Tue, 16 Mar 2021 00:02:45 +0100 Subject: [keystone][ops]Keystone enforced policy issues - Victoria In-Reply-To: References: Message-ID: <0985dc2d-97ef-c39b-c69d-8beb012e2c4a@zylacomputing.com> A new mail has been sent referencing horizon instead of victoria in the subject line. Consider this email solved On 12/03/2021 12:30, Luke Camilleri wrote: > Updated subject with tags > > On 11/03/2021 22:39, Luke Camilleri wrote: >> Hi there, we have been troubleshooting keystone policies for a couple >> of days now (Victoria) and would like to reach out to anyone with >> some experience on the keystone policies. Basically we would like to >> follow the standard admin, member and reader roles together with the >> scopes function and have therefore enabled the below two option in >> keystone.conf >> >> enforce_scope = true >> enforce_new_defaults = true >> >> once enabled we started seeing the below error related to token >> validation in keystone.log: >> >> 2021-03-11 19:50:12.009 1047463 WARNING >> keystone.server.flask.application >> [req-33cda154-1d54-447e-8563-0676dc5d8471 >> 020bd854741e4ba69c87d4142cad97a5 c584121f9d1a4334a3853c61bb5b9d93 - >> default default] You are not authorized to perform the requested >> action: identity:validate_token.: keystone.exception.ForbiddenAction: >> You are not authorized to perform the requested action: >> identity:validate_token. 
>> >> The policy was previously setup as: >> >> identity:validate_token: rule:service_admin_or_token_subject >> >> but has now been implemented with the new scope format to: >> >> identity:validate_token: (role:reader and system_scope:all) or >> rule:service_role or rule:token_subject >> >> If we change the policy to the old one, we stop receiving the >> identity:validate_token exception. This only happens in horizon and >> running the same commands in the CLI (python-openstack-client) does >> not output any errors. To work around this behavior we place the old >> policy for validate_token rule in >> /usr/share/openstack-dashboard/openstack_dashboard/conf/keystone_policy.yaml >> and in the file /etc/keystone/keystone.conf >> >> A second way how we can solve the validate_token exception is to >> disable the option which has been enabled above "enforce_new_defaults >> = true" which will obviously allow the deprecated policy rules to >> become effective and hence has the same behavior as implementing the >> old policy as we did >> >> We would like to know if anyone have had this behavior and how it has >> been solved maybe someone can point us in the right direction to >> identify better what is going on. >> >> Last but not least, it seems that from Horizon, the admin user is >> being assigned a project scoped token instead of a system scoped >> token, while from the CLI the same admin user can successfully issue >> a system:all token and run commands across all resources. We would be >> very happy to receive any form of input related to the above issues >> we are facing >> >> Thanks in advance for any assistance >> From amotoki at gmail.com Tue Mar 16 00:33:08 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 16 Mar 2021 09:33:08 +0900 Subject: [Horizon][Victoria][ops] - Scope based policies in Horizon not working In-Reply-To: <2dcfca03-f801-efb2-e311-0d04aa0b6c8c@zylacomputing.com> References: <2dcfca03-f801-efb2-e311-0d04aa0b6c8c@zylacomputing.com> Message-ID: Hi, Horizon does not support the system-scoped token yet as of Victoria and coming Wallaby. Horizon talks with various services. Some services support the system-scoped token and some do not. It is not easy to handle such kind of mixed situations. As of Victoria, keystone and nova support the system-scoped token but other services do not. We can fall back to the project-scoped token with admin role, but many considerations are still needed. This is the reason of the slow progress in horizon. Victoria horizon also does not understand deprecated policy rules defined as policy-in-code. This was addressed in Wallaby, so we can use the same version of policy.yaml in keystone and horizon. However, horizon still does not handle the system-scoped token and horizon still depends on deprecated rules (rather than the system-scope aware rules). The horizon team recognizes this is the most important feature gap and will do the best to address it in Xena cycle. Thanks, Akihiro Motoki (irc: amotoki) On Tue, Mar 16, 2021 at 8:02 AM Luke Camilleri wrote: > > Hi Everyone, we had sent an email to this mailing list last week on an > issue that we are having which we thought was being generated by > keystone but on closer inspection it seems that this is coming actually > from Horizon policy files. > > We would like to implement OpenStack using the scope based approach (v3) > so that we could keep the main roles (reader,member and Admin) and use > the scopes to differentiate the targets. 
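As an illustration of the two rule styles quoted in this thread, an oslo.policy override file such as keystone's policy.yaml can carry either form for the same target. This is only a sketch; the exact defaults shipped by a given keystone release may differ:

# deprecated-style rule, matching the workaround described earlier in the thread
"identity:validate_token": "rule:service_admin_or_token_subject"
# scope-aware style used by the new defaults
# "identity:validate_token": "(role:reader and system_scope:all) or rule:service_role or rule:token_subject"

Overriding the default back to the deprecated form is the stop-gap until horizon understands the new system-scope rules.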
> > We have implemented the below settings in /etc/keystone/keystone.conf: > > enforce_scope = true > enforce_new_defaults = true > > All works well with the cli and the scopes except Horizon. We constantly > receive errors in /var/log/keystone.log like: > > 2021-03-15 23:15:55.772 1611435 WARNING > keystone.server.flask.application > [req-a89277a2-8fb2-4fcb-b337-e76a73bb3af9 > 1405ecf1ff7f4e4188648dbd1fb84c07 c584121f9d1a4334a3853c61bb5b9d93 - > default default] You are not authorized to perform the requested action: > identity:validate_token.: keystone.exception.ForbiddenAction: You are > not authorized to perform the requested action: identity:validate_token > > but the same command (for example an "openstack domain list") works well > from the CLI. > > We could identify the issue coming from the policy files shipped with > the dashboard at /etc/openstack-dashboard/ as the rules > within these policies are not scope based and in json format while > according to the roadmap it seems that yaml is being used for the policy > files and they should be scope based. > > When we try to convert the files to json to then upgrade them (to use > the scope syntax) we receive the following errors: > > #oslopolicy-convert-json-to-yaml --config-file > /etc/keystone/keystone.conf --namespace keystone --output-file > keystone_policy.yaml --policy-file keystone_policy.json > Traceback (most recent call last): > File "/bin/oslopolicy-convert-json-to-yaml", line 10, in > sys.exit(convert_policy_json_to_yaml()) > File "/usr/lib/python3.6/site-packages/oslo_policy/generator.py", > line 599, in convert_policy_json_to_yaml > conf.output_file) > File "/usr/lib/python3.6/site-packages/oslo_policy/generator.py", > line 451, in _convert_policy_json_to_yaml > scope_types=default_rule.scope_types) > File "/usr/lib/python3.6/site-packages/oslo_policy/policy.py", line > 1239, in __init__ > scope_types=scope_types > File "/usr/lib/python3.6/site-packages/oslo_policy/policy.py", line > 1154, in __init__ > self.description = description > File "/usr/lib/python3.6/site-packages/oslo_policy/policy.py", line > 1251, in description > raise InvalidRuleDefault('Description is required') > oslo_policy.policy.InvalidRuleDefault: Invalid policy rule default: > Description is required. > > We believe that converting such files to yaml and then upgrading them > for scope based access (v3) should solve our issues based on the fact > that if we disable the enforce _scope and enforce_new_defaults we can > have a fully functional setup (but not scope based) which defies the > initial objective > > Thanks in advance for anyone that may be able to assist with this issue > > From rosmaita.fossdev at gmail.com Tue Mar 16 03:45:54 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 15 Mar 2021 23:45:54 -0400 Subject: [cinder] wallaby feature freeze In-Reply-To: References: Message-ID: <279b04ef-99f6-687d-ac77-f99bf54c9613@gmail.com> On 3/15/21 1:32 PM, Ajitha Robert wrote: > Hi team, > > This patch https://review.opendev.org/c/openstack/cinder/+/778886 > requires a > feature freeze exception.  I am working on the the minor revisions and > the CI will be running tomorrow. Thank you. You can have an FFE. Keep in mind that you must respond quickly to revision requests because the patch must be merged by 21:00 UTC on Friday 19 March in order to be included in Wallaby. 
> -- > */Regards, > Ajitha R/ > * From rosmaita.fossdev at gmail.com Tue Mar 16 03:46:17 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 15 Mar 2021 23:46:17 -0400 Subject: [cinder] wallaby feature freeze In-Reply-To: <604f9caf.1c69fb81.760ef.2f02@mx.google.com> References: <604f9caf.1c69fb81.760ef.2f02@mx.google.com> Message-ID: <459873be-54d7-405d-8f4d-f38d5102b963@gmail.com> On 3/15/21 1:43 PM, Rajesh Ratnakaram wrote: > Hi, > > I have added missing features to Zadara cinder driver and posted: > https://review.opendev.org/c/openstack/cinder/+/774463 > > > The corresponding bp can be found at: > > https://blueprints.launchpad.net/cinder/+spec/zadara-wallaby-features > > > I wanted to push the above changes to Wallaby release, hence requesting > to add FFE for the above request. You can have an FFE. Keep in mind that you must respond quickly to revision requests because the patch must be merged by 21:00 UTC on Friday 19 March in order to be included in Wallaby. > > Thanks and Regards, > > Rajesh. > > Sent from Mail for > Windows 10 > From pangliye at inspur.com Tue Mar 16 06:57:27 2021 From: pangliye at inspur.com (=?gb2312?B?TGl5ZSBQYW5nKOXMwaLStSk=?=) Date: Tue, 16 Mar 2021 06:57:27 +0000 Subject: [venus][ptg] Xena PTG Message-ID: Hi, Venus is the OpenStack project that provides a one-stop solution for log collection, indexing, analysis, alerting, visualization, report generation and other needs. Additionally, this project plans to use machine learning algorithms to locate failures and find the root causes quickly, to improve operation and maintenance efficiency. If you are planning to attend venus sessions during the Xena PTG, please fill in the doodle[1] with time slots which are good for you before 23rd March so that we can book the best time slots for most of us. And, please add any topic which you want to discuss on the etherpad[2]. [1] https://doodle.com/poll/q72urs2mk7up45pf [2] https://etherpad.opendev.org/p/venus-xena-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3786 bytes Desc: not available URL: From arnaud.morin at gmail.com Tue Mar 16 07:48:53 2021 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Tue, 16 Mar 2021 07:48:53 +0000 Subject: [largescale-sig] Next meeting: March 10, 15utc In-Reply-To: References: Message-ID: Great topic! Sorry for being absent last week, I will do my best to be present next week! Cheers, On 15.03.21 - 13:56, Gene Kuo wrote: > Hi, > > I confirmed that I'm able to give a short talk to start off the discussion for the next video meeting. > The topic will be "RabbitMQ Clusters at Large Scale OpenStack Infrastructure". > > Regards, > Gene Kuo > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ > 在 2021年3月11日星期四 01:06,Belmiro Moreira 寫道: > > > Hi, > > we had the Large Scale SIG meeting today. > > > > Meeting logs are available at: > > http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-03-10-15.00.log.html > > > > We discussed topics for a new video meeting in 2 weeks. > > Details will be sent later. > > > > regards, > > Belmiro > > > > On Mon, Mar 8, 2021 at 12:43 PM Thierry Carrez wrote: > > > >> Hi everyone, > >> > >> Our next Large Scale SIG meeting will be this Wednesday in > >> #openstack-meeting-3 on IRC, at 15UTC. 
You can doublecheck how it > >> translates locally at: > >> > >> https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210310T15 > >> > >> Belmiro Moreira will chair this meeting. A number of topics have already > >> been added to the agenda, including discussing CentOS Stream, reflecting > >> on last video meeting and pick a topic for the next one. > >> > >> Feel free to add other topics to our agenda at: > >> https://etherpad.openstack.org/p/large-scale-sig-meeting > >> > >> Regards, > >> > >> -- > >> Thierry Carrez From hberaud at redhat.com Tue Mar 16 08:29:48 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 16 Mar 2021 09:29:48 +0100 Subject: [manila][requirements] FFE requested for python-manilaclient In-Reply-To: References: <20210315224226.v7uqyt2solsfkhtv@mthode.org> Message-ID: As already discussed with you on IRC if these changes are merged ASAP and if Matthew is Ok from a requirements perspective then I'm ok to proceed. A version of python-manilaclient client have been frozen yesterday so now we can take advantage of the RFE mechanismes to land these changes that face a flaky CI. Le lun. 15 mars 2021 à 23:59, Goutham Pacha Ravi a écrit : > On Mon, Mar 15, 2021 at 3:47 PM Matthew Thode wrote: > > > > On 21-03-15 14:48:12, Goutham Pacha Ravi wrote: > > > Hello, > > > > > > We were tracking one last feature inclusion into python-manilaclient > > > for the wallaby release: > > > https://review.opendev.org/c/openstack/python-manilaclient/+/775020 > > > > > > The change does not introduce any new requirements, but adds new SDK > > > methods and shell implementations for end users to harness a feature > > > addition in manila to be able to update their security services. > > > > > > The release team was very gracious in allowing us to wrap up code > > > reviews pertaining to the above change over the weekend; however, the > > > gate was uncooperative this morning by pointing us to lingering > > > sporadic failures that are a combination of bad test case logic as > > > well as occasional test environment setup issues. (Zuul is awesome, > > > but it has a way to call out our procrastinations) > > > > > > No projects that depend on python-manilaclient need this change in > > > their current form because it is net new feature functionality > > > (existing SDK/CLI functionality is unaltered). However, I thought i > > > should check here before requesting a re-release of > > > python-manilaclient under the Wallaby cycle. > > > > > > Thanks, > > > Goutham > > > > > > > Fine by me from a requirements perspective. Thanks. I imagine this > > will be a version bump on top of https://review.opendev.org/780667 then? > > Yes. I worked with Hervé Beraud to get that release in, and pursue > this FFE/RFE in parallel to include > https://review.opendev.org/c/openstack/python-manilaclient/+/775020 > into 2.6.0. 
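For reference, a re-release like the one discussed here is requested through a new entry in the corresponding deliverable file in the openstack/releases repo (deliverables/wallaby/python-manilaclient.yaml). The sketch below only shows the shape of such an entry; the hash value is a placeholder and the other fields of the file are omitted:

releases:
  - version: 2.6.0
    projects:
      - repo: openstack/python-manilaclient
        hash: 0000000000000000000000000000000000000000  # placeholder - the sha of the last change to include

Once the deliverable patch merges, the release automation tags the repository at that hash, which is why landing the pending client change quickly matters for this FFE.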
> > > > > > -- > > Matthew Thode > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Mar 16 08:43:44 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 16 Mar 2021 09:43:44 +0100 Subject: [release] Xena PTG In-Reply-To: References: Message-ID: Friendly reminder about our PTG session, don't forget to vote for time slots before the 21st of March Le mer. 10 mars 2021 à 10:52, Herve Beraud a écrit : > Oï releasers, > > The PTG is fast approaching (Apr 19 - 23). > > To help to organize the gathering: > 1) please fill the doodle[1] with the time slots that fit well for you; > 2) please add your PTG topics in our etherpad[2]. > > Voting will be closed on the 21st of March. > > Thanks for your reading, > > [1] https://doodle.com/poll/8d8n2picqnhchhsv > [2] https://etherpad.opendev.org/p/xena-ptg-os-relmgt > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y 
F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Tue Mar 16 09:10:23 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 16 Mar 2021 14:10:23 +0500 Subject: [victoria][neutron] OVN Gateway Chassis Issue Message-ID: Hi, I have four compute nodes in my lab setup. Initially all the four compute nodes were acting as gateway chassis with priority 1, 2, 3 and 4. Then I have specifically marked two node as a gateway chassis with below command on compute nodes. ovs-vsctl set open . external-ids:ovn-cms-options="enable-chassis-as-gw" The command ovn-nbctl list gateway_chassis start showing two chassis. I have checked via tcpdump, the public traffic started flowing from both nodes. Look like its doing round robin to send packets. Then I tried to remove one chassis from gateway and used below command. ovs-vsctl remove open . external-ids ovn-cms-options=enable-chassis-as-gw The ovn-nbctl list gateway_chassis started showing one gateway chassis but I can see from tcpdump that public traffic still flows from both gateway chassis. Below is the current status of chassis. root at network:/etc/neutron# ovn-sbctl list chassis _uuid : 532bb9d0-6667-462c-9631-0cb5360bd4dc encaps : [358c4a59-0bca-459c-958c-524eb8c385ce] external_ids : {datapath-type=system, iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", "neutron:liveness_check_at"="2021-03-16T08:52:41.361302+00:00", "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.364928+00:00", "neutron:ovn-metadata-id"="2ac66785-d0c7-43ee-8c78-5fd6ed6ccc73", "neutron:ovn-metadata-sb-cfg"="6157", ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", ovn-cms-options=""} hostname : virtual-hv2 name : "fdfae005-7473-486a-b331-8a54c53c1279" nb_cfg : 6157 transport_zones : [] vtep_logical_switches: [] _uuid : a99ab389-96a5-4a58-a301-34618868450a encaps : [6e7490ce-3c58-4a1c-999d-ff1638c66feb] external_ids : {datapath-type=system, iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", neutron-metadata-proxy-networks="dc917847-f70f-4de0-9865-3e9594c65ef1", "neutron:liveness_check_at"="2021-03-16T08:52:41.368768+00:00", "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.372045+00:00", "neutron:ovn-metadata-id"="3441fc3c-ca43-4360-8210-8c9ebe4fc13d", "neutron:ovn-metadata-sb-cfg"="6157", ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", ovn-cms-options=""} hostname : kvm10-a1-khi01 name : "87504098-4474-40fc-9576-ac449c1c4448" nb_cfg : 6157 transport_zones : [] vtep_logical_switches: [] _uuid : b9bdfe12-fe27-4580-baee-159f871c442b encaps : [52a8f523-9740-4333-a4a4-69bf5e27117c] external_ids : {datapath-type=system, iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", neutron-metadata-proxy-networks="dc917847-f70f-4de0-9865-3e9594c65ef1", "neutron:liveness_check_at"="2021-03-16T08:52:41.326719+00:00", "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.342214+00:00", "neutron:ovn-metadata-id"="2a751610-97a8-4688-a719-df3616f4f770", "neutron:ovn-metadata-sb-cfg"="6157", ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", ovn-cms-options=""} hostname : kvm12-a1-khi01 name : 
"82630e57-668e-4f67-a3fb-a173f4da432a" nb_cfg : 6157 transport_zones : [] vtep_logical_switches: [] _uuid : 669d1ae3-7a5d-4ec3-869d-ca6240f9ae2c encaps : [ac8022b3-1ea5-45c7-a7e8-74db7b627df4] external_ids : {datapath-type=system, iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", "neutron:liveness_check_at"="2021-03-16T08:52:41.347144+00:00", "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.352021+00:00", "neutron:ovn-metadata-id"="2d5ce6fd-6a9f-4356-9406-6ca91601af43", "neutron:ovn-metadata-sb-cfg"="6157", ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", ovn-cms-options=enable-chassis-as-gw} hostname : virtual-hv1 name : "731e842a-3a69-4044-87e9-32b7517d4f07" nb_cfg : 6157 transport_zones : [] vtep_logical_switches: [] Need help how can I permanently remove a gateway chassis that it should stop serving public traffic ? also is it something to do with priority ? - Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailakkina at gmail.com Tue Mar 16 09:16:25 2021 From: mailakkina at gmail.com (Nagaraj Akkina) Date: Tue, 16 Mar 2021 10:16:25 +0100 Subject: [live migration] [libvirt] [nova] Message-ID: Hi Team, We are using live migration in our environment from quite some time now, recently there are nested VMs created and after that "Live migration of nested VMs fails" Is there any documentation on this?. How to make live migration work on nested VMs? Regards, Akkina -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailakkina at gmail.com Tue Mar 16 09:20:09 2021 From: mailakkina at gmail.com (Nagaraj Akkina) Date: Tue, 16 Mar 2021 10:20:09 +0100 Subject: [live migration] [nova] [libvirt] [stein] Live migration of nested VMs fails Message-ID: Hi Team, We are using live migration in our environment from some time now, recently there are nested VMs created and after that "Live migration of nested VMs fails" How to make live migration work on nested VMs? Regards, Akkina ReplyForward -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Mar 16 09:34:42 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 16 Mar 2021 10:34:42 +0100 Subject: [Release-job-failures] Release of openstack/monasca-grafana-datasource for ref refs/tags/1.3.0 failed In-Reply-To: References: Message-ID: Here is a summary from our previous meeting [1] related to this topic. Apparently the error is caused by the old version of nodejs [2] (usually lower than v6). This error is on an independent project [3] so normally the NodeJS supported runtimes are the same that with the current series (AFAIK we use NodeJS 10) [4] This job uses xenial and xenial seems to provide nodejs 4.2.6 [5] that could explain why we see this error. However, this job inherits from `publish-openstack-artifacts` [6] and this one defines a nodeset based on focal [7] so I wonder why we use xenial here [8]. Notice that focal comes with NodeJS 10.19 [4]. The last execution of this job (`release-openstack-javascript`) was successful but it was 18 months ago [9], unfortunately logs aren't longer available for these builds. It could be worth seeing which version of ubuntu was used during this period. Maybe the solution is simply moving this javascript job onto a nodeset based on focal. Thoughts? 
[1] eavesdrop.openstack.org/meetings/releaseteam/2021/releaseteam.2021-03-11-17.00.log.html#l-186 [2] https://stackoverflow.com/questions/64414716/unexpected-token-in-yarn-installation [3] https://releases.openstack.org/reference/release_models.html#independent [4] https://governance.openstack.org/tc/reference/runtimes/wallaby.html#node-js-runtime-for-wallaby [5] https://pkgs.org/search/?q=nodejs [6] https://opendev.org/openstack/project-config/src/branch/master/zuul.d/jobs.yaml#L780 [7] https://opendev.org/openstack/project-config/src/branch/master/zuul.d/jobs.yaml#L8 [8] https://zuul.opendev.org/t/openstack/build/cdffd2a26a0d4a5b8137edb392fa5971/log/job-output.txt#743 [9] https://zuul.opendev.org/t/openstack/builds?job_name=release-openstack-javascript Le jeu. 11 mars 2021 à 12:42, Thierry Carrez a écrit : > We had a release job failure during the processing of the tag event when > 1.3.0 was (successfully) pushed to openstack/monasca-grafana-datasource. > > Tags on this repository trigger the release-openstack-javascript job, > which failed during pre playbook when trying to run yarn --version with > the following error: > > /usr/share/yarn/lib/cli.js:46100 > let { > ^ > > SyntaxError: Unexpected token { > at exports.runInThisContext (vm.js:53:16) > at Module._compile (module.js:373:25) > at Object.Module._extensions..js (module.js:416:10) > at Module.load (module.js:343:32) > at Function.Module._load (module.js:300:12) > at Module.require (module.js:353:17) > at require (internal/module.js:12:17) > at Object. (/usr/share/yarn/bin/yarn.js:24:13) > at Module._compile (module.js:409:26) > at Object.Module._extensions..js (module.js:416:10) > > See https://zuul.opendev.org/t/openstack/build > /cdffd2a26a0d4a5b8137edb392fa5971 > > This prevented the job from running (likely resulting in nothing being > uploaded to NPM? Not a JS job specialist), which in turn prevented > announce-release job from announcing it. > > -- > Thierry Carrez (ttx) > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Tue Mar 16 10:20:54 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Tue, 16 Mar 2021 10:20:54 +0000 Subject: [victoria][neutron] OVN Gateway Chassis Issue In-Reply-To: References: Message-ID: Hi Ammad, On Tue, Mar 16, 2021 at 9:15 AM Ammad Syed wrote: > > Hi, > > I have four compute nodes in my lab setup. Initially all the four compute nodes were acting as gateway chassis with priority 1, 2, 3 and 4. > > Then I have specifically marked two node as a gateway chassis with below command on compute nodes. 
> > ovs-vsctl set open . external-ids:ovn-cms-options="enable-chassis-as-gw" > > The command ovn-nbctl list gateway_chassis start showing two chassis. I have checked via tcpdump, the public traffic started flowing from both nodes. Look like its doing round robin to send packets. > > Then I tried to remove one chassis from gateway and used below command. > > ovs-vsctl remove open . external-ids ovn-cms-options=enable-chassis-as-gw > > The ovn-nbctl list gateway_chassis started showing one gateway chassis but I can see from tcpdump that public traffic still flows from both gateway chassis. > > Below is the current status of chassis. > > root at network:/etc/neutron# ovn-sbctl list chassis > _uuid : 532bb9d0-6667-462c-9631-0cb5360bd4dc > encaps : [358c4a59-0bca-459c-958c-524eb8c385ce] > external_ids : {datapath-type=system, iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", "neutron:liveness_check_at"="2021-03-16T08:52:41.361302+00:00", "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.364928+00:00", "neutron:ovn-metadata-id"="2ac66785-d0c7-43ee-8c78-5fd6ed6ccc73", "neutron:ovn-metadata-sb-cfg"="6157", ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", ovn-cms-options=""} > hostname : virtual-hv2 > name : "fdfae005-7473-486a-b331-8a54c53c1279" > nb_cfg : 6157 > transport_zones : [] > vtep_logical_switches: [] > > _uuid : a99ab389-96a5-4a58-a301-34618868450a > encaps : [6e7490ce-3c58-4a1c-999d-ff1638c66feb] > external_ids : {datapath-type=system, iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", neutron-metadata-proxy-networks="dc917847-f70f-4de0-9865-3e9594c65ef1", "neutron:liveness_check_at"="2021-03-16T08:52:41.368768+00:00", "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.372045+00:00", "neutron:ovn-metadata-id"="3441fc3c-ca43-4360-8210-8c9ebe4fc13d", "neutron:ovn-metadata-sb-cfg"="6157", ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", ovn-cms-options=""} > hostname : kvm10-a1-khi01 > name : "87504098-4474-40fc-9576-ac449c1c4448" > nb_cfg : 6157 > transport_zones : [] > vtep_logical_switches: [] > > _uuid : b9bdfe12-fe27-4580-baee-159f871c442b > encaps : [52a8f523-9740-4333-a4a4-69bf5e27117c] > external_ids : {datapath-type=system, iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", neutron-metadata-proxy-networks="dc917847-f70f-4de0-9865-3e9594c65ef1", "neutron:liveness_check_at"="2021-03-16T08:52:41.326719+00:00", "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.342214+00:00", "neutron:ovn-metadata-id"="2a751610-97a8-4688-a719-df3616f4f770", "neutron:ovn-metadata-sb-cfg"="6157", ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", ovn-cms-options=""} > hostname : kvm12-a1-khi01 > name : "82630e57-668e-4f67-a3fb-a173f4da432a" > nb_cfg : 6157 > transport_zones : [] > vtep_logical_switches: [] > > _uuid : 669d1ae3-7a5d-4ec3-869d-ca6240f9ae2c > encaps : [ac8022b3-1ea5-45c7-a7e8-74db7b627df4] > external_ids : {datapath-type=system, iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", "neutron:liveness_check_at"="2021-03-16T08:52:41.347144+00:00", "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.352021+00:00", "neutron:ovn-metadata-id"="2d5ce6fd-6a9f-4356-9406-6ca91601af43", "neutron:ovn-metadata-sb-cfg"="6157", ovn-bridge-mappings="ext-net1:br-ext", 
ovn-chassis-mac-mappings="", ovn-cms-options=enable-chassis-as-gw} > hostname : virtual-hv1 > name : "731e842a-3a69-4044-87e9-32b7517d4f07" > nb_cfg : 6157 > transport_zones : [] > vtep_logical_switches: [] > > Need help how can I permanently remove a gateway chassis that it should stop serving public traffic ? also is it something to do with priority ? > Is it possible that we still have router ports scheduled onto that chassis ? You list your routers, router_ports and which gateway chassis the port is scheduled on with the following commands: # List your routers $ ovn-nbctl lr-list # List the router ports in that router $ ovn-nbctl lrp-list # List which gateway chassis (if any) that router port is scheduled on. Here you will see the priority, the highest is where the port should be located $ ovn-nbctl lrp-get-gateway-chassis If that's the case, I think the OVN driver is not automatically accounting for rescheduling these ports when a gateway chassis is removed/added. We need to discuss whether this is something we want to have automatically like this or not because it can cause data disruption. Alternatively we could have a "rescheduling" script that could be run by operators when they want to add/remove a gateway chassis so they can plan before moving the ports from one chassis to another (again potentially causing disruptions). Hope that helps, Lucas > - Ammad From mark at stackhpc.com Tue Mar 16 10:49:14 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 16 Mar 2021 10:49:14 +0000 Subject: [kolla][ptg] Xena PTG slots Message-ID: Hi, The Xena PTG [1] is fast approaching, and it is time to decide on our time slots. The event will take place on 19th - 23rd April. For the last few events, we have settled on the following schedule: * Monday: 13:00 - 17:00 UTC (General & Kolla) * Tuesday: 13:00 - 17:00 UTC (Kolla Ansible) * Wednesday: 13:00 - 15:00 UTC (Kayobe) This has largely worked quite well, allowing the contributor base in Europe and the US to collaborate. This schedule does tend to exclude contributors in other timezones however, and I would like to explore ideas to increase our global coverage. I would be interested to hear from anyone who would be interested in joining one or more slots in the 6:00 - 8:00 UTC window on one or more days. Please respond here or in IRC if you are interested. PS remember to sign up [2]! Thanks, Mark [1] https://www.openstack.org/ptg/ [2] https://april2021-ptg.eventbrite.com/ From syedammad83 at gmail.com Tue Mar 16 10:50:28 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 16 Mar 2021 15:50:28 +0500 Subject: [victoria][neutron] OVN Gateway Chassis Issue In-Reply-To: References: Message-ID: Hi Lucas, I have checked, currently there is only one router. root at network:/etc/neutron# ovn-nbctl lr-list 9f6111a9-3231-4f60-8199-78780760fe34 (neutron-ff36ce12-78fc-4ac9-9ae9-5a18ec1002bd) with two ports. root at network:/etc/neutron# ovn-nbctl lrp-list 9f6111a9-3231-4f60-8199-78780760fe34 a33dc21f-dcd7-4714-8003-9e21bc283d03 (lrp-52409f01-b140-4729-90b4-409c7c9b3f4b) 76dd76a7-1c64-4686-aeac-44ae63677404 (lrp-b12c1aa0-0857-494c-92a0-ee54cc7e01cc) One port showing no output. root at network:/etc/neutron# ovn-nbctl lrp-get-gateway-chassis a33dc21f-dcd7-4714-8003-9e21bc283d03 root at network:/etc/neutron# Other port showing gateway chassis. 
root at network:/etc/neutron# ovn-nbctl lrp-get-gateway-chassis 76dd76a7-1c64-4686-aeac-44ae63677404 lrp-b12c1aa0-0857-494c-92a0-ee54cc7e01cc_731e842a-3a69-4044-87e9-32b7517d4f07 1 This is the current active gateway chassis. root at network:/etc/neutron# ovn-nbctl list gateway_chassis _uuid : 4e23ff9b-9588-46aa-9ed1-69fea503a729 chassis_name : "731e842a-3a69-4044-87e9-32b7517d4f07" external_ids : {} name : lrp-b12c1aa0-0857-494c-92a0-ee54cc7e01cc_731e842a-3a69-4044-87e9-32b7517d4f07 options : {} priority : 1 This chassis fdfae005-7473-486a-b331-8a54c53c1279 is the one that I have removed from gateway chassis and I don't see any port scheduled on it. I have tried to reboot the chassis but when the chassis comes back, the uplink port start showing traffic in tcpdump. - Ammad On Tue, Mar 16, 2021 at 3:21 PM Lucas Alvares Gomes wrote: > Hi Ammad, > > > On Tue, Mar 16, 2021 at 9:15 AM Ammad Syed wrote: > > > > Hi, > > > > I have four compute nodes in my lab setup. Initially all the four > compute nodes were acting as gateway chassis with priority 1, 2, 3 and 4. > > > > Then I have specifically marked two node as a gateway chassis with below > command on compute nodes. > > > > ovs-vsctl set open . external-ids:ovn-cms-options="enable-chassis-as-gw" > > > > The command ovn-nbctl list gateway_chassis start showing two chassis. I > have checked via tcpdump, the public traffic started flowing from both > nodes. Look like its doing round robin to send packets. > > > > Then I tried to remove one chassis from gateway and used below command. > > > > ovs-vsctl remove open . external-ids ovn-cms-options=enable-chassis-as-gw > > > > The ovn-nbctl list gateway_chassis started showing one gateway chassis > but I can see from tcpdump that public traffic still flows from both > gateway chassis. > > > > Below is the current status of chassis. 
> > > > root at network:/etc/neutron# ovn-sbctl list chassis > > _uuid : 532bb9d0-6667-462c-9631-0cb5360bd4dc > > encaps : [358c4a59-0bca-459c-958c-524eb8c385ce] > > external_ids : {datapath-type=system, > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", > is-interconn="false", > "neutron:liveness_check_at"="2021-03-16T08:52:41.361302+00:00", > "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.364928+00:00", > "neutron:ovn-metadata-id"="2ac66785-d0c7-43ee-8c78-5fd6ed6ccc73", > "neutron:ovn-metadata-sb-cfg"="6157", > ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", > ovn-cms-options=""} > > hostname : virtual-hv2 > > name : "fdfae005-7473-486a-b331-8a54c53c1279" > > nb_cfg : 6157 > > transport_zones : [] > > vtep_logical_switches: [] > > > > _uuid : a99ab389-96a5-4a58-a301-34618868450a > > encaps : [6e7490ce-3c58-4a1c-999d-ff1638c66feb] > > external_ids : {datapath-type=system, > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", > is-interconn="false", > neutron-metadata-proxy-networks="dc917847-f70f-4de0-9865-3e9594c65ef1", > "neutron:liveness_check_at"="2021-03-16T08:52:41.368768+00:00", > "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.372045+00:00", > "neutron:ovn-metadata-id"="3441fc3c-ca43-4360-8210-8c9ebe4fc13d", > "neutron:ovn-metadata-sb-cfg"="6157", > ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", > ovn-cms-options=""} > > hostname : kvm10-a1-khi01 > > name : "87504098-4474-40fc-9576-ac449c1c4448" > > nb_cfg : 6157 > > transport_zones : [] > > vtep_logical_switches: [] > > > > _uuid : b9bdfe12-fe27-4580-baee-159f871c442b > > encaps : [52a8f523-9740-4333-a4a4-69bf5e27117c] > > external_ids : {datapath-type=system, > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", > is-interconn="false", > neutron-metadata-proxy-networks="dc917847-f70f-4de0-9865-3e9594c65ef1", > "neutron:liveness_check_at"="2021-03-16T08:52:41.326719+00:00", > "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.342214+00:00", > "neutron:ovn-metadata-id"="2a751610-97a8-4688-a719-df3616f4f770", > "neutron:ovn-metadata-sb-cfg"="6157", > ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", > ovn-cms-options=""} > > hostname : kvm12-a1-khi01 > > name : "82630e57-668e-4f67-a3fb-a173f4da432a" > > nb_cfg : 6157 > > transport_zones : [] > > vtep_logical_switches: [] > > > > _uuid : 669d1ae3-7a5d-4ec3-869d-ca6240f9ae2c > > encaps : [ac8022b3-1ea5-45c7-a7e8-74db7b627df4] > > external_ids : {datapath-type=system, > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", > is-interconn="false", > "neutron:liveness_check_at"="2021-03-16T08:52:41.347144+00:00", > "neutron:metadata_liveness_check_at"="2021-03-16T08:52:41.352021+00:00", > "neutron:ovn-metadata-id"="2d5ce6fd-6a9f-4356-9406-6ca91601af43", > "neutron:ovn-metadata-sb-cfg"="6157", > ovn-bridge-mappings="ext-net1:br-ext", ovn-chassis-mac-mappings="", > ovn-cms-options=enable-chassis-as-gw} > > hostname : virtual-hv1 > > name : "731e842a-3a69-4044-87e9-32b7517d4f07" > > nb_cfg : 6157 > > transport_zones : [] > > vtep_logical_switches: [] > > > > Need help how can I permanently remove a gateway chassis that it should > stop serving public traffic ? also is it something to do with priority ? > > > > Is it possible that we still have router ports scheduled onto that chassis > ? 
> > You list your routers, router_ports and which gateway chassis the port > is scheduled on with the following commands: > > # List your routers > $ ovn-nbctl lr-list > > # List the router ports in that router > $ ovn-nbctl lrp-list > > # List which gateway chassis (if any) that router port is scheduled > on. Here you will see the priority, the highest is where the port > should be located > $ ovn-nbctl lrp-get-gateway-chassis > > If that's the case, I think the OVN driver is not automatically > accounting for rescheduling these ports when a gateway chassis is > removed/added. We need to discuss whether this is something we want to > have automatically like this or not because it can cause data > disruption. > > Alternatively we could have a "rescheduling" script that could be run > by operators when they want to add/remove a gateway chassis so they > can plan before moving the ports from one chassis to another (again > potentially causing disruptions). > > Hope that helps, > Lucas > > > - Ammad > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Mar 16 11:04:12 2021 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 16 Mar 2021 12:04:12 +0100 Subject: [largescale-sig] Video meeting Mar 24: "Scaling RabbitMQ Clusters" Message-ID: Hi everyone, The Large Scale SIG organizes video meetings around specific scaling topics, just one hour of discussion bootstrapped by a short presentation from operators of large scale deployments of OpenStack. Last month we did one on "Regions vs. Cells" which was pretty popular. Our next video meeting will be next Wednesday, March 24, at 1500utc on Zoom (link/password will be provided on this thread the day before the event). The theme will be: "Scaling RabbitMQ Clusters" quickstarted by a presentation from Gene Kuo (LINE). To see how 1500UTC will translate into in your corner of the world, you can doublecheck: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210324T15 If you're interested in sharing your experience on this topic, or just want to learn more about that, please mark your calendar and join us next Wednesday ! -- Thierry Carrez (ttx) From felipe.ebert at gmail.com Tue Mar 16 12:19:25 2021 From: felipe.ebert at gmail.com (Felipe Ebert) Date: Tue, 16 Mar 2021 13:19:25 +0100 Subject: [all] Help on Improving Software Documentation Through Code Reviews Message-ID: Dear members of OpenStack, We are an international team of researchers, from the University of Bari - Italy, Federal University of Pernambuco - Brazil, University of Adelaide - Australia, and Eindhoven University of Technology - the Netherlands. The goal of our research is to improve software documentation and maintainability of open-source projects by providing automatic support of source code comments based on code review discussions. We would like to learn identifying relevant information from code review discussions such that tooling can be developed to improve software documentation and maintainability of open-source projects. Therefore, we would like to talk to open-source developers and interview them. If you are interested in talking about your experiences as an open-source reviewer then please schedule a meeting with us using YouCanBook.me here: [ https://tue-interview.youcanbook.me]. We are interested in different experiences, therefore, we are looking to talk to people of all age-groups, genders, and people with different roles in open-source projects. 
So even whether you are a first-time contributor or a senior maintainer don’t hesitate to contact us. The interview itself will take no longer than 30 min and will be scheduled using Microsoft Teams. We will record the interview and the recording and transcript of the interview will be stored securely, and only made available to the researchers that are involved in this project. Additionally, we plan to use anonymized results of the interview in a scientific publication. Kind regards and thanks in advance, -- Nicole Novielli, University of Bari, Fernando Castor, Federal University of Pernambuco, Christoph Treude, University of Adelaide, Alexander Serebrenik, Eindhoven University of Technology, Felipe Ebert, Eindhoven University of Technology -------------- next part -------------- An HTML attachment was scrubbed... URL: From jahson.babel at cc.in2p3.fr Tue Mar 16 13:48:47 2021 From: jahson.babel at cc.in2p3.fr (Jahson Babel) Date: Tue, 16 Mar 2021 14:48:47 +0100 Subject: [ops] Bandwidth problem on computes Message-ID: <7f573860-4ef8-39e0-d475-6ec2319c0880@cc.in2p3.fr> Hello everyone, I have a bandwidth problem between the computes nodes of an openstack cluster. This cluster runs on Rocky version with OpenVSwitch. To simplify I'll just pick 3 servers, one controller and two computes nodes all connected to the same switch. Every server is configured with two 10G links. Those links are configured in LACP /teaming. From what I understand of teaming and this configuration I should be able to get 10Gbps between all three nodes. But if I iperf we are way below this : compute1 # sudo iperf3 -c compute2 -p 5201 Connecting to host compute2, port 5201 [  4] local X.X.X.X port 44946 connected to X.X.X.X port 5201 [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd [  4]   0.00-1.00   sec   342 MBytes  2.87 Gbits/sec  137    683 KBytes [  4]   1.00-2.00   sec   335 MBytes  2.81 Gbits/sec    8    501 KBytes  Plus the problem seems to be only present with incoming traffic. Which mean I can almost get the full 10gbps if I iperf from a compute to the controller. compute1 # sudo iperf3 -c controller -p 5201 Connecting to host controller, port 5201 [  4] local X.X.X.X port 39008 connected to X.X.X.X port 5201 [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd [  4]   0.00-1.00   sec  1.10 GBytes  9.41 Gbits/sec    0    691 KBytes [  4]   1.00-2.00   sec  1.09 GBytes  9.38 Gbits/sec    0    803 KBytes If I do the opposite I get the same results I was getting between the 2 computes. From the tests we've done it seems related to the openstack's services, specifically neutron or OpenVSwitch. From the time those services are running we can't get the full bandwidth. Stopping the services won't fix the issue, in our case removing the packages and rebooting is the only way to obtain the full bandwidth between computes. I voluntarily didn't mention VMs to simplify the question but of course this behavior can also be observed in VMs Knowing that we can achieve 10Gbps it doesn't seems related to the hardware nor the OS. That why we suspect OpenStack's services. But I couldn't find any evidence or misconfiguration that could confirm that. So if anyone got some hints about that kind of setup and/or how mitigate bandwidth decrease I would appreciate. Let know if you need more info. Thanks in advance, Jahson -------------- next part -------------- A non-text attachment was scrubbed... 
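A hedged troubleshooting sketch for the compute-to-compute throughput drop described above. This is only a place to start looking, not a confirmed diagnosis, and the device names (bond0/team0, eno1) are placeholders:

# Compare offload settings on the physical NICs and on the bond/team device;
# TSO/GRO/LRO differences between hosts can produce this kind of asymmetric
# iperf result
ethtool -k eno1 | egrep 'segmentation|receive-offload'
ethtool -k bond0 | egrep 'segmentation|receive-offload'

# If teamd provides the LACP aggregate, check the runner and per-port state
teamdctl team0 state

# Check whether the deployment attached the aggregate to an OVS bridge
# (provider/external bridge) and verify its MTU
ovs-vsctl show
ip link show bond0

# Watch for drops/errors on the receiving compute while iperf is running
ip -s link show bond0

If the aggregate really is plugged into an OVS bridge, repeating the iperf between addresses that do not traverse that bridge may help confirm whether OVS is part of the picture.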
Name: smime.p7s Type: application/pkcs7-signature Size: 2964 bytes Desc: S/MIME Cryptographic Signature URL: From balazs.gibizer at est.tech Tue Mar 16 15:46:05 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 16 Mar 2021 16:46:05 +0100 Subject: [live migration] [libvirt] [nova] In-Reply-To: References: Message-ID: On Tue, Mar 16, 2021 at 10:16, Nagaraj Akkina wrote: > Hi Team, > > We are using live migration in our environment from quite some time > now, recently there are nested VMs created and after that "Live > migration of nested VMs fails" Is there any documentation on this?. > How to make live migration work on nested VMs? Hi, Could you be a bit more specific? Could you paste the error message / log / stack trace you are seeing? (e.g. to paste.openstack.org) Cheers, gibi > > Regards, > Akkina From tonyliu0592 at hotmail.com Tue Mar 16 16:06:40 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Tue, 16 Mar 2021 16:06:40 +0000 Subject: [live migration] [nova] [libvirt] [stein] Live migration of nested VMs fails In-Reply-To: References: Message-ID: https://www.linux-kvm.org/page/Nested_Guests#Limitations Tony > -----Original Message----- > From: Nagaraj Akkina > Sent: Tuesday, March 16, 2021 2:20 AM > To: openstack-discuss at lists.openstack.org > Subject: [live migration] [nova] [libvirt] [stein] Live migration of > nested VMs fails > > Hi Team, > > We are using live migration in our environment from some time now, > recently there are nested VMs created and after that "Live migration of > nested VMs fails" How to make live migration work on nested VMs? > > Regards, > Akkina > NzX0E3K_CM5fBiLgE=s80> ReplyForward From rosmaita.fossdev at gmail.com Tue Mar 16 17:04:37 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 16 Mar 2021 13:04:37 -0400 Subject: [cinder] docs job failures Message-ID: Note to reviewers, since all FFE patches need to be reviewed and merged by friday: Don't let a Zuul -1 for openstack-tox-docs stop you from reviewing patches. Gorka has a fix posted that we believe will address this: https://review.opendev.org/c/openstack/cinder/+/780907 but it needs to make its way through check and gate first, so it may be awhile. An example of the docs failure is: https://zuul.opendev.org/t/openstack/build/28c1d829e2d4469d82128a4f295f4dad It happens very early in the job, so even if there's a problem with the docs on a patch you're looking at, you aren't going to see it. If you want to check the docs on your patch locally, the change in this patch works for me, but YMMV: https://review.opendev.org/c/openstack/cinder/+/780696 From noonedeadpunk at ya.ru Tue Mar 16 17:30:19 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 16 Mar 2021 19:30:19 +0200 Subject: [openstack-ansible] vPTG April 2021 Message-ID: <658581615915192@mail.yandex.ru> An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Mar 16 17:35:06 2021 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 16 Mar 2021 12:35:06 -0500 Subject: [oslo] Xena PTG Etherpad Message-ID: Daniel created an etherpad for the Xena PTG[0]. If you have any topics to discuss please add them so we know whether we need to reserve a time. Thanks. 
-Ben 0: https://etherpad.opendev.org/p/oslo-xena-topics From C-Albert.Braden at charter.com Tue Mar 16 18:53:11 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Tue, 16 Mar 2021 18:53:11 +0000 Subject: [kolla] Upgrading from Centos7 Train to Centos8 Ussuri Message-ID: I'm testing the Train7->Ussuri8 upgrade on a heat stack, following this document: OpenStack Docs: CentOS 8 I used the instructions here to successfully remove a controller from the cluster: https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers Now I'm ready to add one, using the instructions here: https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#adding-new-controllers They say to run bootstrap-servers: kolla-ansible -i bootstrap-servers [ --limit ] The question is, how to run it? My build server is running Centos 7 and has Python 2.7.5. Should I just download Ussuri to my existing build server, or do I need to build a new Centos 8 build server with Python 3? I read the "potential issues" text but it's not obvious whether it is safe to run Ussuri bootstrap-servers on Train hosts. If I don't run bootstrap-servers on the Train controllers, then how does the new Ussuri controller get integrated (RMQ, MariaDB, etc.)? I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Mar 16 19:35:03 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 16 Mar 2021 15:35:03 -0400 Subject: [cinder] docs job failures In-Reply-To: References: Message-ID: <6de72910-a438-364c-9b4b-dacc30c626ec@gmail.com> https://review.opendev.org/c/openstack/cinder/+/780907 has merged, so the openstack-tox-docs job for cinder should be functional once again and you can recheck your patches On 3/16/21 1:04 PM, Brian Rosmaita wrote: > Note to reviewers, since all FFE patches need to be reviewed and merged > by friday: > > Don't let a Zuul -1 for openstack-tox-docs stop you from reviewing > patches.  Gorka has a fix posted that we believe will address this: >   https://review.opendev.org/c/openstack/cinder/+/780907 > but it needs to make its way through check and gate first, so it may be > awhile. > > An example of the docs failure is: > https://zuul.opendev.org/t/openstack/build/28c1d829e2d4469d82128a4f295f4dad > > It happens very early in the job, so even if there's a problem with the > docs on a patch you're looking at, you aren't going to see it.  
If you > want to check the docs on your patch locally, the change in this patch > works for me, but YMMV: >   https://review.opendev.org/c/openstack/cinder/+/780696 > From lbragstad at gmail.com Tue Mar 16 19:53:19 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 16 Mar 2021 14:53:19 -0500 Subject: Properly consuming system-scope and operator workflows Message-ID: Hey all, Several projects have made significant progress adopting default roles and system-scope during the Wallaby cycle. One part of that work is obviously testing and we've been stringing together various tempest jobs with the new policies enabled to see what falls out. Ironic was working on some of this last week and it led to an interesting discussion on Friday about how we should handle system-scope with the current assumptions about administrator (e.g., project-admin) workflows. The majority of the discussion took place in IRC, but I tried to consolidate as much of the context as I could into an etherpad [0]. I'm socializing the overall discussion so we can get feedback from users, operators, and developers, as well as summarize some comparisons to other systems. Today, by default system administrators are anyone with the 'admin' role on a project. These users are allowed to do things on behalf of users, like rebooting an instance. This mostly works because the administrator's token is project-scoped and the resource they're interacting with is owned by a project. This assumption isn't true in the new default policies. The question is, should system administrators, or users with system-scoped tokens, be allowed to use those tokens to make changes to project-specific resources (e.g., creating a private image in a project, rebooting an instance, deleting a snapshot, etc)? The general feeling was that, yes, system administrators should have the ability to interact with resources owned by projects in their deployment. The discussion then focused on how we actually make that happen. We came up with three different solutions, all of which are detailed in the etherpad [0], but I'll summarize them below. Option 1: Let system-administrators exchange system-scoped (god-mode) tokens for tokens scoped to a specific project without an explicit role assignment on the project Option 2: Require system-administrators to grant themselves authorization on the project, obtain a new token, and perform the operation Option 3: Allow system-administrators to use system-scoped tokens to operate on project-specific resources We're iterating through the pros and cons of each approach in the etherpad. If you haven't taken a look, please do so and add your thoughts. If you're an operator with an existing workflow, we want to hear your feedback. My request for additional feedback aside, several people in the discussion brought up comparison to other identity systems and frameworks (e.g., how do kerberos admins usually handle this sort of thing? how would we fix this if we were using OAuth 2.0/OIDC?) I spent some time Friday thinking through what an OAuth 2.0 flow would look like in OpenStack [1][2]. I don't think my findings were earth-shattering, and I'm sure there are other ways we could implement OAuth 2.0 flows. While adhering to a standard offers guidance and makes some things easier (like integrating with other systems), I don't think it would be the silver bullet for fixing our RBAC woes (at least not this late in the game). OpenStack APIs are massive, complex, and operate at different parts of the technology stack. 
Some of those APIs are meant for typical users just requesting resources. Others are written with the system administrator as the end user. OpenStack's explosive API and functionality growth out-paced its authorization tools to protect it, allowing project_id to bake itself into resources, services, and components (API code, middleware, database layers). Relaxing the assumption of an always-present project ID in every token is going to be hard and given OpenStack's distributed approach to policy enforcement, truly fixing it would require consistency across the entire ecosystem, regardless of the authentication or authorization framework. Either way, I thought that was an interesting exercise, and I'm curious if others have different opinions or perspectives on those findings. Thanks for reading, Lance [0] https://etherpad.opendev.org/p/consuming-system-scope [1] https://bit.ly/3cjUu6o My first few attempts at mapping this out were wrong, but I left links to the early drafts in the etherpad [2] I ignored a lot of assumptions about how clients interact with OpenStack, or how middleware works. This was purely a thought-experiment. -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.camilleri at zylacomputing.com Tue Mar 16 21:45:31 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Tue, 16 Mar 2021 22:45:31 +0100 Subject: [Horizon][Victoria][ops] - Scope based policies in Horizon not working In-Reply-To: References: <2dcfca03-f801-efb2-e311-0d04aa0b6c8c@zylacomputing.com> Message-ID: Dear Akihiro, thanks a lot for your input, we will wait for such projects to progress further until we take the scoped tokens approach then. In the meantime we will revert to the deprecated (original) json rules and deploy the policy.v3cloudsample.json so that we could add two new roles (cloud_admin and project_admin) Thanks once again On 16/03/2021 01:33, Akihiro Motoki wrote: > Hi, > > Horizon does not support the system-scoped token yet as of Victoria > and coming Wallaby. > > Horizon talks with various services. Some services support the > system-scoped token and > some do not. It is not easy to handle such kind of mixed situations. > As of Victoria, keystone and nova support the system-scoped token but > other services do not. > We can fall back to the project-scoped token with admin role, but many > considerations are > still needed. This is the reason of the slow progress in horizon. > > Victoria horizon also does not understand deprecated policy rules > defined as policy-in-code. > This was addressed in Wallaby, so we can use the same version of > policy.yaml in keystone and horizon. > However, horizon still does not handle the system-scoped token and > horizon still depends on > deprecated rules (rather than the system-scope aware rules). > The horizon team recognizes this is the most important feature gap and > will do the best to address it in Xena cycle. > > Thanks, > Akihiro Motoki (irc: amotoki) > > On Tue, Mar 16, 2021 at 8:02 AM Luke Camilleri > wrote: >> Hi Everyone, we had sent an email to this mailing list last week on an >> issue that we are having which we thought was being generated by >> keystone but on closer inspection it seems that this is coming actually >> from Horizon policy files. >> >> We would like to implement OpenStack using the scope based approach (v3) >> so that we could keep the main roles (reader,member and Admin) and use >> the scopes to differentiate the targets. 
>> >> We have implemented the below settings in /etc/keystone/keystone.conf: >> >> enforce_scope = true >> enforce_new_defaults = true >> >> All works well with the cli and the scopes except Horizon. We constantly >> receive errors in /var/log/keystone.log like: >> >> 2021-03-15 23:15:55.772 1611435 WARNING >> keystone.server.flask.application >> [req-a89277a2-8fb2-4fcb-b337-e76a73bb3af9 >> 1405ecf1ff7f4e4188648dbd1fb84c07 c584121f9d1a4334a3853c61bb5b9d93 - >> default default] You are not authorized to perform the requested action: >> identity:validate_token.: keystone.exception.ForbiddenAction: You are >> not authorized to perform the requested action: identity:validate_token >> >> but the same command (for example an "openstack domain list") works well >> from the CLI. >> >> We could identify the issue coming from the policy files shipped with >> the dashboard at /etc/openstack-dashboard/ as the rules >> within these policies are not scope based and in json format while >> according to the roadmap it seems that yaml is being used for the policy >> files and they should be scope based. >> >> When we try to convert the files to json to then upgrade them (to use >> the scope syntax) we receive the following errors: >> >> #oslopolicy-convert-json-to-yaml --config-file >> /etc/keystone/keystone.conf --namespace keystone --output-file >> keystone_policy.yaml --policy-file keystone_policy.json >> Traceback (most recent call last): >> File "/bin/oslopolicy-convert-json-to-yaml", line 10, in >> sys.exit(convert_policy_json_to_yaml()) >> File "/usr/lib/python3.6/site-packages/oslo_policy/generator.py", >> line 599, in convert_policy_json_to_yaml >> conf.output_file) >> File "/usr/lib/python3.6/site-packages/oslo_policy/generator.py", >> line 451, in _convert_policy_json_to_yaml >> scope_types=default_rule.scope_types) >> File "/usr/lib/python3.6/site-packages/oslo_policy/policy.py", line >> 1239, in __init__ >> scope_types=scope_types >> File "/usr/lib/python3.6/site-packages/oslo_policy/policy.py", line >> 1154, in __init__ >> self.description = description >> File "/usr/lib/python3.6/site-packages/oslo_policy/policy.py", line >> 1251, in description >> raise InvalidRuleDefault('Description is required') >> oslo_policy.policy.InvalidRuleDefault: Invalid policy rule default: >> Description is required. >> >> We believe that converting such files to yaml and then upgrading them >> for scope based access (v3) should solve our issues based on the fact >> that if we disable the enforce _scope and enforce_new_defaults we can >> have a fully functional setup (but not scope based) which defies the >> initial objective >> >> Thanks in advance for anyone that may be able to assist with this issue >> >> From senrique at redhat.com Tue Mar 16 23:52:58 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Tue, 16 Mar 2021 20:52:58 -0300 Subject: [cinder] New Cinder Bug Squad Meeting Message-ID: Hello, As discussed in the last meeting, Cinder will have an additional mini meeting to discuss bugs. [1] Cinder Bug Squad Meeting - Weekly on Wednesday at 1500 UTC in #openstack-cinder (IRC webclient ) - Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting Feel free to add your bugs to the Agenda if I haven't done so already. Cheers, Sofi [1] http://eavesdrop.openstack.org/#Cinder_Bug_Squad_Meeting -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
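Returning to the policy-file conversion error reported earlier in this digest (the oslopolicy-convert-json-to-yaml traceback): a hedged alternative, assuming the immediate goal is simply to obtain scope-aware YAML defaults to customize, is to regenerate the defaults from policy-in-code rather than converting the JSON files shipped with the dashboard. Run this in an environment where the relevant service's policy namespace is installed; "keystone" below is only an example and the same applies to nova, glance, and so on:

# Write a commented sample of the in-code defaults (including scope hints)
oslopolicy-sample-generator --namespace keystone --output-file keystone_policy.yaml

# Optionally list rules in an existing file that merely repeat the defaults,
# so only genuine local overrides need to be carried forward
oslopolicy-list-redundant --namespace keystone --config-file /etc/keystone/keystone.conf

Whether Horizon then honours system scope when evaluating such a file is a separate question, as the reply above explains, so this only covers producing the YAML itself.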
URL: From peter.matulis at canonical.com Wed Mar 17 00:54:11 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Tue, 16 Mar 2021 20:54:11 -0400 Subject: [charms] Trilio Charms 21.03 release is now available Message-ID: The 21.03 release of the Trilio charms is now available. Please see the Release Notes for full details: https://docs.openstack.org/charm-guide/latest/2103_Trilio.html == Highlights == * TrilioVault 4.1 The charms now support TrilioVault release 4.1. * OpenStack Ussuri The charms add support for OpenStack Ussuri on both Ubuntu 18.04 LTS (Bionic) and Ubuntu 20.04 LTS (Focal). This is in addition to current Bionic support for OpenStack Queens, Stein, and Train. == Thank you == Lots of thanks to the following people who contributed to this release via code changes, documentation updates and testing efforts. Alex Kavanagh Aurelien Lourot Chris MacNaughton David Ames Liam Young Marian Gasparovic Peter Matulis -- OpenStack Charms Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Mar 17 01:08:51 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Tue, 16 Mar 2021 22:08:51 -0300 Subject: [cinder] Bug deputy report for week of 2021-03-08 Message-ID: Hello, This is a bug report from 2021-03-09 to 2021-03-16. Critical: - https://bugs.launchpad.net/cinder/+bug/1832164: "SADeprecationWarning: The joinedload_all() function is deprecated, and will be removed in a future release. Please use method chaining with joinedload() instead". Assigned to Gorka Eguileor. High: - https://bugs.launchpad.net/cinder/+bug/1919161: "Automatic quota refresh counting temporary volumes". Assigned to Gorka Eguileor. Medium: - https://bugs.launchpad.net/cinder/+bug/1918229: "Nimble free space calculation". Unassigned but looks like Ajitha Robert may be working on this. Low: - https://bugs.launchpad.net/cinder/+bug/1918932: "volume delete rejection message doesn't include awaiting-transfer state". (low-hanging-fruit) Unassigned. - https://bugs.launchpad.net/cinder/+bug/1918307: " api-ref: Make documentation of HTTP 403 consistent". Unassigned. Incomplete: - https://bugs.launchpad.net/cinder/+bug/1918449: " ensure_export model_update ignored ". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1918879: "id of encryption for volume type not verified". Unassigned. Not a bug:- Feel free to reply/reach me if I missed something. Regards Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Wed Mar 17 05:42:25 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 16 Mar 2021 22:42:25 -0700 Subject: [manila][requirements] FFE requested for python-manilaclient In-Reply-To: References: <20210315224226.v7uqyt2solsfkhtv@mthode.org> Message-ID: On Tue, Mar 16, 2021 at 1:30 AM Herve Beraud wrote: > > As already discussed with you on IRC if these changes are merged ASAP and if Matthew is Ok from a requirements perspective then I'm ok to proceed. > > A version of python-manilaclient client have been frozen yesterday so now we can take advantage of the RFE mechanismes to land these changes that face a flaky CI. Thank you so much for your patience, Hervé. We found a bug on the server side that was causing the client CI to be flaky. It's fixed now, and all changes have merged. I requested a release: https://review.opendev.org/c/openstack/releases/+/780999 > > Le lun. 
15 mars 2021 à 23:59, Goutham Pacha Ravi a écrit : >> >> On Mon, Mar 15, 2021 at 3:47 PM Matthew Thode wrote: >> > >> > On 21-03-15 14:48:12, Goutham Pacha Ravi wrote: >> > > Hello, >> > > >> > > We were tracking one last feature inclusion into python-manilaclient >> > > for the wallaby release: >> > > https://review.opendev.org/c/openstack/python-manilaclient/+/775020 >> > > >> > > The change does not introduce any new requirements, but adds new SDK >> > > methods and shell implementations for end users to harness a feature >> > > addition in manila to be able to update their security services. >> > > >> > > The release team was very gracious in allowing us to wrap up code >> > > reviews pertaining to the above change over the weekend; however, the >> > > gate was uncooperative this morning by pointing us to lingering >> > > sporadic failures that are a combination of bad test case logic as >> > > well as occasional test environment setup issues. (Zuul is awesome, >> > > but it has a way to call out our procrastinations) >> > > >> > > No projects that depend on python-manilaclient need this change in >> > > their current form because it is net new feature functionality >> > > (existing SDK/CLI functionality is unaltered). However, I thought i >> > > should check here before requesting a re-release of >> > > python-manilaclient under the Wallaby cycle. >> > > >> > > Thanks, >> > > Goutham >> > > >> > >> > Fine by me from a requirements perspective. Thanks. I imagine this >> > will be a version bump on top of https://review.opendev.org/780667 then? >> >> Yes. I worked with Hervé Beraud to get that release in, and pursue >> this FFE/RFE in parallel to include >> https://review.opendev.org/c/openstack/python-manilaclient/+/775020 >> into 2.6.0. >> >> >> > >> > -- >> > Matthew Thode >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From mark at stackhpc.com Wed Mar 17 08:51:47 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 17 Mar 2021 08:51:47 +0000 Subject: [kolla] Upgrading from Centos7 Train to Centos8 Ussuri In-Reply-To: References: Message-ID: On Tue, 16 Mar 2021 at 18:54, Braden, Albert wrote: > > I’m testing the Train7->Ussuri8 upgrade on a heat stack, following this document: Hi Albert, The procedure takes you from Train on CentOS 7 to Train on CentOS 8. A Ussuri upgrade would be a separate step. I'll try to make that clearer in the docs. 
Mark > > > > OpenStack Docs: CentOS 8 > > > > I used the instructions here to successfully remove a controller from the cluster: > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > Now I’m ready to add one, using the instructions here: > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#adding-new-controllers > > > > They say to run bootstrap-servers: > > > > kolla-ansible -i bootstrap-servers [ --limit ] > > > > The question is, how to run it? My build server is running Centos 7 and has Python 2.7.5. Should I just download Ussuri to my existing build server, or do I need to build a new Centos 8 build server with Python 3? > > > > I read the “potential issues” text but it’s not obvious whether it is safe to run Ussuri bootstrap-servers on Train hosts. If I don’t run bootstrap-servers on the Train controllers, then how does the new Ussuri controller get integrated (RMQ, MariaDB, etc.)? > > > > > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From mark at stackhpc.com Wed Mar 17 08:54:31 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 17 Mar 2021 08:54:31 +0000 Subject: [kolla] Upgrading from Centos7 Train to Centos8 Ussuri In-Reply-To: References: Message-ID: On Wed, 17 Mar 2021 at 08:51, Mark Goddard wrote: > > On Tue, 16 Mar 2021 at 18:54, Braden, Albert > wrote: > > > > I’m testing the Train7->Ussuri8 upgrade on a heat stack, following this document: > > Hi Albert, > > The procedure takes you from Train on CentOS 7 to Train on CentOS 8. A > Ussuri upgrade would be a separate step. I'll try to make that clearer > in the docs. > > Mark It does include the following near the beginning: The Train release is the last release to support CentOS 7, and the first to support CentOS 8. CentOS 7 users wishing to upgrade beyond Train must therefore migrate to CentOS 8. The upgrade path looks like this: * Stein & earlier (CentOS 7) * Train (CentOS 7) * Train (CentOS 8) * Ussuri & later (CentOS 8) > > > > > > > > > OpenStack Docs: CentOS 8 > > > > > > > > I used the instructions here to successfully remove a controller from the cluster: > > > > > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > > > > > Now I’m ready to add one, using the instructions here: > > > > > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#adding-new-controllers > > > > > > > > They say to run bootstrap-servers: > > > > > > > > kolla-ansible -i bootstrap-servers [ --limit ] > > > > > > > > The question is, how to run it? My build server is running Centos 7 and has Python 2.7.5. Should I just download Ussuri to my existing build server, or do I need to build a new Centos 8 build server with Python 3? 
> > > > > > > > I read the “potential issues” text but it’s not obvious whether it is safe to run Ussuri bootstrap-servers on Train hosts. If I don’t run bootstrap-servers on the Train controllers, then how does the new Ussuri controller get integrated (RMQ, MariaDB, etc.)? > > > > > > > > > > > > > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > > > > > The contents of this e-mail message and > > any attachments are intended solely for the > > addressee(s) and may contain confidential > > and/or legally privileged information. If you > > are not the intended recipient of this message > > or if this message has been addressed to you > > in error, please immediately alert the sender > > by reply e-mail and then delete this message > > and any attachments. If you are not the > > intended recipient, you are notified that > > any use, dissemination, distribution, copying, > > or storage of this message or any attachment > > is strictly prohibited. From hberaud at redhat.com Wed Mar 17 09:08:55 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 17 Mar 2021 10:08:55 +0100 Subject: [manila][requirements] FFE requested for python-manilaclient In-Reply-To: References: <20210315224226.v7uqyt2solsfkhtv@mthode.org> Message-ID: You're welcome :) Thank you for your commitment and for your investment around this topic! I approved your patch and I'll ask another review to my comrades. Le mer. 17 mars 2021 à 06:42, Goutham Pacha Ravi a écrit : > On Tue, Mar 16, 2021 at 1:30 AM Herve Beraud wrote: > > > > As already discussed with you on IRC if these changes are merged ASAP > and if Matthew is Ok from a requirements perspective then I'm ok to proceed. > > > > A version of python-manilaclient client have been frozen yesterday so > now we can take advantage of the RFE mechanismes to land these changes that > face a flaky CI. > > Thank you so much for your patience, Hervé. We found a bug on the > server side that was causing the client CI to be flaky. > It's fixed now, and all changes have merged. > > I requested a release: > https://review.opendev.org/c/openstack/releases/+/780999 > > > > > > Le lun. 15 mars 2021 à 23:59, Goutham Pacha Ravi > a écrit : > >> > >> On Mon, Mar 15, 2021 at 3:47 PM Matthew Thode > wrote: > >> > > >> > On 21-03-15 14:48:12, Goutham Pacha Ravi wrote: > >> > > Hello, > >> > > > >> > > We were tracking one last feature inclusion into python-manilaclient > >> > > for the wallaby release: > >> > > https://review.opendev.org/c/openstack/python-manilaclient/+/775020 > >> > > > >> > > The change does not introduce any new requirements, but adds new SDK > >> > > methods and shell implementations for end users to harness a feature > >> > > addition in manila to be able to update their security services. > >> > > > >> > > The release team was very gracious in allowing us to wrap up code > >> > > reviews pertaining to the above change over the weekend; however, > the > >> > > gate was uncooperative this morning by pointing us to lingering > >> > > sporadic failures that are a combination of bad test case logic as > >> > > well as occasional test environment setup issues. (Zuul is awesome, > >> > > but it has a way to call out our procrastinations) > >> > > > >> > > No projects that depend on python-manilaclient need this change in > >> > > their current form because it is net new feature functionality > >> > > (existing SDK/CLI functionality is unaltered). 
However, I thought i > >> > > should check here before requesting a re-release of > >> > > python-manilaclient under the Wallaby cycle. > >> > > > >> > > Thanks, > >> > > Goutham > >> > > > >> > > >> > Fine by me from a requirements perspective. Thanks. I imagine this > >> > will be a version bump on top of https://review.opendev.org/780667 > then? > >> > >> Yes. I worked with Hervé Beraud to get that release in, and pursue > >> this FFE/RFE in parallel to include > >> https://review.opendev.org/c/openstack/python-manilaclient/+/775020 > >> into 2.6.0. > >> > >> > >> > > >> > -- > >> > Matthew Thode > >> > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Wed Mar 17 09:13:22 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Wed, 17 Mar 2021 10:13:22 +0100 Subject: [telemetry][ptg] Telemetry PTG planning Message-ID: Hi there, for Telemetry, we had a short planning meeting after PTG last year. If you are interested in Telemetry, or have something Telemetry related to talk about, please add yourself and the topic to the planning etherpad [1]. Matthias [1] https://etherpad.opendev.org/p/telemetry-xena-ptg From skaplons at redhat.com Wed Mar 17 09:23:04 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 17 Mar 2021 10:23:04 +0100 Subject: [neutron] Unit test jobs broken Message-ID: <20210317092304.lzkqj66r35pbhjbz@p1.localdomain> Hi, Our UT gate jobs are now broken due to new neutron-lib release. Fix for that is already proposed in [1]. So if Your UT jobs are failing like e.g. 
in [2] please don't recheck Your patch before [1] will be merged. [1] https://review.opendev.org/c/openstack/neutron/+/780802 [2] https://8ee33ec1a424812a1857-16390f05fde22eb723929856dcc38fcd.ssl.cf5.rackcdn.com/779878/3/check/openstack-tox-py36/26ea91f/testr_results.html -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From eblock at nde.ag Wed Mar 17 09:30:18 2021 From: eblock at nde.ag (Eugen Block) Date: Wed, 17 Mar 2021 09:30:18 +0000 Subject: How to detach cinder volume In-Reply-To: Message-ID: <20210317093018.Horde.bLqxsURyJK6mc5BSZ0li1nv@webmail.nde.ag> Hi, if the volume was in use by the vm you'll have to reboot it to properly release all open files. As far as I know a snapshot of an instance booted from volume will only snapshot the respective volume. That should still be possible, but I guess it would be inconsistent because of the open files. Is reboot not an option? Regards, Eugen Zitat von "Md. Hejbul Tawhid MUNNA" : > Hi, > > We are using openstack rocky. > > One of our openstack vm was running with 3 volume , 1 is bootable and > another 2 is normal volume . We have deleted 1 normal volume without > properly detaching. > > So volume is deleted but instance is still showing 3 volume is attached. > Now we can't snapshot the instance and facing some others issue. > > Please advise how we can detach the volume(deleted) from the instance > > Note: We have reset state volume attached status to detached and delete the > volume. > > Regards, > Munna From lyarwood at redhat.com Wed Mar 17 10:29:38 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 17 Mar 2021 10:29:38 +0000 Subject: How to detach cinder volume In-Reply-To: References: Message-ID: On Mon, 15 Mar 2021 at 15:38, Md. Hejbul Tawhid MUNNA wrote: > > Hi, > > We are using openstack rocky. > > One of our openstack vm was running with 3 volume , 1 is bootable and another 2 is normal volume . We have deleted 1 normal volume without properly detaching. > > So volume is deleted but instance is still showing 3 volume is attached. Now we can't snapshot the instance and facing some others issue. > > Please advise how we can detach the volume(deleted) from the instance > > Note: We have reset state volume attached status to detached and delete the volume. Ewww I wish we made this harder. Please try to avoid resetting states like this unless you really have to. The cleanest way of detaching the volume from the instance is going to be to mark the volume attachment as deleted within the Nova database and hard rebooting the instance. $ mysql nova_cell1 MariaDB [nova_cell1]> update block_device_mapping set deleted = id where volume_id = '$volume_id' and instance_uuid = '$instance_uuid'; Confirm the volume is no longer listed as attached and then hard reboot: $ openstack server volume list $instance $ openstack server reboot --hard $instance Depending on your volume backend you will likely need to manually clean up any now stale volume connections on the host. For example, deleting any mpath devices etc. You might want to consider a full compute host reboot to ensure things are clean. 
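For the stale volume connection cleanup mentioned just above, a hedged sketch for an iSCSI/multipath backend follows; the WWID, target IQN and portal are placeholders, and other backends need different steps:

# Identify multipath maps that have lost all of their paths
multipath -ll

# Flush a specific stale multipath map (placeholder WWID)
multipath -f 360000000000000000e00000000010001

# List iSCSI sessions, then log out of a target that no longer backs any volume
iscsiadm -m session
iscsiadm -m node -T iqn.2010-10.org.example:volume-0001 -p 192.0.2.10:3260 --logout

As noted, a full reboot of the compute host is the safer option if there is any doubt about which devices are still in use.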
Anyway, hope this helps, Lee From oliver.wenz at dhbw-mannheim.de Wed Mar 17 11:34:50 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Wed, 17 Mar 2021 12:34:50 +0100 (CET) Subject: [glance][openstack-ansible] Snapshots disappear during saving Message-ID: <449121094.37359.1615980890360@ox.dhbw-mannheim.de> Hi! We are currently experiencing problems with our OpenStack Ansible Victoria cloud when trying to create snapshots from instances. In some cases, everything works but often the pending snapshots just disappear. When this happens, the following glance-api.service errors show up: Mar 17 08:33:12 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:33:11.975 85 INFO glance.api.v2.image_data [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Unable to create trust: no such option collect_timing in group [keystone_authtoken] Use the existing user token. Mar 17 08:35:15 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:35:15.283 85 INFO swiftclient [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] REQ: curl -i http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images/1b9af05a-f7c9-4315-9354-e09f1df66321-00001 -X PUT -H "X-Auth-Token: gAAAAABgUb7IBvZ_..." Mar 17 08:35:15 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:35:15.285 85 INFO swiftclient [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP STATUS: 504 Gateway Time-out Mar 17 08:35:15 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:35:15.285 85 INFO swiftclient [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP HEADERS: {'content-length': '92', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'} Mar 17 08:35:15 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:35:15.286 85 INFO swiftclient [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP BODY: b"

<html><body><h1>504 Gateway Time-out</h1>
\nThe server didn't respond in time.\n\n" Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:35:16.306 85 ERROR glance_store._drivers.swift.store [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Error during chunked upload to backend, deleting stale chunks.: swiftclient.exceptions.ClientException: put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to reset contents for reupload. Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:35:16.320 85 ERROR glance_store._drivers.swift.store [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Failed to add object to Swift. Got error from Swift: put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to reset contents for reupload..: swiftclient.exceptions.ClientException: put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to reset contents for reupload. Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:35:16.327 85 ERROR glance.api.v2.image_data [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Failed to upload image data due to internal error: glance_store.exceptions.BackendException: Failed to add object to Swift. Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Caught error: Failed to add object to Swift. Got error from Swift: put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to reset contents for reupload..: glance_store.exceptions.BackendException: Failed to add object to Swift. Got error from Swift: put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to reset contents for reupload.. 
2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi Traceback (most recent call last): 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", line 1014, in add 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi self._delete_stale_chunks( 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi self.force_reraise() 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi six.reraise(self.type_, self.value, self.tb) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi raise value 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", line 1003, in add 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi manager.get_connection().put_object( 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", line 1960, in put_object 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi return self._retry(reset_func, put_object, container, obj, contents, 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", line 1843, in _retry 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi reset_func(func, *args, **kwargs) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", line 1940, in _default_reset 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi raise ClientException('put_object(%r, %r, ...) failure and no ' 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi swiftclient.exceptions.ClientException: put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to reset contents for reupload. 
2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi During handling of the above exception, another exception occurred: 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi Traceback (most recent call last): 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/common/wsgi.py", line 1347, in __call__ 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi action_result = self.dispatch(self.controller, action, 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/common/wsgi.py", line 1391, in dispatch 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi return method(*args, **kwargs) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/common/utils.py", line 416, in wrapped 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi return func(self, req, *args, **kwargs) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/api/v2/image_data.py", line 298, in upload 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi self._restore(image_repo, image) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi self.force_reraise() 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi six.reraise(self.type_, self.value, self.tb) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi raise value 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/api/v2/image_data.py", line 163, in upload 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi image.set_data(data, size, backend=backend) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/domain/proxy.py", line 208, in set_data 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi self.base.set_data(data, size, backend=backend, set_active=set_active) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/notifier.py", line 501, in set_data 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi _send_notification(notify_error, 'image.upload', msg) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi self.force_reraise() 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi six.reraise(self.type_, self.value, self.tb) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-17 08:35:16.425 85 ERROR 
glance.common.wsgi raise value 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/notifier.py", line 447, in set_data 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi self.repo.set_data(data, size, backend=backend, 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/api/policy.py", line 198, in set_data 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi return self.image.set_data(*args, **kwargs) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/quota/__init__.py", line 318, in set_data 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi self.image.set_data(data, size=size, backend=backend, 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/location.py", line 567, in set_data 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi self._upload_to_store(data, verifier, backend, size) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/location.py", line 458, in _upload_to_store 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi multihash, loc_meta) = self.store_api.add_with_multihash( 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/multi_backend.py", line 398, in add_with_multihash 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi return store_add_to_backend_with_multihash( 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/multi_backend.py", line 480, in store_add_to_backend_with_multihash 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi (location, size, checksum, multihash, metadata) = store.add( 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/driver.py", line 279, in add_adapter 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi metadata_dict) = store_add_fun(*args, **kwargs) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/capabilities.py", line 176, in op_checker 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi return store_op_fun(store, *args, **kwargs) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", line 1082, in add 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi raise glance_store.BackendException(msg) 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi glance_store.exceptions.BackendException: Failed to add object to Swift. 2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi Got error from Swift: put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to reset contents for reupload.. 
2021-03-17 08:35:16.425 85 ERROR glance.common.wsgi Mar 17 08:35:16 infra1-glance-container-99614ac2 uwsgi[85]: Wed Mar 17 08:35:16 2021 - uwsgi_response_writev_headers_and_body_do(): Connection reset by peer [core/writer.c line 306] during PUT /v2/images/1b9af05a-f7c9-4315-9354-e09f1df66321/file (192.168.110.215) Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 08:35:16.441 85 CRITICAL glance [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Unhandled error: OSError: write error 2021-03-17 08:35:16.441 85 ERROR glance OSError: write error 2021-03-17 08:35:16.441 85 ERROR glance It seems as if the problems occur more often with large instances, i.e. there are fewer problems with new Ubuntu 20.04 instances but after an 'apt-upgrade' on the instance, the error occurs every time a snapshot is taken. Any help is much appreciated! Kind regards, Oliver From C-Albert.Braden at charter.com Wed Mar 17 12:26:44 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Wed, 17 Mar 2021 12:26:44 +0000 Subject: [EXTERNAL] Re: [kolla] Upgrading from Centos7 Train to Centos8 Ussuri In-Reply-To: References: Message-ID: <647cf8f6163c40d38112beca0e7b54ae@ncwmexgp009.CORP.CHARTERCOM.com> Is there a separate document for Train->Ussuri? Maybe the "Ussuri & later (CentOS 8)" text could be a link. -----Original Message----- From: Mark Goddard Sent: Wednesday, March 17, 2021 4:52 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Upgrading from Centos7 Train to Centos8 Ussuri CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Tue, 16 Mar 2021 at 18:54, Braden, Albert wrote: > > I’m testing the Train7->Ussuri8 upgrade on a heat stack, following this document: Hi Albert, The procedure takes you from Train on CentOS 7 to Train on CentOS 8. A Ussuri upgrade would be a separate step. I'll try to make that clearer in the docs. Mark > > > > OpenStack Docs: CentOS 8 > > > > I used the instructions here to successfully remove a controller from the cluster: > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > Now I’m ready to add one, using the instructions here: > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#adding-new-controllers > > > > They say to run bootstrap-servers: > > > > kolla-ansible -i bootstrap-servers [ --limit ] > > > > The question is, how to run it? My build server is running Centos 7 and has Python 2.7.5. Should I just download Ussuri to my existing build server, or do I need to build a new Centos 8 build server with Python 3? > > > > I read the “potential issues” text but it’s not obvious whether it is safe to run Ussuri bootstrap-servers on Train hosts. If I don’t run bootstrap-servers on the Train controllers, then how does the new Ussuri controller get integrated (RMQ, MariaDB, etc.)? > > > > > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. 
If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From james.slagle at gmail.com Wed Mar 17 13:17:23 2021 From: james.slagle at gmail.com (James Slagle) Date: Wed, 17 Mar 2021 09:17:23 -0400 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> <20210309155219.cp3gfvwrywy2huot@gchamoul-mac> Message-ID: On Tue, Mar 9, 2021 at 11:05 AM Marios Andreou wrote: > > > On Tue, Mar 9, 2021 at 5:52 PM Gaël Chamoulaud > wrote: > >> On 09/Mar/2021 17:46, Marios Andreou wrote: >> > Is your proposal that David is added to the tripleo-core group [1] with >> the >> > understanding that voting rights will be exercised only in the >> following repos: >> > tripleo-validations, validations-common and validations-libs? >> >> Yes exactly! Sorry for the confusion. >> > > > ACK no problem ;) As I said we need to be transparent and fair towards > everyone. > > +1 from me to your proposal. > +1!. Thanks for the effort David! -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Mar 17 13:29:40 2021 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 17 Mar 2021 07:29:40 -0600 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Message-ID: +1 On Tue, Mar 9, 2021 at 7:59 AM Gaël Chamoulaud wrote: > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews > and he > is a source of great ideas for the future of the Validation Framework. > That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team > next > week. > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Wed Mar 17 14:06:50 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 17 Mar 2021 14:06:50 +0000 Subject: [EXTERNAL] Re: [kolla] Upgrading from Centos7 Train to Centos8 Ussuri In-Reply-To: <647cf8f6163c40d38112beca0e7b54ae@ncwmexgp009.CORP.CHARTERCOM.com> References: <647cf8f6163c40d38112beca0e7b54ae@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: On Wed, 17 Mar 2021 at 12:27, Braden, Albert wrote: > > Is there a separate document for Train->Ussuri? Maybe the "Ussuri & later (CentOS 8)" text could be a link. That's just a normal major upgrade. https://docs.openstack.org/kolla-ansible/latest/user/operating-kolla.html > > -----Original Message----- > From: Mark Goddard > Sent: Wednesday, March 17, 2021 4:52 AM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: [kolla] Upgrading from Centos7 Train to Centos8 Ussuri > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > On Tue, 16 Mar 2021 at 18:54, Braden, Albert > wrote: > > > > I’m testing the Train7->Ussuri8 upgrade on a heat stack, following this document: > > Hi Albert, > > The procedure takes you from Train on CentOS 7 to Train on CentOS 8. A > Ussuri upgrade would be a separate step. I'll try to make that clearer > in the docs. > > Mark > > > > > > > > > OpenStack Docs: CentOS 8 > > > > > > > > I used the instructions here to successfully remove a controller from the cluster: > > > > > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > > > > > Now I’m ready to add one, using the instructions here: > > > > > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#adding-new-controllers > > > > > > > > They say to run bootstrap-servers: > > > > > > > > kolla-ansible -i bootstrap-servers [ --limit ] > > > > > > > > The question is, how to run it? My build server is running Centos 7 and has Python 2.7.5. Should I just download Ussuri to my existing build server, or do I need to build a new Centos 8 build server with Python 3? > > > > > > > > I read the “potential issues” text but it’s not obvious whether it is safe to run Ussuri bootstrap-servers on Train hosts. If I don’t run bootstrap-servers on the Train controllers, then how does the new Ussuri controller get integrated (RMQ, MariaDB, etc.)? > > > > > > > > > > > > > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > > > > > The contents of this e-mail message and > > any attachments are intended solely for the > > addressee(s) and may contain confidential > > and/or legally privileged information. If you > > are not the intended recipient of this message > > or if this message has been addressed to you > > in error, please immediately alert the sender > > by reply e-mail and then delete this message > > and any attachments. If you are not the > > intended recipient, you are notified that > > any use, dissemination, distribution, copying, > > or storage of this message or any attachment > > is strictly prohibited. > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. 
If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From dvd at redhat.com Wed Mar 17 14:33:57 2021 From: dvd at redhat.com (David Vallee Delisle) Date: Wed, 17 Mar 2021 10:33:57 -0400 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Message-ID: +1 DVD On Wed, Mar 17, 2021 at 9:33 AM Alex Schultz wrote: > +1 > > On Tue, Mar 9, 2021 at 7:59 AM Gaël Chamoulaud > wrote: > >> Hi TripleO Devs, >> >> David is already a key member of our team since a long time now, he >> provided all the needed ansible roles for the Validation Framework into >> tripleo-ansible-operator. He continuously provides excellent code reviews >> and he >> is a source of great ideas for the future of the Validation Framework. >> That's >> why we would highly benefit from his addition to the core reviewer team. >> >> Assuming that there are no objections, we will add David to the core team >> next >> week. >> >> Thanks, David, for your excellent work! >> >> -- >> Gaël Chamoulaud - (He/Him/His) >> .::. Red Hat .::. OpenStack .::. >> .::. DFG:DF Squad:VF .::. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Wed Mar 17 14:47:44 2021 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 17 Mar 2021 16:47:44 +0200 Subject: [horizon] Welcome Tatiana Ovchinnikova in horizon-core team Message-ID: Hi team, As agreed during the horizon meeting [1] last week, I've added Tatiana to horizon-core team. Tatiana, thanks for your contributions and welcome back to the team! [1] http://eavesdrop.openstack.org/meetings/horizon/2021/horizon.2021-03-10-15.01.log.html Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From root.mch at gmail.com Wed Mar 17 14:52:24 2021 From: root.mch at gmail.com (=?UTF-8?Q?=C4=B0zzettin_Erdem?=) Date: Wed, 17 Mar 2021 17:52:24 +0300 Subject: [MURANO] Can not deploy environment on existing network Message-ID: Hi everyone, I am tryng to use Murano on OSA-stable/ussuri. I have encountered an error; when I try to deploy an environment on existing network, it creates the VM but do not attach floating IP to it. In murano-agent.log on the vm there is "rabbitmq queue not found" error -->http://paste.openstack.org/show/803667/ , rabbitmq logs ---> http://paste.openstack.org/show/803668/, murano-engine logs ---> http://paste.openstack.org/show/803669/, murano.conf on containers----> http://paste.openstack.org/show/803672/ I can attach floating IP to the VM after Murano fails and also I can deploy environment if I use "create new network" option. Do you have any suggestions about it? Thanks, İzzettin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pangliye at inspur.com Wed Mar 17 08:25:27 2021 From: pangliye at inspur.com (Liye Pang(逄立业)) Date: Wed, 17 Mar 2021 08:25:27 +0000 Subject: Re: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community In-Reply-To: References: <5c911ca6ab4cbcf43da283710ac19ae8@sslemail.net> Message-ID: Sorry for the late reply. Venus can already be installed through devstack, so you can try it. In addition, we will also consider supporting it in kolla-ansible. From: Laurent Dumont Sent: January 12, 2021 12:57 To: Liye Pang(逄立业) Cc: openstack-discuss at lists.openstack.org Subject: Re: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community This seems really interesting. Tracing events with request-ids is something that is quite useful. What is the current state? Can it be deployed by a third party? On Sun, Jan 10, 2021 at 4:01 PM Liye Pang(逄立业) > wrote: Hello everyone, after feedback from a large number of operations and maintenance personnel in InCloud OpenStack, we developed the log management project “Venus” for OpenStack and have contributed it to the OpenStack community. The following is an introduction to “Venus”. If there is interest in the community, we are interested in proposing it to become an official OpenStack project in the future. Background In the day-to-day operation and maintenance of a large-scale cloud platform, the following problems are encountered: * Log queries become time-consuming as the number of servers grows into the thousands. * It is difficult to retrieve logs, since there are many modules in the platform, e.g. system services, compute, storage, network and other platform services. * The large volume and dispersion of logs make faults difficult to discover. * Because the cloud platform is distributed and its components interact with each other, with logs scattered across components, locating problems takes more time. About Venus Based on the key requirements of OpenStack for log storage, retrieval, analysis and so on, we introduced the Venus project, a unified log management module. This module provides a one-stop solution for log collection, cleaning, indexing, analysis, alarms, visualization, report generation and other needs, helping operators and maintainers to quickly solve retrieval problems, grasp the operational health of the platform, and improve the management capabilities of the cloud platform. Additionally, this module plans to use machine learning algorithms to quickly locate IT failures and their root causes, and to improve operation and maintenance efficiency. Application scenario Venus plays a key role in the following scenarios: * Retrieval: Provide a simple and easy-to-use way to retrieve all logs and their context. * Analysis: Realize log association and field value statistics, and provide multi-scene and multi-dimensional visual analysis reports. * Alerts: Convert retrievals into active alerts to find errors in massive logs. * Issue location: Establish chain relationships and knowledge graphs to quickly locate problems. Overall structure The architecture of the log management system based on Venus and Elasticsearch is as follows: Diagram 0: Architecture of Venus venus_api: API module, provides the API/REST API service. venus_manager: Internal timing task module that realizes the core functions of the log system.
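For anyone who wants to give it a quick try via devstack, a minimal local.conf fragment could look like the sketch below. This is only an illustration: the plugin name "venus" is an assumption, so check the devstack/ directory in the Venus repository for the actual plugin name and any extra settings it requires before using it.

    # local.conf fragment (plugin name assumed; verify against the repo)
    [[local|localrc]]
    enable_plugin venus https://opendev.org/inspur/venus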
Current progress The current progress of the Venus project is as follows: * Collection: Develop fluentd collection tasks based on collectd, with read, filter, format and send plug-ins for OpenStack, operating systems, and platform services, etc. * Index: Deal with multi-dimensional index data in Elasticsearch, and provide a more concise and comprehensive authentication interface to return query results. * Analysis: Analyze and display related module errors, MariaDB connection errors, and RabbitMQ connection errors. * Alerts: Develop alarm task code to set thresholds for the number of error logs of different modules at different times, and provide alarm and notification services. * Location: Develop the call chain analysis function based on the global_requested series, which can show the execution sequence, time and error information, etc., and provide an export operation. * Management: Develop configuration management functions in the log system, such as alarm threshold setting, timing task management, and log retention time setting, etc. Application examples Two examples of Venus application scenarios are as follows. 1. A virtual machine creation operation was performed on the cloud platform and it was found that the virtual machine was not created successfully. First, we can find the request id of the operation and jump to the virtual machine creation call chain page. Then, we can query the calling process, and view and download the details of the log of the call. 2. In the cloud platform, the error log of each module can be converted into alarms to remind the users. Further, we can retrieve the details of the error logs and error log statistics. Next step The next step of the Venus project is as follows: * Collection: In addition to fluentd, other collection plugins such as logstash will be integrated. * Analysis: Explore more operation and maintenance scenarios, and conduct statistical analysis and alarms on key data. * Display: The configuration, analysis and alarms of Venus will be integrated into horizon in the form of a plugin. * Location: Form log clusters and construct a knowledge map, and integrate an algorithm class library to locate the root cause of faults. Venus Project Registry Venus library: https://opendev.org/inspur/venus You can grab the source code using the following git command: git clone https://opendev.org/inspur/venus.git Venus Demo Youtu.be: https://youtu.be/mE2MoEx3awM -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3786 bytes Desc: not available URL: From marcin.juszkiewicz at linaro.org Wed Mar 17 15:12:35 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Wed, 17 Mar 2021 16:12:35 +0100 Subject: [kolla][ptg] Xena PTG slots In-Reply-To: References: Message-ID: <376b6b71-9b56-5f10-9c37-cb3094e15be9@linaro.org> On 16.03.2021 at 11:49, Mark Goddard wrote: > I would be interested to hear from anyone who would be interested in > joining one or more slots in the 6:00 - 8:00 UTC window on one or more > days. If there is a need for a session then I will be present. From munnaeebd at gmail.com Wed Mar 17 15:48:39 2021 From: munnaeebd at gmail.com (Md. Hejbul Tawhid MUNNA) Date: Wed, 17 Mar 2021 21:48:39 +0600 Subject: How to detach cinder volume In-Reply-To: References: Message-ID: Dear Lee, Thank you for your reply. We will try this solution by changing the DB. Regards, Munna On Wed, 17 Mar 2021, 16:29 Lee Yarwood, wrote: > On Mon, 15 Mar 2021 at 15:38, Md. Hejbul Tawhid MUNNA > wrote: > > > > Hi, > > > > We are using OpenStack Rocky. > > > > One of our OpenStack VMs was running with 3 volumes: 1 bootable and > another 2 normal volumes. We deleted 1 normal volume without > properly detaching it. > > > > So the volume is deleted but the instance still shows 3 volumes attached. > Now we can't snapshot the instance and are facing some other issues. > > > > Please advise how we can detach the (deleted) volume from the instance. > > > > Note: We reset the volume's attach status to detached and deleted > the volume. > > Ewww I wish we made this harder. Please try to avoid resetting states > like this unless you really have to. > > The cleanest way of detaching the volume from the instance is going to > be to mark the volume attachment as deleted within the Nova database > and hard rebooting the instance. > > $ mysql nova_cell1 > MariaDB [nova_cell1]> update block_device_mapping set deleted = id > where volume_id = '$volume_id' and instance_uuid = '$instance_uuid'; > > Confirm the volume is no longer listed as attached and then hard reboot: > > $ openstack server volume list $instance > $ openstack server reboot --hard $instance > > Depending on your volume backend you will likely need to manually > clean up any now stale volume connections on the host. For example, > deleting any mpath devices etc. You might want to consider a full > compute host reboot to ensure things are clean. > > Anyway, hope this helps, > > Lee > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Mar 17 15:52:10 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 17 Mar 2021 16:52:10 +0100 Subject: [ironic] [infra] Making Glean work with IPA for static IP assignment In-Reply-To: References: <20201125020901.GA522326@fedora19.localdomain> <20201126011956.GB522326@fedora19.localdomain> Message-ID: Hi, Getting back to this, sorry for the delay. Yes, I'm pretty sure it's NetworkManager, not something else. 
Here are relevant parts of boot logs from a recent runs: [ 63.613821] NetworkManager[244]: [1615995259.7778] NetworkManager (version 1.26.0-12.el8_3) is starting... (for the first time) [ 71.637264] systemd[1]: Starting Glean for interface enp1s0 with NetworkManager... Starting Glean for interface enp1s0 with NetworkManager... [ 77.622901] glean.sh[327]: mount: /mnt/config: /dev/sr0 already mounted on /mnt/config. !!! As you see, Glean starts quite early, but then... !!! [ 92.699494] NetworkManager[244]: [1615995288.9848] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1) [ 93.040232] NetworkManager[244]: [1615995289.3256] manager: (enp1s0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) [ 94.434450] NetworkManager[244]: [1615995290.7198] device (enp1s0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') [ 94.713545] NetworkManager[244]: [1615995290.9986] device (enp1s0): carrier: link connected [ 96.487825] NetworkManager[244]: [1615995292.7699] device (enp1s0): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed') [ 96.712608] NetworkManager[244]: [1615995292.9979] policy: auto-activating connection 'Wired connection 1' (cabef811-9cf9-3d92-9391-95712a3d3481) !!! This auto-activation triggers DHCP !!! [ 97.789768] NetworkManager[244]: [1615995294.0750] device (enp1s0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') [ 98.084735] NetworkManager[244]: [1615995294.3699] dhcp4 (enp1s0): activation: beginning transaction (timeout in 30 seconds) [ 98.303574] NetworkManager[244]: [1615995294.5883] dhcp4 (enp1s0): dhclient started with pid 382 [ 108.882870] NetworkManager[244]: [1615995305.0369] dhcp4 (enp1s0): address 192.168.122.105 !!! 10 seconds later we have the IP address configured !!! [ 126.636082] glean.sh[326]: DEBUG:glean:Starting glean [ 127.885587] glean.sh[326]: DEBUG:glean:Only considering interface enp1s0 from arguments [ 127.908001] glean.sh[326]: DEBUG:glean:Interface matched: enp1s0 (52:54:00:9e:b1:16) [ 127.920045] glean.sh[326]: DEBUG:glean:52:54:00:9e:b1:16 configured via config-drive [ 128.635484] systemd[1]: Started Glean for interface enp1s0 with NetworkManager. !!! 20 seconds later (it's a nested VM, everything is slow) glean actually kicks in !!! [ 130.752564] systemd[1]: Reached target Network is Online. At this point the IP address is from DHCP, not from Glean. Any ideas? Dmitry On Thu, Nov 26, 2020 at 2:20 AM Ian Wienand wrote: > On Wed, Nov 25, 2020 at 11:54:13AM +0100, Dmitry Tantsur wrote: > > > > # systemd-analyze critical-chain > > multi-user.target @2min 6.301s > > └─tuned.service @1min 32.273s +34.024s > > └─network.target @1min 31.590s > > └─network-pre.target @1min 31.579s > > └─glean at enp1s0.service @36.594s +54.952s > > └─system-glean.slice @36.493s > > └─system.slice @4.083s > > └─-.slice @4.080s > > > > # systemd-analyze critical-chain NetworkManager.service > > NetworkManager.service +9.287s > > └─network-pre.target @1min 31.579s > > └─glean at enp1s0.service @36.594s +54.952s > > └─system-glean.slice @36.493s > > └─system.slice @4.083s > > └─-.slice @4.080s > > > It seems that the ordering is correct and the interface service is > > executed, but the IP address is nonetheless wrong. > > I agree, this seems to say to me that NetworkManager should run after > network.pre-target, and glean at enp1s0 should be running before it. 
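One way to test that ordering theory directly is to stop NetworkManager from auto-activating a DHCP connection on unconfigured interfaces, and to pin the unit ordering with a drop-in. The snippets below are only a sketch of that idea for this particular node (the interface name is taken from the logs above, and the drop-in file names are arbitrary); they are not something glean ships today:

    # /etc/NetworkManager/conf.d/no-auto-default.conf
    # Prevent NM from creating the in-memory "Wired connection 1"
    # DHCP profile for devices that have no configuration yet.
    [main]
    no-auto-default=*

    # /etc/systemd/system/glean@enp1s0.service.d/order.conf
    # Explicitly order the glean unit before NetworkManager itself,
    # in addition to the existing Before=network-pre.target.
    [Unit]
    Before=NetworkManager.service

    $ systemctl daemon-reload

If the address then comes up correctly from the ifcfg file, the race really is NM's auto-default connection winning before glean has written its configuration.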
> > The glean at enp1s0.service is set as oneshot [1] which should prevent > network-pre.target being reached until it exits: > > oneshot ... [the] service manager will consider the unit up after the > main process exits. It will then start follow-up units. > > To the best of my knowledge the dependencies are correct; but if you > go through the "git log" of the project you can find some history of > us thinking ordering was correct and finding issues. > > > Can it be related to how long glean takes to run in my case (54 seconds > vs > > 1 second in your case)? > > The glean script doesn't run asynchronously in any way (at least not > on purpose!). I can't see any way it could exit before the ifcfg file > is written out. > > > # cat /etc/sysconfig/network-scripts/ifcfg-enp1s0 > ... > > The way NM support works is writing out this file which is read by the > NM ifcfg-rh plugin [2]. AFAIK that's built-in to NM so would not be > missing, and I think you'd have to go to effort to manually edit > /etc/NetworkManager/conf.d/99-main-plugins.conf to have it ignored. > > I'm afraid that's overall not much help. Are you sure there isn't an > errant dhclient running somehow that grabs a different address? Does > it get the correct address on reboot; implying the ifcfg- file is read > correctly but somehow isn't in place before NetworkManager starts? > > -i > > [1] > https://opendev.org/opendev/glean/src/branch/master/glean/init/glean-nm at .service#L13 > [2] > https://developer.gnome.org/NetworkManager/stable/nm-settings-ifcfg-rh.html > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Wed Mar 17 16:36:18 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 17 Mar 2021 17:36:18 +0100 Subject: [blazar][ptg] Xena PTG Message-ID: Hello, We have allocated two hours during the Xena PTG for the Blazar project. We will meet on Thursday, April 22, 2021 from 15:00 to 17:00 UTC in the Bexar room. The agenda will be decided over the coming weeks and kept at https://etherpad.opendev.org/p/xena-ptg-blazar Anyone can propose discussion topics through the Etherpad or in our IRC meeting. Feel free to join if you want to know more about Reservation as a Service. Cheers, Pierre Riteau (priteau) From rosmaita.fossdev at gmail.com Wed Mar 17 17:26:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 17 Mar 2021 13:26:51 -0400 Subject: [cinder] festival of XS reviews 19 March 2021 Message-ID: <596acc4b-f928-5204-d560-e8f0e7c6f346@gmail.com> Hello Cinder community members, Due to popular demand, we will hold the Second Cinder Festival of XS Reviews on Friday 19 March (yes, that's this Friday at the end of this very week). Apologies for the late notice, but with RC-1 being released next week, we need to give people time to revise and resubmit (if necessary) so the fixes can make it into the Wallaby release. what: The Second Cinder Festival of XS Reviews when: Friday 19 March 2021 from 1400-1600 UTC where: https://bluejeans.com/3228528973 See you there! 
brian From rosmaita.fossdev at gmail.com Wed Mar 17 18:08:07 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 17 Mar 2021 14:08:07 -0400 Subject: [cinder] Xena PTG Planning Message-ID: Hello Cinderinos, Some preliminary info about Cinder at the PTG: We'll meet for 3 hours each day on Tuesday 20 April through Friday 23 April from 1300-1600 UTC. - on Monday you are free to attend any PTG sessions you like - we'll have Happy Hour at 1500 UTC on Tuesday 20 April We'd like to have a "drivers day" devoted to discussion of issues impacting backend drivers. If you are a driver maintainer and have a preference for which day this is, please leave a note on the etherpad: https://etherpad.opendev.org/p/xena-ptg-cinder-planning Driver maintainers and anyone else who has something to discuss with the Cinder team, feel free to add a topic to the etherpad. Finally, don't forget to register for the PTG; info is on the etherpad. cheers, brian From jay.faulkner at verizonmedia.com Wed Mar 17 18:16:36 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Wed, 17 Mar 2021 11:16:36 -0700 Subject: [E] Re: [ironic] [infra] Making Glean work with IPA for static IP assignment In-Reply-To: References: <20201125020901.GA522326@fedora19.localdomain> <20201126011956.GB522326@fedora19.localdomain> Message-ID: Does adding a Before=NetworkManager.service into the service file for glean-nm.service help with the ordering, perhaps? -Jay Faulkner On Wed, Mar 17, 2021 at 8:55 AM Dmitry Tantsur wrote: > Hi, > > Getting back to this, sorry for the delay. Yes, I'm pretty sure it's > NetworkManager, not something else. Here are relevant parts of boot logs > from a recent runs: > > [ 63.613821] NetworkManager[244]: [1615995259.7778] > NetworkManager (version 1.26.0-12.el8_3) is starting... (for the first time) > [ 71.637264] systemd[1]: Starting Glean for interface enp1s0 with > NetworkManager... > Starting Glean for interface enp1s0 with NetworkManager... > [ 77.622901] glean.sh[327]: mount: /mnt/config: /dev/sr0 already mounted > on /mnt/config. > > !!! As you see, Glean starts quite early, but then... !!! > > [ 92.699494] NetworkManager[244]: [1615995288.9848] manager: > (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1) > [ 93.040232] NetworkManager[244]: [1615995289.3256] manager: > (enp1s0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) > [ 94.434450] NetworkManager[244]: [1615995290.7198] device > (enp1s0): state change: unmanaged -> unavailable (reason 'managed', > sys-iface-state: 'external') > [ 94.713545] NetworkManager[244]: [1615995290.9986] device > (enp1s0): carrier: link connected > [ 96.487825] NetworkManager[244]: [1615995292.7699] device > (enp1s0): state change: unavailable -> disconnected (reason 'none', > sys-iface-state: 'managed') > [ 96.712608] NetworkManager[244]: [1615995292.9979] policy: > auto-activating connection 'Wired connection 1' > (cabef811-9cf9-3d92-9391-95712a3d3481) > > !!! This auto-activation triggers DHCP !!! > > [ 97.789768] NetworkManager[244]: [1615995294.0750] device > (enp1s0): state change: config -> ip-config (reason 'none', > sys-iface-state: 'managed') > [ 98.084735] NetworkManager[244]: [1615995294.3699] dhcp4 > (enp1s0): activation: beginning transaction (timeout in 30 seconds) > [ 98.303574] NetworkManager[244]: [1615995294.5883] dhcp4 > (enp1s0): dhclient started with pid 382 > [ 108.882870] NetworkManager[244]: [1615995305.0369] dhcp4 > (enp1s0): address 192.168.122.105 > > !!! 
10 seconds later we have the IP address configured !!! > > [ 126.636082] glean.sh[326]: DEBUG:glean:Starting glean > [ 127.885587] glean.sh[326]: DEBUG:glean:Only considering interface > enp1s0 from arguments > [ 127.908001] glean.sh[326]: DEBUG:glean:Interface matched: enp1s0 > (52:54:00:9e:b1:16) > [ 127.920045] glean.sh[326]: DEBUG:glean:52:54:00:9e:b1:16 configured via > config-drive > [ 128.635484] systemd[1]: Started Glean for interface enp1s0 with > NetworkManager. > > !!! 20 seconds later (it's a nested VM, everything is slow) glean actually > kicks in !!! > > [ 130.752564] systemd[1]: Reached target Network is Online. > > At this point the IP address is from DHCP, not from Glean. > > Any ideas? > > Dmitry > > On Thu, Nov 26, 2020 at 2:20 AM Ian Wienand wrote: > >> On Wed, Nov 25, 2020 at 11:54:13AM +0100, Dmitry Tantsur wrote: >> > >> > # systemd-analyze critical-chain >> > multi-user.target @2min 6.301s >> > └─tuned.service @1min 32.273s +34.024s >> > └─network.target @1min 31.590s >> > └─network-pre.target @1min 31.579s >> > └─glean at enp1s0.service @36.594s +54.952s >> > └─system-glean.slice @36.493s >> > └─system.slice @4.083s >> > └─-.slice @4.080s >> > >> > # systemd-analyze critical-chain NetworkManager.service >> > NetworkManager.service +9.287s >> > └─network-pre.target @1min 31.579s >> > └─glean at enp1s0.service @36.594s +54.952s >> > └─system-glean.slice @36.493s >> > └─system.slice @4.083s >> > └─-.slice @4.080s >> >> > It seems that the ordering is correct and the interface service is >> > executed, but the IP address is nonetheless wrong. >> >> I agree, this seems to say to me that NetworkManager should run after >> network.pre-target, and glean at enp1s0 should be running before it. >> >> The glean at enp1s0.service is set as oneshot [1] which should prevent >> network-pre.target being reached until it exits: >> >> oneshot ... [the] service manager will consider the unit up after the >> main process exits. It will then start follow-up units. >> >> To the best of my knowledge the dependencies are correct; but if you >> go through the "git log" of the project you can find some history of >> us thinking ordering was correct and finding issues. >> >> > Can it be related to how long glean takes to run in my case (54 seconds >> vs >> > 1 second in your case)? >> >> The glean script doesn't run asynchronously in any way (at least not >> on purpose!). I can't see any way it could exit before the ifcfg file >> is written out. >> >> > # cat /etc/sysconfig/network-scripts/ifcfg-enp1s0 >> ... >> >> The way NM support works is writing out this file which is read by the >> NM ifcfg-rh plugin [2]. AFAIK that's built-in to NM so would not be >> missing, and I think you'd have to go to effort to manually edit >> /etc/NetworkManager/conf.d/99-main-plugins.conf to have it ignored. >> >> I'm afraid that's overall not much help. Are you sure there isn't an >> errant dhclient running somehow that grabs a different address? Does >> it get the correct address on reboot; implying the ifcfg- file is read >> correctly but somehow isn't in place before NetworkManager starts? 
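A few quick checks along those lines (just a diagnostic sketch; adjust the interface name as needed):

    # Is anything other than NM holding a DHCP lease on the interface?
    $ pgrep -af dhclient

    # Which connection profile did NM actually activate?
    $ nmcli -f NAME,UUID,TYPE,DEVICE connection show --active

    # Was the ifcfg file written before or after NM activated the device?
    $ ls -l --time-style=full-iso /etc/sysconfig/network-scripts/ifcfg-enp1s0
    $ journalctl -b -u NetworkManager -u 'glean@*' --no-pager | head -50

    # Full picture of unit ordering for the boot
    $ systemd-analyze plot > boot.svg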
>> >> -i >> >> [1] >> https://opendev.org/opendev/glean/src/branch/master/glean/init/glean-nm at .service#L13 >> >> [2] >> https://developer.gnome.org/NetworkManager/stable/nm-settings-ifcfg-rh.html >> >> >> > > -- > Red Hat GmbH, https://de.redhat.com/ > > , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Mar 17 19:27:17 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 17 Mar 2021 20:27:17 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib Message-ID: <14401054.zQ1IGEecQ2@p1> Hi, I'm raising (again) RFE to ask if we can release, and include yet another new version of the neutron-lib in the Wallaby release. Recently we included version 2.10.0 which contained fix for bug related to the secure rbac policies but yesterday we found out one more bug related to the same policies [1]. Fix for that second bug is already merged [2] and new release patch is proposed at [3]. This time this new version don't bumps any other dependencies' versions or things like that. There is only one additional fix related to the way how context with new secure rbac policies is handled. Please let me know if I You need anything else regrarding this RFE. [1] https://launchpad.net/bugs/1919386 [2] https://review.opendev.org/c/openstack/neutron-lib/+/781075 [3] https://review.opendev.org/c/openstack/releases/+/781149 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From mthode at mthode.org Wed Mar 17 19:44:53 2021 From: mthode at mthode.org (Matthew Thode) Date: Wed, 17 Mar 2021 14:44:53 -0500 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <14401054.zQ1IGEecQ2@p1> References: <14401054.zQ1IGEecQ2@p1> Message-ID: <20210317194453.6scpcqcdeogganbi@mthode.org> On 21-03-17 20:27:17, Slawek Kaplonski wrote: > Hi, > > I'm raising (again) RFE to ask if we can release, and include yet another new > version of the neutron-lib in the Wallaby release. > Recently we included version 2.10.0 which contained fix for bug related to the > secure rbac policies but yesterday we found out one more bug related to the > same policies [1]. Fix for that second bug is already merged [2] and new > release patch is proposed at [3]. > This time this new version don't bumps any other dependencies' versions or > things like that. There is only one additional fix related to the way how > context with new secure rbac policies is handled. > > Please let me know if I You need anything else regrarding this RFE. > > [1] https://launchpad.net/bugs/1919386 > [2] https://review.opendev.org/c/openstack/neutron-lib/+/781075 > [3] https://review.opendev.org/c/openstack/releases/+/781149 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat yep, good for release /reqs hat -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Wed Mar 17 20:11:25 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 17 Mar 2021 16:11:25 -0400 Subject: [cinder] review priorities over the next week Message-ID: We're in the final push before RC-1 next week. Top priority are the FFE patches listed on the etherpad, which must be merged by Friday: https://etherpad.opendev.org/p/cinder-wallaby-features Other than that, work from the priority dashboard: http://tiny.cc/CinderPriorities If you have a follow-up patch requested during review of a feature (or if you have found a bug in a feature that needs to be fixed), please add it to the wallaby features etherpad (at the top) so a core reviewer can give it a +1 priority and make it show up in the priorities dashboard: https://etherpad.opendev.org/p/cinder-wallaby-features Finally, I think Rajat is still trying to get a release out from stable/train, so you can check here, too: https://etherpad.opendev.org/p/stable-releases-review-tracker-22-02-2021 And, don't forget about the Second Cinder Festival of XS Reviews on Friday 19 March 1400-1600 UTC. Happy Reviewing! From yoshito.itou.dr at hco.ntt.co.jp Thu Mar 18 00:01:54 2021 From: yoshito.itou.dr at hco.ntt.co.jp (Yoshito Ito) Date: Thu, 18 Mar 2021 09:01:54 +0900 Subject: [heat-translator] Ask new release 2.2.1 to fix Zuul jobs In-Reply-To: <3c44044b-487b-0e6f-1889-358e5eafb74d@hco.ntt.co.jp_1> References: <3c44044b-487b-0e6f-1889-358e5eafb74d@hco.ntt.co.jp_1> Message-ID: <0201d24a-f137-fb06-34b2-2c5a04c3a40b@hco.ntt.co.jp_1> Hi heat-translator core members, Thank you for your cooperation! We could merge the target patches to master branch. Now we need to release 2.2.1 to fix the bug in Wallaby release, so please check the release patch [1]. [1] https://review.opendev.org/c/openstack/releases/+/781175 Best regards, Yoshito Ito On 2021/03/10 16:00, Yoshito Ito wrote: > Hi heat-translator core members, > > I'd like you to review the following patches [1][2], and to ask if we > can release 2.2.1 with these commits to fix our Zuul jobs. > > In release 2.2.0, our Zuul jobs are broken [3] because of new release of > tosca-parser 2.3.0, which provides strict validation of required > attributes in [4]. The patch [1] fix this issue by updating our wrong > test samples. The other [2] is now blocked by this issue and was better > to be merged in 2.2.0. > > I missed the 2.2.0 release patch [5] because none of us was added as > reviewers. So after merging [1] and [2], I will submit a patch to make > new 2.2.1 release. 
> > [1] https://review.opendev.org/c/openstack/heat-translator/+/779642 > [2] https://review.opendev.org/c/openstack/heat-translator/+/778612 > [3] https://bugs.launchpad.net/heat-translator/+bug/1918360 > [4] > https://opendev.org/openstack/tosca-parser/commit/00d3a394d5a3bc13ed7d2f1d71affd9ab71e4318 > > [5] https://review.opendev.org/c/openstack/releases/+/777964 > > > Best regards, > > Yoshito Ito > > From iwienand at redhat.com Thu Mar 18 05:12:22 2021 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 18 Mar 2021 16:12:22 +1100 Subject: [ironic] [infra] Making Glean work with IPA for static IP assignment In-Reply-To: References: <20201125020901.GA522326@fedora19.localdomain> <20201126011956.GB522326@fedora19.localdomain> Message-ID: On Wed, Mar 17, 2021 at 04:52:10PM +0100, Dmitry Tantsur wrote: > [ 63.613821] NetworkManager[244]: [1615995259.7778] NetworkManager (version 1.26.0-12.el8_3) is starting... (for the first time) > [ 71.637264] systemd[1]: Starting Glean for interface enp1s0 with > Any ideas? That seems to say that the NetworkManager daemon is starting before glean.sh. My NetworkManager /usr/lib/systemd/system/NetworkManager.service has [Unit] Description=Network Manager Documentation=man:NetworkManager(8) Wants=network.target After=network-pre.target dbus.service Before=network.target network.service The glean service https://opendev.org/opendev/glean/src/branch/master/glean/init/glean at .service has [Unit] Description=Glean for interface %I DefaultDependencies=no Before=network-pre.target Wants=network-pre.target ... [Service] Type=oneshot It feels like we're really doing out best to tell NetworkManager to start after network-pre.target and glean to start before it. The service is "oneshot", doesn't exit until it is finished, and has no timeout, so I don't see how network-pre can become active before glean at .service finishes? Can you run with "debug" on the kernel command-line, to maybe see why it chose to start NM? Can you dump "systemd-analyze" plot maybe? I know we looked at the dependency chain previously and it seemed OK ... As you've seen with https://review.opendev.org/c/opendev/glean/+/781133 https://review.opendev.org/c/opendev/glean/+/781174 there are certainly ways we can optimise glean more. But I really would have thought these would just slow down the boot, not cause ordering issues... -i From hberaud at redhat.com Thu Mar 18 08:19:07 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 18 Mar 2021 09:19:07 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20210317194453.6scpcqcdeogganbi@mthode.org> References: <14401054.zQ1IGEecQ2@p1> <20210317194453.6scpcqcdeogganbi@mthode.org> Message-ID: +1 Le mer. 17 mars 2021 à 20:47, Matthew Thode a écrit : > On 21-03-17 20:27:17, Slawek Kaplonski wrote: > > Hi, > > > > I'm raising (again) RFE to ask if we can release, and include yet > another new > > version of the neutron-lib in the Wallaby release. > > Recently we included version 2.10.0 which contained fix for bug related > to the > > secure rbac policies but yesterday we found out one more bug related to > the > > same policies [1]. Fix for that second bug is already merged [2] and new > > release patch is proposed at [3]. > > This time this new version don't bumps any other dependencies' versions > or > > things like that. There is only one additional fix related to the way > how > > context with new secure rbac policies is handled. > > > > Please let me know if I You need anything else regrarding this RFE. 
> > > > [1] https://launchpad.net/bugs/1919386 > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/781075 > > [3] https://review.opendev.org/c/openstack/releases/+/781149 > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > yep, good for release > > /reqs hat > > -- > Matthew Thode > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Mar 18 09:21:37 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 18 Mar 2021 10:21:37 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: References: <14401054.zQ1IGEecQ2@p1> <20210317194453.6scpcqcdeogganbi@mthode.org> Message-ID: <15486364.HTGGRg1TTV@p1> Hi, Dnia czwartek, 18 marca 2021 09:19:07 CET Herve Beraud pisze: > +1 > > Le mer. 17 mars 2021 à 20:47, Matthew Thode a écrit : > > > On 21-03-17 20:27:17, Slawek Kaplonski wrote: > > > Hi, > > > > > > I'm raising (again) RFE to ask if we can release, and include yet > > another new > > > version of the neutron-lib in the Wallaby release. > > > Recently we included version 2.10.0 which contained fix for bug related > > to the > > > secure rbac policies but yesterday we found out one more bug related to > > the > > > same policies [1]. Fix for that second bug is already merged [2] and new > > > release patch is proposed at [3]. > > > This time this new version don't bumps any other dependencies' versions > > or > > > things like that. There is only one additional fix related to the way > > how > > > context with new secure rbac policies is handled. > > > > > > Please let me know if I You need anything else regrarding this RFE. > > > > > > [1] https://launchpad.net/bugs/1919386 > > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/781075 > > > [3] https://review.opendev.org/c/openstack/releases/+/781149 > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > yep, good for release > > > > /reqs hat > > > > -- > > Matthew Thode > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > Thank You for the approval. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From christian.rohmann at inovex.de Thu Mar 18 09:46:59 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Thu, 18 Mar 2021 10:46:59 +0100 Subject: [Neutron] [Designate] Private / Internal DNS Zones with custom records for i.e. service discovery Message-ID: <289b7ea1-5c62-f22b-1d2b-ef1daeb12292@inovex.de> Hey Openstack-Discuss, apart from the standardized and auto-created records for ports / floating-ips and instances (https://docs.openstack.org/neutron/latest/admin/config-dns-int.html) - is there any way to allow users to add their own records which then only resolve internally? The Neutron API (https://docs.openstack.org/api-ref/network/v2/#id52) seems to be all about the resources it manages, so no additional or custom records there. Looking at the Designate API https://docs.openstack.org/api-ref/dns/?expanded=create-zone-detail#create-zone is does not seem to be an option to mark a zone as "internal" or "private". But maybe there is another way to add records to the internal zone? I am thinking of an only internally resolvable / valid DNS zone carrying records for i.e. service discovery / cluster forming. There are more and more tools just looking up a DNS records to find cluster members ... * ElasticSearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-hosts-providers.html#settings-based-hosts-provider * Hazelcast: https://github.com/hazelcast/hazelcast-kubernetes#understanding-discovery-modes * HiveMQ: https://github.com/hivemq/hivemq-dns-cluster-discovery-extension/blob/master/README.adoc#configuration * RabbitMQ: https://www.rabbitmq.com/cluster-formation.html#peer-discovery-dns [...] and with Kubernetes and the headless service concept there are more tools (ab)using DNS for this every week. So having internal dns zones which only resolve within the project would be really helpful. The hyperscalers call this feature * AWS "Private hosted zones" (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) * Azure "Private DNS" (https://medium.com/azure-architects/exploring-azure-private-dns-be65de08f780) * GCP "Private zone" (https://cloud.google.com/blog/products/networking/introducing-private-dns-zones-resolve-to-keep-internal-networks-concealed) * Alibaba Cloud "DNS PrivateZone" (https://www.alibabacloud.com/product/private-zone) Regards Christian From hberaud at redhat.com Thu Mar 18 10:11:27 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 18 Mar 2021 11:11:27 +0100 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <15486364.HTGGRg1TTV@p1> References: <14401054.zQ1IGEecQ2@p1> <20210317194453.6scpcqcdeogganbi@mthode.org> <15486364.HTGGRg1TTV@p1> Message-ID: 2.10.1 is now released and ready for requirements update https://review.opendev.org/c/openstack/requirements/+/781225 Le jeu. 18 mars 2021 à 10:22, Slawek Kaplonski a écrit : > Hi, > > Dnia czwartek, 18 marca 2021 09:19:07 CET Herve Beraud pisze: > > +1 > > > > Le mer. 17 mars 2021 à 20:47, Matthew Thode a écrit > : > > > > > On 21-03-17 20:27:17, Slawek Kaplonski wrote: > > > > Hi, > > > > > > > > I'm raising (again) RFE to ask if we can release, and include yet > > > another new > > > > version of the neutron-lib in the Wallaby release. > > > > Recently we included version 2.10.0 which contained fix for bug > related > > > to the > > > > secure rbac policies but yesterday we found out one more bug related > to > > > the > > > > same policies [1]. 
Fix for that second bug is already merged [2] and > new > > > > release patch is proposed at [3]. > > > > This time this new version don't bumps any other dependencies' > versions > > > or > > > > things like that. There is only one additional fix related to the way > > > how > > > > context with new secure rbac policies is handled. > > > > > > > > Please let me know if I You need anything else regrarding this RFE. > > > > > > > > [1] https://launchpad.net/bugs/1919386 > > > > [2] https://review.opendev.org/c/openstack/neutron-lib/+/781075 > > > > [3] https://review.opendev.org/c/openstack/releases/+/781149 > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > yep, good for release > > > > > > /reqs hat > > > > > > -- > > > Matthew Thode > > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > > > Thank You for the approval. > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Thu Mar 18 10:44:47 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Thu, 18 Mar 2021 12:44:47 +0200 Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: <449121094.37359.1615980890360@ox.dhbw-mannheim.de> References: <449121094.37359.1615980890360@ox.dhbw-mannheim.de> Message-ID: <374941616064157@mail.yandex.ru> Hi Olver, Am I right that you're also using OpenStack Swift and it's intentional to store images there? Since the issue is related to the upload process into the Swift. So also checking Swift logs be usefull as well. 17.03.2021, 13:42, "Oliver Wenz" : > Hi! > We are currently experiencing problems with our OpenStack Ansible Victoria cloud > when trying to create snapshots from instances. In some cases, everything works > but often the pending snapshots just disappear. > > When this happens, the following glance-api.service errors show up: > > Mar 17 08:33:12 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:33:11.975 85 INFO glance.api.v2.image_data > [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 > ebe0fe5f3893495e82598c07716f5d45 - default default] Unable to create trust: no > such option collect_timing in group [keystone_authtoken] Use the existing user > token. 
> Mar 17 08:35:15 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:35:15.283 85 INFO swiftclient [req-32345bbf-a88f-450e-90b2-69c6a5804a7a > 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default > default] REQ: curl -i > http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images/1b9af05a-f7c9-4315-9354-e09f1df66321-00001 > -X PUT -H "X-Auth-Token: gAAAAABgUb7IBvZ_..." > Mar 17 08:35:15 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:35:15.285 85 INFO swiftclient [req-32345bbf-a88f-450e-90b2-69c6a5804a7a > 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default > default] RESP STATUS: 504 Gateway Time-out > Mar 17 08:35:15 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:35:15.285 85 INFO swiftclient [req-32345bbf-a88f-450e-90b2-69c6a5804a7a > 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default > default] RESP HEADERS: {'content-length': '92', 'cache-control': 'no-cache', > 'content-type': 'text/html', 'connection': 'close'} > Mar 17 08:35:15 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:35:15.286 85 INFO swiftclient [req-32345bbf-a88f-450e-90b2-69c6a5804a7a > 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default > default] RESP BODY: b"

<html><body><h1>504 Gateway Time-out</h1>
\nThe server > didn't respond in time.\n\n" > Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:35:16.306 85 ERROR glance_store._drivers.swift.store > [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 > ebe0fe5f3893495e82598c07716f5d45 - default default] Error during chunked upload > to backend, deleting stale chunks.: swiftclient.exceptions.ClientException: > put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) > failure and no ability to reset contents for reupload. > Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:35:16.320 85 ERROR glance_store._drivers.swift.store > [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 > ebe0fe5f3893495e82598c07716f5d45 - default default] Failed to add object to > Swift. >                                                                       Got error > from Swift: put_object('glance_images', > '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to > reset contents for reupload..: swiftclient.exceptions.ClientException: > put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) > failure and no ability to reset contents for reupload. > Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:35:16.327 85 ERROR glance.api.v2.image_data > [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 > ebe0fe5f3893495e82598c07716f5d45 - default default] Failed to upload image data > due to internal error: glance_store.exceptions.BackendException: Failed to add > object to Swift. > Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi > [req-32345bbf-a88f-450e-90b2-69c6a5804a7a 956806468e9f43dbaad1807a5208de52 > ebe0fe5f3893495e82598c07716f5d45 - default default] Caught error: Failed to add > object to Swift. >                                                                       Got error > from Swift: put_object('glance_images', > '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to > reset contents for reupload..: glance_store.exceptions.BackendException: Failed > to add object to Swift. >                                                                       Got error > from Swift: put_object('glance_images', > '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) failure and no ability to > reset contents for reupload.. 
>                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi Traceback (most recent call last): >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", > line 1014, in add >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi self._delete_stale_chunks( >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 220, in __exit__ >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi self.force_reraise() >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 196, in force_reraise >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi six.reraise(self.type_, self.value, > self.tb) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/six.py", line 703, > in reraise >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi raise value >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", > line 1003, in add >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi >     manager.get_connection().put_object( >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", > line 1960, in put_object >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi return self._retry(reset_func, > put_object, container, obj, contents, >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", > line 1843, in _retry >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi reset_func(func, *args, **kwargs) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", > line 1940, in _default_reset >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi raise > ClientException('put_object(%r, %r, ...) 
failure and no ' >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi swiftclient.exceptions.ClientException: > put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) > failure and no ability to reset contents for reupload. >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi During handling of the above exception, > another exception occurred: >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi Traceback (most recent call last): >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/common/wsgi.py", > line 1347, in __call__ >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi action_result = > self.dispatch(self.controller, action, >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/common/wsgi.py", > line 1391, in dispatch >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi return method(*args, **kwargs) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/common/utils.py", > line 416, in wrapped >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi return func(self, req, *args, > **kwargs) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/api/v2/image_data.py", > line 298, in upload >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi self._restore(image_repo, image) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 220, in __exit__ >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi self.force_reraise() >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 196, in force_reraise >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi six.reraise(self.type_, self.value, > self.tb) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > 
"/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/six.py", line 703, > in reraise >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi raise value >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/api/v2/image_data.py", > line 163, in upload >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi image.set_data(data, size, > backend=backend) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/domain/proxy.py", > line 208, in set_data >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi self.base.set_data(data, size, > backend=backend, set_active=set_active) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/notifier.py", > line 501, in set_data >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi _send_notification(notify_error, > 'image.upload', msg) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 220, in __exit__ >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi self.force_reraise() >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 196, in force_reraise >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi six.reraise(self.type_, self.value, > self.tb) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/six.py", line 703, > in reraise >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi raise value >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/notifier.py", > line 447, in set_data >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi self.repo.set_data(data, size, > backend=backend, >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/api/policy.py", > line 198, in set_data >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi return self.image.set_data(*args, > **kwargs) >                                                
                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/quota/__init__.py", > line 318, in set_data >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi self.image.set_data(data, > size=size, backend=backend, >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/location.py", > line 567, in set_data >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi self._upload_to_store(data, > verifier, backend, size) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/location.py", > line 458, in _upload_to_store >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi multihash, loc_meta) = > self.store_api.add_with_multihash( >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/multi_backend.py", > line 398, in add_with_multihash >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi return > store_add_to_backend_with_multihash( >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/multi_backend.py", > line 480, in store_add_to_backend_with_multihash >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi (location, size, checksum, > multihash, metadata) = store.add( >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/driver.py", > line 279, in add_adapter >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi metadata_dict) = > store_add_fun(*args, **kwargs) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/capabilities.py", > line 176, in op_checker >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi return store_op_fun(store, *args, > **kwargs) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi File > "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", > line 1082, in add >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi raise > glance_store.BackendException(msg) >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi > 
glance_store.exceptions.BackendException: Failed to add object to Swift. >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi Got error from Swift: > put_object('glance_images', '1b9af05a-f7c9-4315-9354-e09f1df66321-00001', ...) > failure and no ability to reset contents for reupload.. >                                                                       2021-03-17 > 08:35:16.425 85 ERROR glance.common.wsgi > Mar 17 08:35:16 infra1-glance-container-99614ac2 uwsgi[85]: Wed Mar 17 08:35:16 > 2021 - uwsgi_response_writev_headers_and_body_do(): Connection reset by peer > [core/writer.c line 306] during PUT > /v2/images/1b9af05a-f7c9-4315-9354-e09f1df66321/file (192.168.110.215) > Mar 17 08:35:16 infra1-glance-container-99614ac2 glance-wsgi-api[85]: 2021-03-17 > 08:35:16.441 85 CRITICAL glance [req-32345bbf-a88f-450e-90b2-69c6a5804a7a > 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default > default] Unhandled error: OSError: write error >                                                                       2021-03-17 > 08:35:16.441 85 ERROR glance OSError: write error >                                                                       2021-03-17 > 08:35:16.441 85 ERROR glance > > It seems as if the problems occur more often with large instances, i.e. there > are fewer problems with new Ubuntu 20.04 instances but after an 'apt-upgrade' > on the instance, the error occurs every time a snapshot is taken. Any help is > much appreciated! > > Kind regards, > Oliver --  Kind Regards, Dmitriy Rabotyagov From dtantsur at redhat.com Thu Mar 18 11:18:25 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 18 Mar 2021 12:18:25 +0100 Subject: [ironic] [infra] Making Glean work with IPA for static IP assignment In-Reply-To: References: <20201125020901.GA522326@fedora19.localdomain> <20201126011956.GB522326@fedora19.localdomain> Message-ID: Ian, Jay, On Thu, Mar 18, 2021 at 6:12 AM Ian Wienand wrote: > On Wed, Mar 17, 2021 at 04:52:10PM +0100, Dmitry Tantsur wrote: > > [ 63.613821] NetworkManager[244]: [1615995259.7778] > NetworkManager (version 1.26.0-12.el8_3) is starting... (for the first time) > > [ 71.637264] systemd[1]: Starting Glean for interface enp1s0 with > > > Any ideas? > > That seems to say that the NetworkManager daemon is starting before > glean.sh. > > My NetworkManager /usr/lib/systemd/system/NetworkManager.service has > > [Unit] > Description=Network Manager > Documentation=man:NetworkManager(8) > Wants=network.target > After=network-pre.target dbus.service > I have this too. > Before=network.target network.service > > The glean service > > https://opendev.org/opendev/glean/src/branch/master/glean/init/glean at .service > has > > [Unit] > Description=Glean for interface %I > DefaultDependencies=no > Before=network-pre.target > Wants=network-pre.target > ... > [Service] > Type=oneshot > > It feels like we're really doing out best to tell NetworkManager to > start after network-pre.target and glean to start before it. > > The service is "oneshot", doesn't exit until it is finished, and has > no timeout, so I don't see how network-pre can become active before > glean at .service finishes? > > Can you run with "debug" on the kernel command-line, to maybe see why > it chose to start NM? Can you dump "systemd-analyze" plot maybe? I > know we looked at the dependency chain previously and it seemed OK ... > I think systemd ordering is of no use here. 
What I suspect is happening is NetworkManager starting to start before udev inserts glean-nm@ services. The issue with network-pre is similar. It does not finish before glean-nm@ starts, but it does finish long after NetworkManager. The explanation I can come up with is the following: network-pre is a passive target, it does not fire until something requests it. glean-nm@ requests it with Wants=network-pre, but at this point NetworkManager is already starting, so its After=network-pre (without Wants, as intended) does not have an effect. These are pure speculations at this point, but that's all I have. What I'm considering now to fix Glean is an additional systemd service that will start glean without arguments (i.e. for all interfaces that are already up) very early, maybe explicitly Before=NetworkManager. Since it will be a normal service, not one inserted by udev, the ordering will work correctly. > > As you've seen with > > https://review.opendev.org/c/opendev/glean/+/781133 > https://review.opendev.org/c/opendev/glean/+/781174 > > there are certainly ways we can optimise glean more. But I really > would have thought these would just slow down the boot, not cause > ordering issues... > Oh, and another thing: Glean has a lock that is interface-agnostic (i.e. global). Which means that while it's processing the loopback interface, it cannot be processing real interfaces. This forced serialization may contribute to the slowness. In the end, we may go down a different path in ironic-python-agent since we may not really want Glean by default, only when configdrive is present. But fixing Glean would be nice anyway. Dmitry > > -i > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Mar 18 13:45:38 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 18 Mar 2021 14:45:38 +0100 Subject: [neutron][stable] neutron-lib stable branches core reviewers Message-ID: <2771831.n0TQhf9oRt@p1> Hi, I just noticed that neutron-lib project has got own group "neutron-lib-stable-maint" which has +2 powers in neutron-lib stable branches [1]. As I see now in gerrit that group don't have any members. Would it be maybe possible to remove that group and add "neutron-stable-maint" to the neutron-lib stable branches instead? If yes, should I simply propose patch to change [1] or is there any other way which I should do it? [1] https://github.com/openstack/project-config/blob/master/gerrit/acls/openstack/neutron-lib.config -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ralonsoh at redhat.com Thu Mar 18 13:46:21 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Thu, 18 Mar 2021 14:46:21 +0100 Subject: OSP13 - Queens - Inconsistent data between in DB between ipamallocation and ipallocation In-Reply-To: References: Message-ID: Hello Laurent: Can you, in the Launchpad bug, describe how did you find this issue and what did you do to fix it? If you don't mind, please include as much info as possible: version, logs, steps to reproduce, etc. Thank you in advance. 
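A read-only query along these lines, run against the neutron database, can help to spot the stale rows. It is only a rough sketch -- the table and column names are taken from a Queens-era schema, so please double-check them against your own deployment before trusting the output:

  -- IPAM-layer allocations that have no matching Neutron port allocation
  SELECT ipam.ip_address, ipam.status, sub.neutron_subnet_id
    FROM ipamallocations ipam
    JOIN ipamsubnets sub ON ipam.ipam_subnet_id = sub.id
    LEFT JOIN ipallocations ipa
      ON ipa.ip_address = ipam.ip_address
     AND ipa.subnet_id = sub.neutron_subnet_id
   WHERE ipa.ip_address IS NULL;

Any rows returned are candidates for the "IP already allocated" symptom described below.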
On Mon, Mar 15, 2021 at 2:36 PM Laurent Dumont wrote: > Hey everyone, > > We just troubleshooted (and fixed) a weird issue with ipamallocation. It > looked like ipallocations we're not properly removed from the > ipamallocation table. > > This caused port creations to fail with an error about the IP being > already allocated (when there wasn't an actual port using the IP). > > We found this old issue https://bugs.launchpad.net/neutron/+bug/1884532 > but not a whole lot of details in there. > > Has anyone seen this before? > > Thanks! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Mar 18 13:59:58 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 18 Mar 2021 13:59:58 +0000 Subject: [Neutron] [Designate] Private / Internal DNS Zones with custom records for i.e. service discovery In-Reply-To: <289b7ea1-5c62-f22b-1d2b-ef1daeb12292@inovex.de> References: <289b7ea1-5c62-f22b-1d2b-ef1daeb12292@inovex.de> Message-ID: <20210318135957.i5o4335m4c5rbc6w@yuggoth.org> On 2021-03-18 10:46:59 +0100 (+0100), Christian Rohmann wrote: [...] > is there any way to allow users to add their own records which > then only resolve internally? [...] > Looking at the Designate API > https://docs.openstack.org/api-ref/dns/?expanded=create-zone-detail#create-zone > is does not seem to be an option to mark a zone as "internal" or > "private". But maybe there is another way to add records to the > internal zone? > > I am thinking of an only internally resolvable / valid DNS zone > carrying records for i.e. service discovery / cluster forming. [...] The traditional term for what you're describing is "split-horizon DNS" (implemented via things like BIND's "views" mechanism). I see there's a split_view zone type which is proposed in this spec: https://specs.openstack.org/openstack/designate-specs/specs/ussuri/split-view.html Poking in code review, it looks like it may be in progress: https://review.opendev.org/q/topic:bug/1875939 If this is of interest to you, please do help review and test the feature to make sure it will meet your requirements. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From eblock at nde.ag Thu Mar 18 14:10:49 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 18 Mar 2021 14:10:49 +0000 Subject: [nova] Train: how to use newer API microversion Message-ID: <20210318141049.Horde.z7gBKZkNsdpWduDhCMYW75H@webmail.nde.ag> Hi *, I'm a little confused and could use some guidance regarding endpoints and microversions. Just recently I did a lot of maintenance in our old cloud, I believe it was created with Kilo and is now Train. Anyway, I wanted to use the "server create" option "--nic none" but couldn't because of the v2 endpoint for the nova service, right? According to the cli help this option is available from microversion 2.37+. 
This was the first attempt before adding new endpoints: controller:~ # openstack server create --image --nic none --flavor 3 test-nic-none nics must be a list or a tuple, not These were the endpoints before I added new ones: controller:~ # openstack endpoint list | grep nova | | RegionOne | nova | compute | True | public | http://controller:/v2/%(tenant_id)s | | | RegionOne | nova | compute | True | internal | http://controller:/v2/%(tenant_id)s | | | RegionOne | nova | compute | True | admin | http://controller:/v2/%(tenant_id)s | I checked with the Train install guide and created new endpoints for nova: controller:~ # openstack endpoint create --region RegionOne compute public http://controller:/v2.1 controller:~ # openstack endpoint create --region RegionOne compute internal http://controller:/v2.1 controller:~ # openstack endpoint create --region RegionOne compute admin http://controller:/v2.1 I should mention that I haven't restarted any services yet, I wanted to check each step carefully since people are working in that cloud. In the debug output from creating a new instance I saw that it's still trying to use v2 api, so I disabled it, resulting in error messages that there's no compute service available. So I reenabled the old endpoints and now have both versions active. Then I tried this: controller:~ # openstack --os-compute-api-version 2.37 server create --image --nic none --flavor 3 test-nic-none Invalid input for field/attribute networks. Value: none. 'none' is not of type 'array' (HTTP 400) (Request-ID: req-b3c53ab2-8040-42cf-bf07-d03e8f46cfd5) Now I'm running out of ideas how to upgrade our api endpoints so the services will actually be able to use them properly. I read Train release notes (we're planning to upgrade to Ussuri soon) and multiple docs about microversions but nothing seems to cover my issue. Could anyone shed some light? I must admit, during the upgrade cycles in the past years I just hoped everything would keep working and it did, I didn't put too much attention in the api versions yet. Any help is appreciated! Thanks and best regards, Eugen From gmann at ghanshyammann.com Thu Mar 18 14:44:23 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 18 Mar 2021 09:44:23 -0500 Subject: [nova] Train: how to use newer API microversion In-Reply-To: <20210318141049.Horde.z7gBKZkNsdpWduDhCMYW75H@webmail.nde.ag> References: <20210318141049.Horde.z7gBKZkNsdpWduDhCMYW75H@webmail.nde.ag> Message-ID: <17845cb6fd5.fa9c27a1737010.1699855162024585304@ghanshyammann.com> ---- On Thu, 18 Mar 2021 09:10:49 -0500 Eugen Block wrote ---- > Hi *, > > I'm a little confused and could use some guidance regarding endpoints > and microversions. > Just recently I did a lot of maintenance in our old cloud, I believe > it was created with Kilo and is now Train. > > Anyway, I wanted to use the "server create" option "--nic none" but > couldn't because of the v2 endpoint for the nova service, right? > According to the cli help this option is available from microversion > 2.37+. 
> > This was the first attempt before adding new endpoints: > > controller:~ # openstack server create --image --nic none > --flavor 3 test-nic-none > nics must be a list or a tuple, not You can specify the microversion per API request via the below arg CLI: --os-comoute-api-version 2.37 API: in the header : OpenStack-API-Version: compute 2.37 OR X-OpenStack-Nova-API-Version: 2.37 And here is the detail of each microversion what they have changed: - https://docs.openstack.org/nova/latest/reference/api-microversion-history.html -gmann > > > These were the endpoints before I added new ones: > > controller:~ # openstack endpoint list | grep nova > | | RegionOne | nova | compute | True | > public | http://controller:/v2/%(tenant_id)s | > | | RegionOne | nova | compute | True | > internal | http://controller:/v2/%(tenant_id)s | > | | RegionOne | nova | compute | True | admin > | http://controller:/v2/%(tenant_id)s | > > > I checked with the Train install guide and created new endpoints for nova: > > controller:~ # openstack endpoint create --region RegionOne compute > public http://controller:/v2.1 > controller:~ # openstack endpoint create --region RegionOne compute > internal http://controller:/v2.1 > controller:~ # openstack endpoint create --region RegionOne compute > admin http://controller:/v2.1 > > > I should mention that I haven't restarted any services yet, I wanted > to check each step carefully since people are working in that cloud. > > In the debug output from creating a new instance I saw that it's still > trying to use v2 api, so I disabled it, resulting in error messages > that there's no compute service available. So I reenabled the old > endpoints and now have both versions active. Then I tried this: > > controller:~ # openstack --os-compute-api-version 2.37 server create > --image --nic none --flavor 3 test-nic-none > Invalid input for field/attribute networks. Value: none. 'none' is not > of type 'array' (HTTP 400) (Request-ID: > req-b3c53ab2-8040-42cf-bf07-d03e8f46cfd5) > > Now I'm running out of ideas how to upgrade our api endpoints so the > services will actually be able to use them properly. I read Train > release notes (we're planning to upgrade to Ussuri soon) and multiple > docs about microversions but nothing seems to cover my issue. > > Could anyone shed some light? I must admit, during the upgrade cycles > in the past years I just hoped everything would keep working and it > did, I didn't put too much attention in the api versions yet. Any help > is appreciated! > > Thanks and best regards, > Eugen > > > From eblock at nde.ag Thu Mar 18 14:48:44 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 18 Mar 2021 14:48:44 +0000 Subject: [nova] Train: how to use newer API microversion In-Reply-To: <17845cb6fd5.fa9c27a1737010.1699855162024585304@ghanshyammann.com> References: <20210318141049.Horde.z7gBKZkNsdpWduDhCMYW75H@webmail.nde.ag> <17845cb6fd5.fa9c27a1737010.1699855162024585304@ghanshyammann.com> Message-ID: <20210318144844.Horde.Npl7DgmvUtIPNBSRFOssR4U@webmail.nde.ag> Hi and thank you, > You can specify the microversion per API request via the below arg > > CLI: --os-comoute-api-version 2.37 I already tried that, please find the failing command at the end of my previous email. > And here is the detail of each microversion what they have changed: > > - > https://docs.openstack.org/nova/latest/reference/api-microversion-history.html I also found that already, but it doesn't describe how I can move from v2 to v2.1 in order to use only the newer API. 
Do you have any information about that? Zitat von Ghanshyam Mann : > ---- On Thu, 18 Mar 2021 09:10:49 -0500 Eugen Block > wrote ---- > > Hi *, > > > > I'm a little confused and could use some guidance regarding endpoints > > and microversions. > > Just recently I did a lot of maintenance in our old cloud, I believe > > it was created with Kilo and is now Train. > > > > Anyway, I wanted to use the "server create" option "--nic none" but > > couldn't because of the v2 endpoint for the nova service, right? > > According to the cli help this option is available from microversion > > 2.37+. > > > > This was the first attempt before adding new endpoints: > > > > controller:~ # openstack server create --image --nic none > > --flavor 3 test-nic-none > > nics must be a list or a tuple, not > > > You can specify the microversion per API request via the below arg > > CLI: --os-comoute-api-version 2.37 > API: in the header : > > OpenStack-API-Version: compute 2.37 OR > X-OpenStack-Nova-API-Version: 2.37 > > And here is the detail of each microversion what they have changed: > > - > https://docs.openstack.org/nova/latest/reference/api-microversion-history.html > > -gmann > > > > > > > These were the endpoints before I added new ones: > > > > controller:~ # openstack endpoint list | grep nova > > | | RegionOne | nova | compute | True | > > public | http://controller:/v2/%(tenant_id)s | > > | | RegionOne | nova | compute | True | > > internal | http://controller:/v2/%(tenant_id)s | > > | | RegionOne | nova | compute | True | admin > > | http://controller:/v2/%(tenant_id)s | > > > > > > I checked with the Train install guide and created new endpoints for nova: > > > > controller:~ # openstack endpoint create --region RegionOne compute > > public http://controller:/v2.1 > > controller:~ # openstack endpoint create --region RegionOne compute > > internal http://controller:/v2.1 > > controller:~ # openstack endpoint create --region RegionOne compute > > admin http://controller:/v2.1 > > > > > > I should mention that I haven't restarted any services yet, I wanted > > to check each step carefully since people are working in that cloud. > > > > In the debug output from creating a new instance I saw that it's still > > trying to use v2 api, so I disabled it, resulting in error messages > > that there's no compute service available. So I reenabled the old > > endpoints and now have both versions active. Then I tried this: > > > > controller:~ # openstack --os-compute-api-version 2.37 server create > > --image --nic none --flavor 3 test-nic-none > > Invalid input for field/attribute networks. Value: none. 'none' is not > > of type 'array' (HTTP 400) (Request-ID: > > req-b3c53ab2-8040-42cf-bf07-d03e8f46cfd5) > > > > Now I'm running out of ideas how to upgrade our api endpoints so the > > services will actually be able to use them properly. I read Train > > release notes (we're planning to upgrade to Ussuri soon) and multiple > > docs about microversions but nothing seems to cover my issue. > > > > Could anyone shed some light? I must admit, during the upgrade cycles > > in the past years I just hoped everything would keep working and it > > did, I didn't put too much attention in the api versions yet. Any help > > is appreciated! 
> > > > Thanks and best regards, > > Eugen > > > > > > From dtantsur at redhat.com Thu Mar 18 14:55:29 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 18 Mar 2021 15:55:29 +0100 Subject: [ironic] [infra] Making Glean work with IPA for static IP assignment In-Reply-To: References: <20201125020901.GA522326@fedora19.localdomain> <20201126011956.GB522326@fedora19.localdomain> Message-ID: On Thu, Mar 18, 2021 at 12:18 PM Dmitry Tantsur wrote: > Ian, Jay, > > > On Thu, Mar 18, 2021 at 6:12 AM Ian Wienand wrote: > >> On Wed, Mar 17, 2021 at 04:52:10PM +0100, Dmitry Tantsur wrote: >> > [ 63.613821] NetworkManager[244]: [1615995259.7778] >> NetworkManager (version 1.26.0-12.el8_3) is starting... (for the first time) >> > [ 71.637264] systemd[1]: Starting Glean for interface enp1s0 with >> >> > Any ideas? >> >> That seems to say that the NetworkManager daemon is starting before >> glean.sh. >> >> My NetworkManager /usr/lib/systemd/system/NetworkManager.service has >> >> [Unit] >> Description=Network Manager >> Documentation=man:NetworkManager(8) >> Wants=network.target >> After=network-pre.target dbus.service >> > > I have this too. > > >> Before=network.target network.service >> >> The glean service >> >> https://opendev.org/opendev/glean/src/branch/master/glean/init/glean at .service >> has >> >> [Unit] >> Description=Glean for interface %I >> DefaultDependencies=no >> Before=network-pre.target >> Wants=network-pre.target >> ... >> [Service] >> Type=oneshot >> >> It feels like we're really doing out best to tell NetworkManager to >> start after network-pre.target and glean to start before it. >> >> The service is "oneshot", doesn't exit until it is finished, and has >> no timeout, so I don't see how network-pre can become active before >> glean at .service finishes? >> >> Can you run with "debug" on the kernel command-line, to maybe see why >> it chose to start NM? Can you dump "systemd-analyze" plot maybe? I >> know we looked at the dependency chain previously and it seemed OK ... >> > > I think systemd ordering is of no use here. What I suspect is happening is > NetworkManager starting to start before udev inserts glean-nm@ services. > > The issue with network-pre is similar. It does not finish before glean-nm@ > starts, but it does finish long after NetworkManager. The explanation I can > come up with is the following: network-pre is a passive target, it does not > fire until something requests it. glean-nm@ requests it with > Wants=network-pre, but at this point NetworkManager is already starting, so > its After=network-pre (without Wants, as intended) does not have an effect. > > These are pure speculations at this point, but that's all I have. > > What I'm considering now to fix Glean is an additional systemd service > that will start glean without arguments (i.e. for all interfaces that are > already up) very early, maybe explicitly Before=NetworkManager. Since it > will be a normal service, not one inserted by udev, the ordering will work > correctly. > This approach has worked! The first change is https://review.opendev.org/c/opendev/glean/+/781460 that allows an optional early service. The second is https://review.opendev.org/c/openstack/diskimage-builder/+/781491 for the DIB to pass extra install arguments. I've also added Clark's and yours patches to the picture. They provide an improvement but alone don't seem enough to fix the problem. 
Dmitry > > >> >> As you've seen with >> >> https://review.opendev.org/c/opendev/glean/+/781133 >> https://review.opendev.org/c/opendev/glean/+/781174 >> >> there are certainly ways we can optimise glean more. But I really >> would have thought these would just slow down the boot, not cause >> ordering issues... >> > > Oh, and another thing: Glean has a lock that is interface-agnostic (i.e. > global). Which means that while it's processing the loopback interface, it > cannot be processing real interfaces. This forced serialization may > contribute to the slowness. > > In the end, we may go down a different path in ironic-python-agent since > we may not really want Glean by default, only when configdrive is present. > But fixing Glean would be nice anyway. > > Dmitry > > >> >> -i >> >> > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Mar 18 15:01:51 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 18 Mar 2021 16:01:51 +0100 Subject: [nova] Train: how to use newer API microversion In-Reply-To: References: <20210318141049.Horde.z7gBKZkNsdpWduDhCMYW75H@webmail.nde.ag> <17845cb6fd5.fa9c27a1737010.1699855162024585304@ghanshyammann.com> <20210318144844.Horde.Npl7DgmvUtIPNBSRFOssR4U@webmail.nde.ag> Message-ID: On Thu, Mar 18, 2021 at 3:50 PM Eugen Block wrote: > I also found that already, but it doesn't describe how I can move from > v2 to v2.1 in order to use only the newer API. Do you have any > information about that? If you (or your end users) rely on the legacy APIs, you can have both like this (192.0.2.40 is the example IP address of the API host; the /compute could be :some_port if you prefer that): +----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------------------------------+ | xxx | RegionOne | nova | compute | True | public | http://192.0.2.40/compute/v2.1 | | xxx | RegionOne | nova_legacy | compute_legacy | True | public | http://192.0.2.40/compute/v2/$(project_id)s | +----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------------------------------+ If you don't need the legacy APIs, you can delete their endpoints. 
-yoctozepto From bkslash at poczta.onet.pl Thu Mar 18 15:05:27 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Thu, 18 Mar 2021 16:05:27 +0100 Subject: [Vitrage] Cannot delete old entities on graph Message-ID: <4D95094A-DA7C-4FE7-A207-EAE1593F342A@poczta.onet.pl> Hi, I’ve deployed Kolla-Ansible (Victoria) and after using it for some time I have errors from Vitrage service: 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init [-] Got Exception for event {'vitrage_entity_type': 'consistency', 'vitrage_datasource_action': 'update', 'vitrage_sample_date': '2021-03-18T13:30:19Z', 'vitrage_event_type': 'delete_entity', 'vitrage_id': 'a19330f1-49c1-4a97-9787-c80b645f51a6', 'id': '93c1df77-dbb5-4ba7-9b7d-16fad97169f1', 'vitrage_type': 'cinder.volume', 'vitrage_category': 'RESOURCE', 'is_real_vitrage_id': True}: KeyError: 'vitrage_is_placeholder' 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init Traceback (most recent call last): 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/graph_init.py", line 146, in do_work 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init return do_work_func(event) 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/graph_init.py", line 111, in process_event 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init self.processor.process_event(event) 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/processor/processor.py", line 63, in process_event 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init self.actions[entity.action](entity.vertex, entity.neighbors) 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/processor/processor.py", line 149, in delete_entity 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init PUtils.delete_placeholder_vertex(self.entity_graph, vertex) 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/processor/processor_utils.py", line 68, in delete_placeholder_vertex 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init if not vertex[VProps.VITRAGE_IS_PLACEHOLDER]: 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/graph/driver/elements.py", line 24, in __getitem__ 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init return self.properties[key] 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init KeyError: 'vitrage_is_placeholder' 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init As I can see in Horizon, there are some old volumes , which in Cinder database have proper (DELETED/DETACHED) status, but they’re still visible on Entity Graph, and as in error message above - cannot be deleted by vitrage-graph. How to delete such entities? Best regards, Adam Tomas From marios at redhat.com Thu Mar 18 15:51:31 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 18 Mar 2021 17:51:31 +0200 Subject: [tripleo] launchpad wallaby-3 milestone closed use wallaby-rc1 Message-ID: Hi tripleo o/ as just mentioned on irc #tripleo, I closed out the wallaby-3 launchpad milestone yesterday. 
If you want to set a milestone in bugs please use wallaby-rc1 (wallaby-3 is no longer in the "assign milestone" list). I moved the wallaby-3 unclosed bugs to wallaby-rc1 with our helper script [1] results at [2]. Please try and keep bug status updated. Things like moving it to fix-released once we're done merging fixes - it is easy to forget about that. We have 155 bugs triaged at https://launchpad.net/tripleo/+milestone/wallaby-rc1 at least some of those can be moved to fixed/closed nofix etc. Yes I know, #cool story bro thank you for reading ;) and for your help on bug status regards, marios [1] https://opendev.org/openstack/tripleo-ci/src/branch/master/scripts/move_bugs/move_bugs.py [2] https://gist.github.com/marios/b3155fe3b1318cc26bfa4bc15c764a26#gistcomment-3668289 From bkslash at poczta.onet.pl Thu Mar 18 16:04:41 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Thu, 18 Mar 2021 17:04:41 +0100 Subject: [rabbitmq][kolla-ansible] - RabbitMQ disconnects - 60s timeout Message-ID: <21A9D0D4-31CF-4B08-B358-5A7F63C44E3A@poczta.onet.pl> Hi, I have a problem with rabbitmq heartbeats timeout (described also here: https://bugzilla.redhat.com/show_bug.cgi?id=1711794 ). Although I have threads=1 the problem still persists, generating a lot of messages in logs: 2021-03-18 15:17:17.482 [error] <0.122.51> closing AMQP connection <0.122.51> (x.x.x.100:60456 -> x.x.x.100:5672 - mod_wsgi:699:9a813fcb-c29f-4886-82bc-00bf478b6b64): missed heartbeats from client, timeout: 60s 2021-03-18 15:17:17.484 [info] <0.846.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed 2021-03-18 15:18:15.918 [error] <0.150.51> closing AMQP connection <0.150.51> (x.x.x.111:41934 -> x.x.x.100:5672 - mod_wsgi:697:b608c7b8-9644-434e-93af-c00222c0a700): missed heartbeats from client, timeout: 60s 2021-03-18 15:18:15.920 [error] <0.153.51> closing AMQP connection <0.153.51> (x.x.x.111:41936 -> x.x.x.100:5672 - mod_wsgi:697:77348197-148b-41a6-928f-c5eddfab57c9): missed heartbeats from client, timeout: 60s 2021-03-18 15:18:15.920 [info] <0.1527.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed 2021-03-18 15:18:15.922 [info] <0.1531.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed 2021-03-18 15:20:16.080 [info] <0.2196.51> accepting AMQP connection <0.2196.51> (x.x.x.111:34826 -> x.x.x.100:5672) 2021-03-18 15:20:16.080 [info] <0.2199.51> accepting AMQP connection <0.2199.51> (x.x.x.111:34828 -> x.x.x.100:5672) I’ve set heartbeat = 600 in rabbitmq.conf and still get disconnections after 60s timeout… How to set proper timeout to avoid disconnections? Best regards Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu Mar 18 18:08:32 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 18 Mar 2021 11:08:32 -0700 Subject: [Neutron] [Designate] Private / Internal DNS Zones with custom records for i.e. service discovery In-Reply-To: <20210318135957.i5o4335m4c5rbc6w@yuggoth.org> References: <289b7ea1-5c62-f22b-1d2b-ef1daeb12292@inovex.de> <20210318135957.i5o4335m4c5rbc6w@yuggoth.org> Message-ID: Currently Designate does not support DNS views (split-horizon), so there is no way to tag records as internal vs. external. This is a widely requested enhancement. 
As Jeremy mentioned, there is a specification and proposed code for a version of split-horizon, though I'm not sure it meets your use case (This is a current stream of discussion on the patch). The current proposed patch requires the operator to define the internal and external IP address ranges. These are not user configurable. I think there is more design discussion needed on this topic and I plan to include it in our PTG agenda. For now, please feel free to review and comment on the existing patch. As an interim solution, you could create zones for the various purposes and manage them directly in Designate, it just wouldn't provide much automation. Michael On Thu, Mar 18, 2021 at 7:04 AM Jeremy Stanley wrote: > > On 2021-03-18 10:46:59 +0100 (+0100), Christian Rohmann wrote: > [...] > > is there any way to allow users to add their own records which > > then only resolve internally? > [...] > > Looking at the Designate API > > https://docs.openstack.org/api-ref/dns/?expanded=create-zone-detail#create-zone > > is does not seem to be an option to mark a zone as "internal" or > > "private". But maybe there is another way to add records to the > > internal zone? > > > > I am thinking of an only internally resolvable / valid DNS zone > > carrying records for i.e. service discovery / cluster forming. > [...] > > The traditional term for what you're describing is "split-horizon > DNS" (implemented via things like BIND's "views" mechanism). I see > there's a split_view zone type which is proposed in this spec: > > https://specs.openstack.org/openstack/designate-specs/specs/ussuri/split-view.html > > Poking in code review, it looks like it may be in progress: > > https://review.opendev.org/q/topic:bug/1875939 > > If this is of interest to you, please do help review and test the > feature to make sure it will meet your requirements. > -- > Jeremy Stanley From elod.illes at est.tech Thu Mar 18 18:24:06 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 18 Mar 2021 19:24:06 +0100 Subject: [Release-job-failures] Release of openstack/monasca-grafana-datasource for ref refs/tags/1.3.0 failed In-Reply-To: References: Message-ID: As fungi pointed out maybe an easy solution is to replace the nodejs4-publish-to-npm job to nodejs8-publish-to-npm [1]. Reviews are welcome :) [1] https://review.opendev.org/c/openstack/project-config/+/781536 Előd On 2021. 03. 16. 10:34, Herve Beraud wrote: > Here is a summary from our previous meeting [1] related to this topic. > > Apparently the error is caused by the old version of nodejs [2] > (usually lower than v6). > > This error is on an independent project [3] so normally the NodeJS > supported runtimes are the same that with the current series (AFAIK we > use NodeJS 10) [4] > > This job uses xenial and xenial seems to provide nodejs 4.2.6 [5] that > could explain why we see this error. > > However, this job inherits from `publish-openstack-artifacts` [6] and > this one defines a nodeset based on focal [7] so I wonder why we use > xenial here [8]. > Notice that focal comes with NodeJS 10.19 [4]. > > The last execution of this job (`release-openstack-javascript`) was > successful but it was 18 months ago [9], unfortunately logs aren't > longer available for these builds. > It could be worth seeing which version of ubuntu was used during this > period. > > Maybe the solution is simply moving this javascript job onto a nodeset > based on focal. > > Thoughts? 
> > [1] > eavesdrop.openstack.org/meetings/releaseteam/2021/releaseteam.2021-03-11-17.00.log.html#l-186 > > [2] > https://stackoverflow.com/questions/64414716/unexpected-token-in-yarn-installation > > [3] > https://releases.openstack.org/reference/release_models.html#independent > > [4] > https://governance.openstack.org/tc/reference/runtimes/wallaby.html#node-js-runtime-for-wallaby > > [5] https://pkgs.org/search/?q=nodejs > [6] > https://opendev.org/openstack/project-config/src/branch/master/zuul.d/jobs.yaml#L780 > > [7] > https://opendev.org/openstack/project-config/src/branch/master/zuul.d/jobs.yaml#L8 > > [8] > https://zuul.opendev.org/t/openstack/build/cdffd2a26a0d4a5b8137edb392fa5971/log/job-output.txt#743 > > [9] > https://zuul.opendev.org/t/openstack/builds?job_name=release-openstack-javascript > > > Le jeu. 11 mars 2021 à 12:42, Thierry Carrez > a écrit : > > We had a release job failure during the processing of the tag > event when > 1.3.0 was (successfully) pushed to > openstack/monasca-grafana-datasource. > > Tags on this repository trigger the release-openstack-javascript job, > which failed during pre playbook when trying to run yarn --version > with > the following error: > > /usr/share/yarn/lib/cli.js:46100 >    let { >        ^ > > SyntaxError: Unexpected token { >      at exports.runInThisContext (vm.js:53:16) >      at Module._compile (module.js:373:25) >      at Object.Module._extensions..js (module.js:416:10) >      at Module.load (module.js:343:32) >      at Function.Module._load (module.js:300:12) >      at Module.require (module.js:353:17) >      at require (internal/module.js:12:17) >      at Object. (/usr/share/yarn/bin/yarn.js:24:13) >      at Module._compile (module.js:409:26) >      at Object.Module._extensions..js (module.js:416:10) > > See https://zuul.opendev.org/t/openstack/build > > /cdffd2a26a0d4a5b8137edb392fa5971 > > This prevented the job from running (likely resulting in nothing > being > uploaded to NPM? Not a JS job specialist), which in turn prevented > announce-release job from announcing it. > > -- > Thierry Carrez (ttx) > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Thu Mar 18 18:49:10 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 18 Mar 2021 19:49:10 +0100 Subject: [neutron] Unit test jobs broken In-Reply-To: <20210317092304.lzkqj66r35pbhjbz@p1.localdomain> References: <20210317092304.lzkqj66r35pbhjbz@p1.localdomain> Message-ID: <2783278.PtLJcCXYeI@p1> Hi, Patch [1] is now merged so it should be fine to recheck Your patches now. Dnia środa, 17 marca 2021 10:23:04 CET Slawek Kaplonski pisze: > Hi, > > Our UT gate jobs are now broken due to new neutron-lib release. Fix for that > is already proposed in [1]. So if Your UT jobs are failing like e.g. in [2] > please don't recheck Your patch before [1] will be merged. > > [1] https://review.opendev.org/c/openstack/neutron/+/780802 > [2] > https:// 8ee33ec1a424812a1857-16390f05fde22eb723929856dcc38fcd.ssl.cf5.rackcdn > .com/779878/3/check/openstack-tox-py36/26ea91f/testr_results.html > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From eyalb1 at gmail.com Thu Mar 18 20:56:56 2021 From: eyalb1 at gmail.com (Eyal B) Date: Thu, 18 Mar 2021 22:56:56 +0200 Subject: [Vitrage] Cannot delete old entities on graph In-Reply-To: <4D95094A-DA7C-4FE7-A207-EAE1593F342A@poczta.onet.pl> References: <4D95094A-DA7C-4FE7-A207-EAE1593F342A@poczta.onet.pl> Message-ID: Hi As a workaround stop the vitrage graph process run vitrage-purge-data and start vitrage graph again Eyal On Thu, Mar 18, 2021, 17:10 Adam Tomas wrote: > Hi, > I’ve deployed Kolla-Ansible (Victoria) and after using it for some time I > have errors from Vitrage service: > > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init [-] Got > Exception for event {'vitrage_entity_type': 'consistency', > 'vitrage_datasource_action': 'update', 'vitrage_sample_date': > '2021-03-18T13:30:19Z', 'vitrage_event_type': 'delete_entity', > 'vitrage_id': 'a19330f1-49c1-4a97-9787-c80b645f51a6', 'id': > '93c1df77-dbb5-4ba7-9b7d-16fad97169f1', 'vitrage_type': 'cinder.volume', > 'vitrage_category': 'RESOURCE', 'is_real_vitrage_id': True}: KeyError: > 'vitrage_is_placeholder' > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init Traceback > (most recent call last): > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File > "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/graph_init.py", > line 146, in do_work > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init return > do_work_func(event) > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File > "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/graph_init.py", > line 111, in process_event > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init > self.processor.process_event(event) > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File > "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/processor/processor.py", > line 63, in process_event > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init > self.actions[entity.action](entity.vertex, entity.neighbors) > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File > "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/processor/processor.py", > line 149, in delete_entity > 2021-03-18 
13:30:19.873 7 ERROR vitrage.entity_graph.graph_init > PUtils.delete_placeholder_vertex(self.entity_graph, vertex) > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File > "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/processor/processor_utils.py", > line 68, in delete_placeholder_vertex > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init if not > vertex[VProps.VITRAGE_IS_PLACEHOLDER]: > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File > "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/graph/driver/elements.py", > line 24, in __getitem__ > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init return > self.properties[key] > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init KeyError: > 'vitrage_is_placeholder' > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init > > As I can see in Horizon, there are some old volumes , which in Cinder > database have proper (DELETED/DETACHED) status, but they’re still visible > on Entity Graph, and as in error message above - cannot be deleted by > vitrage-graph. How to delete such entities? > > Best regards, > Adam Tomas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Mar 19 01:59:47 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Thu, 18 Mar 2021 21:59:47 -0400 Subject: OSP13 - Queens - Inconsistent data between in DB between ipamallocation and ipallocation In-Reply-To: References: Message-ID: Hey Rodolfo, Just did, thanks! Laurent On Thu, Mar 18, 2021 at 9:46 AM Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > Hello Laurent: > > Can you, in the Launchpad bug, describe how did you find this issue and > what did you do to fix it? > > If you don't mind, please include as much info as possible: version, logs, > steps to reproduce, etc. > > Thank you in advance. > > On Mon, Mar 15, 2021 at 2:36 PM Laurent Dumont > wrote: > >> Hey everyone, >> >> We just troubleshooted (and fixed) a weird issue with ipamallocation. It >> looked like ipallocations we're not properly removed from the >> ipamallocation table. >> >> This caused port creations to fail with an error about the IP being >> already allocated (when there wasn't an actual port using the IP). >> >> We found this old issue https://bugs.launchpad.net/neutron/+bug/1884532 >> but not a whole lot of details in there. >> >> Has anyone seen this before? >> >> Thanks! >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Fri Mar 19 04:42:06 2021 From: sorrison at gmail.com (Sam Morrison) Date: Fri, 19 Mar 2021 15:42:06 +1100 Subject: [Cinder] Review pretty please Message-ID: Hi Cinder cores, I've had a simple review [1] waiting for several months, could I please get some feedback on this. Thanks! Sam [1] https://review.opendev.org/c/openstack/cinder/+/764875 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.carden at gmail.com Fri Mar 19 06:20:24 2021 From: mike.carden at gmail.com (Mike Carden) Date: Fri, 19 Mar 2021 17:20:24 +1100 Subject: [Cinder] Review pretty please In-Reply-To: References: Message-ID: I'm not a Cinder dev, but I could not help looking at this suggested change. 
To me at least, it's an insta-merge, but I think it might get overlooked because the Commit Message doesn't detail "What you would now see" versus "What you should see", and the cognitive load on reviewers is high enough that it gets ignored. +1 from someone who has no right to say so. -- MC On Fri, Mar 19, 2021 at 3:43 PM Sam Morrison wrote: > Hi Cinder cores, > > I've had a simple review [1] waiting for several months, could I please > get some feedback on this. > > Thanks! > Sam > > [1] https://review.opendev.org/c/openstack/cinder/+/764875 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Mar 19 06:43:39 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 19 Mar 2021 07:43:39 +0100 Subject: [release] Release countdown for week R-3 Mar 22 - Mar 26 Message-ID: Development Focus ----------------- The Release Candidate (RC) deadline is next Thursday, 25 March, 2021. Work should be focused on fixing any release-critical bugs. General Information ------------------- All deliverables released under a cycle-with-rc model should have a first release candidate by the end of the week, from which a stable/wallaby branch will be cut. This branch will track the Wallaby release. Once stable/wallaby has been created, the master branch will be ready to switch to Xena development. While the master branch will no longer be feature-frozen, please prioritize any work necessary for completing Wallaby plans. Release-critical bugfixes will need to be merged in the master branch first, then backported to the stable/wallaby branch before a new release candidate can be proposed. Actions ------- Early in the week, the release team will be proposing RC1 patches for all cycle-with-rc projects, using the latest commit from the master branch. If your team is ready to go for cutting RC1, please let us know by leaving a +1 on these patches. If there are still a few more patches needed before RC1, you can -1 the patch and update it later in the week with the new commit hash you would like to use. Remember, stable/wallaby branches will be created with this, so you will want to make sure you have what you need included to avoid needing to backport changes from the master branch (which will technically then be Xena) to this stable branch for any additional RCs before the final release. The release team will also be proposing releases for any deliverable following a cycle-with-intermediary model that has not produced any Wallaby release so far. Finally, now is a good time to finalize release highlights. Release highlights help shape the messaging around the release and make sure that your work is properly represented. 
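For teams that have not added highlights before, they live next to the release requests in the openstack/releases repository; a minimal sketch (the deliverable file name is just an example):

# deliverables/wallaby/example-deliverable.yaml in openstack/releases
cycle-highlights:
  - |
    One short, user-facing sentence per notable Wallaby change; these end up
    on the series highlights page.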
Upcoming Deadlines & Dates -------------------------- RC1 deadline: 25 March, 2021 (R-3 week) Final RC deadline: 08 April, 2021 (R-1 week) Final Wallaby release: 14 April, 2021 Xena virtual PTG: 19 - 23 April, 2021 Thanks for your reading -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Mar 19 06:51:14 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 19 Mar 2021 07:51:14 +0100 Subject: [Release-job-failures] Release of openstack/monasca-grafana-datasource for ref refs/tags/1.3.0 failed In-Reply-To: References: Message-ID: Thanks Előd! @fungi: Can we try to reenqueue the job? Le jeu. 18 mars 2021 à 19:26, Előd Illés a écrit : > As fungi pointed out maybe an easy solution is to replace the > nodejs4-publish-to-npm job to nodejs8-publish-to-npm [1]. > > Reviews are welcome :) > > [1] https://review.opendev.org/c/openstack/project-config/+/781536 > > Előd > > > On 2021. 03. 16. 10:34, Herve Beraud wrote: > > Here is a summary from our previous meeting [1] related to this topic. > > Apparently the error is caused by the old version of nodejs [2] (usually > lower than v6). > > This error is on an independent project [3] so normally the NodeJS > supported runtimes are the same that with the current series (AFAIK we use > NodeJS 10) [4] > > This job uses xenial and xenial seems to provide nodejs 4.2.6 [5] that > could explain why we see this error. > > However, this job inherits from `publish-openstack-artifacts` [6] and this > one defines a nodeset based on focal [7] so I wonder why we use xenial here > [8]. > Notice that focal comes with NodeJS 10.19 [4]. > > The last execution of this job (`release-openstack-javascript`) was > successful but it was 18 months ago [9], unfortunately logs aren't longer > available for these builds. > It could be worth seeing which version of ubuntu was used during this > period. > > Maybe the solution is simply moving this javascript job onto a nodeset > based on focal. > > Thoughts? 
> > [1] > eavesdrop.openstack.org/meetings/releaseteam/2021/releaseteam.2021-03-11-17.00.log.html#l-186 > [2] > https://stackoverflow.com/questions/64414716/unexpected-token-in-yarn-installation > [3] > https://releases.openstack.org/reference/release_models.html#independent > [4] > https://governance.openstack.org/tc/reference/runtimes/wallaby.html#node-js-runtime-for-wallaby > [5] https://pkgs.org/search/?q=nodejs > [6] > https://opendev.org/openstack/project-config/src/branch/master/zuul.d/jobs.yaml#L780 > [7] > https://opendev.org/openstack/project-config/src/branch/master/zuul.d/jobs.yaml#L8 > [8] > https://zuul.opendev.org/t/openstack/build/cdffd2a26a0d4a5b8137edb392fa5971/log/job-output.txt#743 > [9] > https://zuul.opendev.org/t/openstack/builds?job_name=release-openstack-javascript > > Le jeu. 11 mars 2021 à 12:42, Thierry Carrez a > écrit : > >> We had a release job failure during the processing of the tag event when >> 1.3.0 was (successfully) pushed to openstack/monasca-grafana-datasource. >> >> Tags on this repository trigger the release-openstack-javascript job, >> which failed during pre playbook when trying to run yarn --version with >> the following error: >> >> /usr/share/yarn/lib/cli.js:46100 >> let { >> ^ >> >> SyntaxError: Unexpected token { >> at exports.runInThisContext (vm.js:53:16) >> at Module._compile (module.js:373:25) >> at Object.Module._extensions..js (module.js:416:10) >> at Module.load (module.js:343:32) >> at Function.Module._load (module.js:300:12) >> at Module.require (module.js:353:17) >> at require (internal/module.js:12:17) >> at Object. (/usr/share/yarn/bin/yarn.js:24:13) >> at Module._compile (module.js:409:26) >> at Object.Module._extensions..js (module.js:416:10) >> >> See https://zuul.opendev.org/t/openstack/build >> /cdffd2a26a0d4a5b8137edb392fa5971 >> >> This prevented the job from running (likely resulting in nothing being >> uploaded to NPM? Not a JS job specialist), which in turn prevented >> announce-release job from announcing it. 
>> >> -- >> Thierry Carrez (ttx) >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Mar 19 07:20:13 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 19 Mar 2021 08:20:13 +0100 Subject: [neutron] Drivers meeting agenda - 19.03.2021 Message-ID: <4259705.7RyFcYuWY7@p1> Hi, First of all sorry for sending it so late - I simply forgot to send this email yesterday :) For today's drivers meeting we have 1 RFE to discuss: * https://bugs.launchpad.net/neutron/+bug/1904559 - it is continuation of the discussion from last week. Michael Johnson from Designate team provided his feedack on it so I think that based on that we can make final decision about that RFE on our today's meeting. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Mar 19 07:31:06 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 19 Mar 2021 08:31:06 +0100 Subject: [neutron] Drivers meeting agenda - 19.03.2021 In-Reply-To: <4259705.7RyFcYuWY7@p1> References: <4259705.7RyFcYuWY7@p1> Message-ID: <4953130.vsqod696Dq@p1> Hi, Dnia piątek, 19 marca 2021 08:20:13 CET Slawek Kaplonski pisze: > Hi, > > First of all sorry for sending it so late - I simply forgot to send this email > yesterday :) > For today's drivers meeting we have 1 RFE to discuss: > * https://bugs.launchpad.net/neutron/+bug/1904559 - it is continuation of the > discussion from last week. 
Michael Johnson from Designate team provided his > feedback on it so I think that based on that we can make final decision about > that RFE on our today's meeting. > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat One more thing: I would also like to talk about patch [1]. Long story short, the problem we have is that with the new secure RBAC policies we have personas like SYSTEM_ADMIN and PROJECT_ADMIN. SYSTEM_ADMIN doesn't have any project_id in its context. We set some policies, e.g. creating a network with provider:physical_network given, to be available only to the SYSTEM_ADMIN user. The issue with that is that such a SYSTEM_ADMIN user always needs to pass --project_id as a parameter if they want to create such a network, because the network has to have an owner and there is no project_id in the context of such a request. My question for discussion is: should we relax our default policies as patch [1] proposes, or not? We discussed it at last Tuesday's team meeting but I'm still not convinced we should really do it. So I want to discuss it once again today :) [1] https://review.opendev.org/c/openstack/neutron/+/780978 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From bkslash at poczta.onet.pl Fri Mar 19 08:42:14 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Fri, 19 Mar 2021 09:42:14 +0100 Subject: [Vitrage] Cannot delete old entities on graph In-Reply-To: References: <4D95094A-DA7C-4FE7-A207-EAE1593F342A@poczta.onet.pl> Message-ID: Hi - thanks, that worked :) Actually I was using vitrage-purge-data, but I didn't stop/start the Vitrage service (I use Kolla, so Vitrage runs inside a docker container and stopping the process would of course stop the container; instead I did vitrage-purge-data && kill $VITRAGE_MASTER_PID). Thanks again! Best regards, Adam > Message written by Eyal B on 18.03.2021, at
21:56: > > Hi > > As a workaround stop the vitrage graph process run vitrage-purge-data and start vitrage graph again > > Eyal > > On Thu, Mar 18, 2021, 17:10 Adam Tomas > wrote: > Hi, > I’ve deployed Kolla-Ansible (Victoria) and after using it for some time I have errors from Vitrage service: > > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init [-] Got Exception for event {'vitrage_entity_type': 'consistency', 'vitrage_datasource_action': 'update', 'vitrage_sample_date': '2021-03-18T13:30:19Z', 'vitrage_event_type': 'delete_entity', 'vitrage_id': 'a19330f1-49c1-4a97-9787-c80b645f51a6', 'id': '93c1df77-dbb5-4ba7-9b7d-16fad97169f1', 'vitrage_type': 'cinder.volume', 'vitrage_category': 'RESOURCE', 'is_real_vitrage_id': True}: KeyError: 'vitrage_is_placeholder' > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init Traceback (most recent call last): > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/graph_init.py", line 146, in do_work > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init return do_work_func(event) > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/graph_init.py", line 111, in process_event > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init self.processor.process_event(event) > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/processor/processor.py", line 63, in process_event > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init self.actions[entity.action](entity.vertex, entity.neighbors) > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/processor/processor.py", line 149, in delete_entity > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init PUtils.delete_placeholder_vertex(self.entity_graph, vertex) > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/entity_graph/processor/processor_utils.py", line 68, in delete_placeholder_vertex > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init if not vertex[VProps.VITRAGE_IS_PLACEHOLDER]: > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init File "/var/lib/kolla/venv/lib/python3.8/site-packages/vitrage/graph/driver/elements.py", line 24, in __getitem__ > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init return self.properties[key] > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init KeyError: 'vitrage_is_placeholder' > 2021-03-18 13:30:19.873 7 ERROR vitrage.entity_graph.graph_init > > As I can see in Horizon, there are some old volumes , which in Cinder database have proper (DELETED/DETACHED) status, but they’re still visible on Entity Graph, and as in error message above - cannot be deleted by vitrage-graph. How to delete such entities? > > Best regards, > Adam Tomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From gchamoul at redhat.com Fri Mar 19 09:58:43 2021 From: gchamoul at redhat.com (=?utf-8?B?R2HDq2w=?= Chamoulaud) Date: Fri, 19 Mar 2021 10:58:43 +0100 Subject: [tripleo] Nominate David J. 
Peacock (dpeacock) for Validation Framework Core In-Reply-To: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> Message-ID: <20210319095843.gjw3tj76gh7pvtfp@gchamoul-mac> Ladies and Gentlemen, First, thanks for your votes! With the agreement of Marios, I've added David as part of the tripleo-core group and his voting rights will be exercised only on the following repositories: - tripleo-validations - validations-common - validations-libs - python-tripleoclient (only on VF related patches touching the tripleo_validator.py CLI Code) However, it would be great to create a dedicated core group for the Validation Framework team including those repositories. And thus being more compliant with what we do with others TripleO Teams. Marios, any thoughts? Congratulations David and thanks again for your hard work! On 09/Mar/2021 15:53, Gaël Chamoulaud wrote: > Hi TripleO Devs, > > David is already a key member of our team since a long time now, he > provided all the needed ansible roles for the Validation Framework into > tripleo-ansible-operator. He continuously provides excellent code reviews and he > is a source of great ideas for the future of the Validation Framework. That's > why we would highly benefit from his addition to the core reviewer team. > > Assuming that there are no objections, we will add David to the core team next > week. > > Thanks, David, for your excellent work! > > -- > Gaël Chamoulaud - (He/Him/His) > .::. Red Hat .::. OpenStack .::. > .::. DFG:DF Squad:VF .::. -- Gaël Chamoulaud - (He/Him/His) .::. Red Hat .::. OpenStack .::. .::. DFG:DF Squad:VF .::. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dev.faz at gmail.com Fri Mar 19 10:45:18 2021 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Fri, 19 Mar 2021 11:45:18 +0100 Subject: [ops] Bandwidth problem on computes In-Reply-To: <7f573860-4ef8-39e0-d475-6ec2319c0880@cc.in2p3.fr> References: <7f573860-4ef8-39e0-d475-6ec2319c0880@cc.in2p3.fr> Message-ID: Hi, can you repeat your tests with * iperf from compute1 -> compute2 * iperf from compute2 -> compute1 * ip -r output of both nodes * watching top while doing the iperf and reporting the process using most cpu? * provding ethtool -k for all nics in compute1+2 Fabian Am Di., 16. März 2021 um 14:49 Uhr schrieb Jahson Babel : > > Hello everyone, > I have a bandwidth problem between the computes nodes of an openstack > cluster. > This cluster runs on Rocky version with OpenVSwitch. > To simplify I'll just pick 3 servers, one controller and two computes > nodes all connected to the same switch. > Every server is configured with two 10G links. Those links are > configured in LACP /teaming. > > From what I understand of teaming and this configuration I should be > able to get 10Gbps between all three nodes. > But if I iperf we are way below this : > > compute1 # sudo iperf3 -c compute2 -p 5201 > Connecting to host compute2, port 5201 > [ 4] local X.X.X.X port 44946 connected to X.X.X.X port 5201 > [ ID] Interval Transfer Bandwidth Retr Cwnd > [ 4] 0.00-1.00 sec 342 MBytes 2.87 Gbits/sec 137 683 KBytes > [ 4] 1.00-2.00 sec 335 MBytes 2.81 Gbits/sec 8 501 KBytes > > Plus the problem seems to be only present with incoming traffic. Which > mean I can almost get the full 10gbps if I iperf from a compute to the > controller. 
> > compute1 # sudo iperf3 -c controller -p 5201 > Connecting to host controller, port 5201 > [ 4] local X.X.X.X port 39008 connected to X.X.X.X port 5201 > [ ID] Interval Transfer Bandwidth Retr Cwnd > [ 4] 0.00-1.00 sec 1.10 GBytes 9.41 Gbits/sec 0 691 KBytes > [ 4] 1.00-2.00 sec 1.09 GBytes 9.38 Gbits/sec 0 803 KBytes > > If I do the opposite I get the same results I was getting between the 2 > computes. > From the tests we've done it seems related to the openstack's services, > specifically neutron or OpenVSwitch. From the time those services are > running we can't get the full bandwidth. > Stopping the services won't fix the issue, in our case removing the > packages and rebooting is the only way to obtain the full bandwidth > between computes. > > I voluntarily didn't mention VMs to simplify the question but of course > this behavior can also be observed in VMs > > Knowing that we can achieve 10Gbps it doesn't seems related to the > hardware nor the OS. That why we suspect OpenStack's services. > But I couldn't find any evidence or misconfiguration that could confirm > that. > So if anyone got some hints about that kind of setup and/or how mitigate > bandwidth decrease I would appreciate. > Let know if you need more info. > Thanks in advance, > > Jahson > > From mark at stackhpc.com Fri Mar 19 10:56:01 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 19 Mar 2021 10:56:01 +0000 Subject: [keystone][kolla][horizon] versioned or unversioned keystone endpoints? Message-ID: Hi, There seems to be some inconsistency around whether Keystone endpoints in the catalog should be versioned or not. Specifically, should they have a /v3 suffix? I did a survey of code using keystone-manage bootstrap via codesearch: * Kolla Ansible: unversioned * DevStack: unversioned * puppet-keystone: unversioned * Keystone install guides: versioned * OSA: unversioned I imagine that when v2 was around, the endpoint needed to be unversioned. But was there a point when it became recommended to add a version? I'm asking because of an issue in horizon when downloading a clouds.yaml file for application credentials. If the auth_url field is unversioned, then some older client libraries will fail with a Not Found error during e.g. openstack server list. I tested using Stein and Train U-C. The former fails, the latter works. If I use Stein U-C, then update keystoneauth1 to 3.17.3 (from Train U-C), it works. Thanks, Mark From eblock at nde.ag Fri Mar 19 12:16:19 2021 From: eblock at nde.ag (Eugen Block) Date: Fri, 19 Mar 2021 12:16:19 +0000 Subject: [nova] Train: how to use newer API microversion In-Reply-To: <20210318151056.Horde.DbugMnO_MHC_FTfWQQLCJF5@webmail.nde.ag> References: <20210318141049.Horde.z7gBKZkNsdpWduDhCMYW75H@webmail.nde.ag> <17845cb6fd5.fa9c27a1737010.1699855162024585304@ghanshyammann.com> <20210318144844.Horde.Npl7DgmvUtIPNBSRFOssR4U@webmail.nde.ag> <20210318151056.Horde.DbugMnO_MHC_FTfWQQLCJF5@webmail.nde.ag> Message-ID: <20210319121619.Horde.OV4lywdGI3FszqcRe2H-fGz@webmail.nde.ag> Alright, I made it work it seems. After I added the new endpoints (v2.1) and disabled the old ones (v2) I restarted nova-api service. This allowed me to use microversion 2.37 to use the option "--nic none", so that works as expected now. I just don't know yet if there are any clients that require v2 so for now I let them in "disabled" state. 
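For anyone following along, the commands involved look roughly like this (region, host and port are placeholders for your own deployment):

# register the current compute endpoint (v2.1); the legacy v2 one can stay disabled
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
# request microversion 2.37 (or later) so that --nic none is accepted
openstack --os-compute-api-version 2.37 server create \
    --flavor <flavor> --image <image> --nic none <server-name>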
Thanks, Eugen Zitat von Eugen Block : >> If you (or your end users) rely on the legacy APIs, you can have both >> like this (192.0.2.40 is the example IP address of the API host; the >> /compute could be :some_port if you prefer that): > > I started with disabling the old endpoints but then I couldn't use > compute at all to create a new instance. There must be something > still relying on the old endpoint but I can't find it right now. > I'll give it another shot tomorrow. Thanks for your input. > > > Zitat von Radosław Piliszek : > >> On Thu, Mar 18, 2021 at 3:50 PM Eugen Block wrote: >>> I also found that already, but it doesn't describe how I can move from >>> v2 to v2.1 in order to use only the newer API. Do you have any >>> information about that? >> >> If you (or your end users) rely on the legacy APIs, you can have both >> like this (192.0.2.40 is the example IP address of the API host; the >> /compute could be :some_port if you prefer that): >> >> +----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------------------------------+ >> | ID | Region | Service Name | Service Type | Enabled | >> Interface | URL | >> +----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------------------------------+ >> | xxx | RegionOne | nova | compute | True | public >> | http://192.0.2.40/compute/v2.1 | >> | xxx | RegionOne | nova_legacy | compute_legacy | True | public >> | http://192.0.2.40/compute/v2/$(project_id)s | >> +----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------------------------------+ >> >> If you don't need the legacy APIs, you can delete their endpoints. >> >> -yoctozepto From jahson.babel at cc.in2p3.fr Fri Mar 19 13:21:46 2021 From: jahson.babel at cc.in2p3.fr (Jahson Babel) Date: Fri, 19 Mar 2021 14:21:46 +0100 Subject: [ops] Bandwidth problem on computes In-Reply-To: References: <7f573860-4ef8-39e0-d475-6ec2319c0880@cc.in2p3.fr> Message-ID: <792fa0e0-8401-a230-3362-66a7b9c009a8@cc.in2p3.fr> Hi Fabian, Thank you for taking the time to respond. Here are all the things you asked : - compute1 => compute2 iperf, route, top : https://pastebin.com/fZ54xx19 - compute2 => compute1 iperf, route, top : https://pastebin.com/EZ2FZPCq - compute1 nic ethtool : https://pastebin.com/MVVFVuDj - compute2 nic ethtool : https://pastebin.com/Tb9zQVaf For the ethtool part consider picking only compute1 because I've already tried to play a little bit with GRO/GSO on compute2 without seeing any improvements so far. That's why it is different. The configuration on the compute1 is the default on our hypervisors. Plus I didn't make the ethtool on the VMs's interfaces. I picked the teaming interface, the management and the tunnel interface. In my opinion the tunnel interfaces should not matter but I included anyway. Can the high load from ksoftirqd lead to a such impact on bandwidth ? Let me know if you need something else. Jahson On 19/03/2021 11:45, Fabian Zimmermann wrote: > Hi, > > can you repeat your tests with > > * iperf from compute1 -> compute2 > * iperf from compute2 -> compute1 > * ip -r output of both nodes > * watching top while doing the iperf and reporting the process using most cpu? > * provding ethtool -k for all nics in compute1+2 > > Fabian > > Am Di., 16. 
März 2021 um 14:49 Uhr schrieb Jahson Babel > : >> Hello everyone, >> I have a bandwidth problem between the computes nodes of an openstack >> cluster. >> This cluster runs on Rocky version with OpenVSwitch. >> To simplify I'll just pick 3 servers, one controller and two computes >> nodes all connected to the same switch. >> Every server is configured with two 10G links. Those links are >> configured in LACP /teaming. >> >> From what I understand of teaming and this configuration I should be >> able to get 10Gbps between all three nodes. >> But if I iperf we are way below this : >> >> compute1 # sudo iperf3 -c compute2 -p 5201 >> Connecting to host compute2, port 5201 >> [ 4] local X.X.X.X port 44946 connected to X.X.X.X port 5201 >> [ ID] Interval Transfer Bandwidth Retr Cwnd >> [ 4] 0.00-1.00 sec 342 MBytes 2.87 Gbits/sec 137 683 KBytes >> [ 4] 1.00-2.00 sec 335 MBytes 2.81 Gbits/sec 8 501 KBytes >> >> Plus the problem seems to be only present with incoming traffic. Which >> mean I can almost get the full 10gbps if I iperf from a compute to the >> controller. >> >> compute1 # sudo iperf3 -c controller -p 5201 >> Connecting to host controller, port 5201 >> [ 4] local X.X.X.X port 39008 connected to X.X.X.X port 5201 >> [ ID] Interval Transfer Bandwidth Retr Cwnd >> [ 4] 0.00-1.00 sec 1.10 GBytes 9.41 Gbits/sec 0 691 KBytes >> [ 4] 1.00-2.00 sec 1.09 GBytes 9.38 Gbits/sec 0 803 KBytes >> >> If I do the opposite I get the same results I was getting between the 2 >> computes. >> From the tests we've done it seems related to the openstack's services, >> specifically neutron or OpenVSwitch. From the time those services are >> running we can't get the full bandwidth. >> Stopping the services won't fix the issue, in our case removing the >> packages and rebooting is the only way to obtain the full bandwidth >> between computes. >> >> I voluntarily didn't mention VMs to simplify the question but of course >> this behavior can also be observed in VMs >> >> Knowing that we can achieve 10Gbps it doesn't seems related to the >> hardware nor the OS. That why we suspect OpenStack's services. >> But I couldn't find any evidence or misconfiguration that could confirm >> that. >> So if anyone got some hints about that kind of setup and/or how mitigate >> bandwidth decrease I would appreciate. >> Let know if you need more info. >> Thanks in advance, >> >> Jahson >> >> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 2964 bytes Desc: S/MIME Cryptographic Signature URL: From hberaud at redhat.com Fri Mar 19 14:09:27 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 19 Mar 2021 15:09:27 +0100 Subject: [oslo] Wallaby branches are now cut for oslo world Message-ID: Hello Osloers, As $subject (FYI) Thanks for your attention -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Mar 19 14:36:20 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 19 Mar 2021 10:36:20 -0400 Subject: [Cinder] Review pretty please In-Reply-To: References: Message-ID: On 3/19/21 12:42 AM, Sam Morrison wrote: > Hi Cinder cores, > > I've had a simple review [1] waiting for several months, could I please > get some feedback on this. Unfortunately, we're in the string freeze now so we can't approve the change until after RC-1 when the stable branch is cut. Also, left a suggestion for a revision on the review. Hopefully our monthly "Festival of XS Reviews" will prevent this situation from happening again. We don't want to discourage small changes like yours that actually have high impact because they're user facing. > > Thanks! > Sam > > [1] https://review.opendev.org/c/openstack/cinder/+/764875 > From marios at redhat.com Fri Mar 19 15:50:20 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 19 Mar 2021 17:50:20 +0200 Subject: [tripleo] Nominate David J. Peacock (dpeacock) for Validation Framework Core In-Reply-To: <20210319095843.gjw3tj76gh7pvtfp@gchamoul-mac> References: <20210309145322.p6op7bqzzbguryqs@gchamoul-mac> <20210319095843.gjw3tj76gh7pvtfp@gchamoul-mac> Message-ID: Hey Gael On Fri, Mar 19, 2021 at 12:00 PM Gaël Chamoulaud wrote: > > Ladies and Gentlemen, > > First, thanks for your votes! > > With the agreement of Marios, I've added David as part of the tripleo-core group > and his voting rights will be exercised only on the following repositories: thanks for adding David I can see he is in the tripleo-core group members list at [1]. > > - tripleo-validations > - validations-common > - validations-libs > - python-tripleoclient (only on VF related patches touching the > tripleo_validator.py CLI Code) > > However, it would be great to create a dedicated core group for the Validation > Framework team including those repositories. And thus being more compliant with > what we do with others TripleO Teams. Marios, any thoughts? > As briefly discussed this morning on #tripleo yeah I think it makes sense. 
I just did some digging about that so a couple of thoughts; these repos are under tripleo governance [2] so first thing so we can create a new group and add the tripleo-core group as members. I'm not sure how we create a new tripleo-validation (or whatever name) gerrit group yet though so there's that ;)... I couldn't quickly find info about that in hound [3] but I'm sure we can work that out _eventually_ Then I think we need to add a file like that [4] for the validation repos to set the access for the new group. Let's see if there are any other thoughts or comments about that proposal before we proceed anyway, regards, marios [1] https://review.opendev.org/admin/groups/0319cee8020840a3016f46359b076fa6b6ea831a,members [2] https://opendev.org/openstack/governance/src/commit/98614886e2a93c5981be4fc9508ce31f8fcd2d61/reference/projects.yaml#L3150-L3158 [3] https://codesearch.opendev.org/?q=tripleo-core&i=nope&files=&excludeFiles=&repos= [4] https://opendev.org/openstack/project-config/src/branch/master/gerrit/acls/openstack/puppet-tripleo.config From fungi at yuggoth.org Fri Mar 19 16:59:09 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 19 Mar 2021 16:59:09 +0000 Subject: [Release-job-failures] Release of openstack/monasca-grafana-datasource for ref refs/tags/1.3.0 failed In-Reply-To: References: Message-ID: <20210319165909.7xiey7glo4mep5nq@yuggoth.org> On 2021-03-19 07:51:14 +0100 (+0100), Herve Beraud wrote: [...] > @fungi: Can we try to reenqueue the job? [...] In doing so, we discovered that Ubuntu Focal doesn't like and isn't covered by the https://deb.nodesource.com/node_8.x/ package repository, so I've proposed a pair of changes to try using Node 10 instead, which should (in theory) work on Focal: https://review.opendev.org/781825 https://review.opendev.org/781826 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From whayutin at redhat.com Fri Mar 19 17:54:08 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 19 Mar 2021 11:54:08 -0600 Subject: [tripleo][ci] ovb featureset01/35 and other jobs are down Message-ID: Greetings, ATM, the RDO team is working w/ Vexxhost to diagnose an outage in vexxhost that has caused various problems w/ heat stacks and now overcloud node registration and introspection. The issue is tracked w/ vexx and https://bugs.launchpad.net/tripleo/+bug/1920101. The third party check jobs are not expected to pass at this time. We'll follow up when we have more information. Thank you -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Fri Mar 19 20:05:53 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 19 Mar 2021 15:05:53 -0500 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <177b29d5bec.f253e1e0846575.1811981491977105734@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> <20210120224518.qslnzm55l77xdrry@yuggoth.org> <177b29d5bec.f253e1e0846575.1811981491977105734@ghanshyammann.com> Message-ID: <1784c18215e.10d2ece11817607.3943316040810905641@ghanshyammann.com> ---- On Wed, 17 Feb 2021 18:49:53 -0600 Ghanshyam Mann wrote ---- > ---- On Wed, 20 Jan 2021 16:45:19 -0600 Jeremy Stanley wrote ---- > > On 2021-01-20 07:26:05 +0000 (+0000), Lucian Petrut wrote: > > > For Windows related projects such as os-win and networking-hyperv, > > > we decided to keep the lower constraints job but remove indirect > > > dependencies from the lower-constraints.txt file. > > > > > > This made it much easier to maintain and it allows us to at least cover > > > direct dependencies. I suggest considering this approach instead of > > > completely dropping the lower constraints job, whenever possible. > > > Another option might be to make it non-voting while it’s getting fixed. > > [...] > > > > The fewer dependencies a project has, the easier this becomes. I'm > > not against projects continuing to do it if they can get it to work, > > but wouldn't want us to pressure folks to spend undue effort on it > > when they already have a lot more on their plates. I can understand > > where for projects with a very large set of direct dependencies this > > still has the problem that your stated minimums may conflict with > > (semi-circular) dependencies declared elsewhere in the transitive > > dependency set outside your lower-constraints.txt/requirements.txt > > file. > > I tried with the direct deps with cinder, nova and placement by keeping only > deps what we have in requirements.txt, test-requirements.txt, and 'extras' in setup.cfg. > We need to get all those three to pass the requirement-check job as it checks for > all those deps to be in lower-constraints.txt > > - https://review.opendev.org/q/topic:%22l-c-direct-deps-only%22+(status:open%20OR%20status:merged) > > Some nova test failing which might need some deps version bump but overall It seems direct deps work > fine and removes a lot of deps from l-c file (77 from nova, 64 from cinder). With that testing, I am ok > with that proposal now (as in my experience on community goals effort, I spent 50% of the time on > fixing the indirect deps ). > > I am summarizing the discussion and earlier proposal below, please let us know if that works fine for everyone > and accordingly, we can take the next step to document this somewhere and project start working on this. > > - Only keep direct deps in lower-constraints.txt > - Remove the lower constraints testing from all stable branches. TC continued the discussion on the previous week[1] and this week's meeting[2]. With all the current resources and compute bandwidth and effort of maintaining this, it is fine to drop if the project wants. In the Project team guide, lower bound testing is already mentioned as an optional thing to test[3]. 
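As a rough illustration of the direct-dependencies-only approach, something like this flags entries in lower-constraints.txt that are not listed as direct requirements (a sketch; package-name normalization such as dashes vs underscores is glossed over):

cut -d'=' -f1 lower-constraints.txt | tr '[:upper:]' '[:lower:]' | sort > /tmp/lc
sed -e '/^#/d' -e 's/[><=!;[].*//' requirements.txt test-requirements.txt \
    | tr '[:upper:]' '[:lower:]' | sort -u > /tmp/direct
comm -23 /tmp/lc /tmp/direct    # likely indirect dependencies, candidates to drop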
We can clarify it more and in the testing section too, I am doing that in - https://review.opendev.org/c/openstack/project-team-guide/+/781900 As summary: This is up to the project to maintain and test the lower bounds if they want to drop it then also it is entirely fine. Feel free to reach out to TC on #openstack-tc for further query/clarification. [1] http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-03-11-15.00.log.html#l-126 [2] http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-03-18-15.01.log.html#l-146 [3] 3rd paragraph in https://docs.openstack.org/project-team-guide/dependency-management.html#solution -gmann > > -gmann > > > -- > > Jeremy Stanley > > > > From Stefan.Kelber at gmx.de Fri Mar 19 21:16:22 2021 From: Stefan.Kelber at gmx.de (Stefan Kelber) Date: Fri, 19 Mar 2021 22:16:22 +0100 Subject: OpenStack Victoria - Kolla-Ansible - Cinder - iSCSI-backend - Update driver status failed: config is uninitialized. Message-ID: An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sat Mar 20 09:36:39 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 20 Mar 2021 10:36:39 +0100 Subject: OpenStack Victoria - Kolla-Ansible - Cinder - iSCSI-backend - Update driver status failed: config is uninitialized. In-Reply-To: References: Message-ID: Hello Stefan, try adding in cinder-volume config, DEFAULT section: debug = true and then attaching its logs. That sometimes unveils the underlying issue. -yoctozepto On Sat, Mar 20, 2021 at 12:41 AM Stefan Kelber wrote: > > Hello List, > > > > i have trouble connecting an INFORTREND iSCSI array as external iSCSI backend to Cinder in OpenStack Victoria, deployed via Kolla-Ansible (All in One, it's still PoC). > > > > I use a volume driver provided by the manufacturer, which is listed as "completely" supported by OpenStack. > > And i can use it to log in to the array using the client. > > iSCSI session is established. > > Firewall etc off, as recommended. > > I can even start cinder-volume and as long as this service stays up, i can create volumes and remove them, but always they are reported in "Error" state. > > Soon after having started the service cinder-volume, the service shuts down. > > When reading the logfiles, it looks like the following error is the first one to show up, when starting the cinder-volume: > > > > cinder-volume.log: > > WARNING cinder.volume.manager [req-1290dad2-d09e-40f1-9b38-2f4b5e7accb8 - - - - -] Update driver status failed: (config name IFT-ISCSI) is uninitialized. > > ERROR cinder.service [-] Manager for service cinder-volume host at IFT-ISCSI is reporting problems, not sending heartbeat. Service will appear "down". > > DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:266 > > DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:282 > > > > cinder.conf is setup according to documentation by the manufacturer, and matches what i found similar from DELL and HITACHI. > > Googling for this error unveils that this is a very frequent error, but non of the proposed solutions works for me. > > Does any body have a clue where to look? 
> > I am running out of ideas, unfortunately > > Best > > SK From fungi at yuggoth.org Sat Mar 20 14:45:55 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 20 Mar 2021 14:45:55 +0000 Subject: [Release-job-failures] Release of openstack/monasca-grafana-datasource for ref refs/tags/1.3.0 failed In-Reply-To: <20210319165909.7xiey7glo4mep5nq@yuggoth.org> References: <20210319165909.7xiey7glo4mep5nq@yuggoth.org> Message-ID: <20210320144554.t7qvrsyjtqku5pe3@yuggoth.org> On 2021-03-19 16:59:09 +0000 (+0000), Jeremy Stanley wrote: > On 2021-03-19 07:51:14 +0100 (+0100), Herve Beraud wrote: > [...] > > @fungi: Can we try to reenqueue the job? > [...] > > In doing so, we discovered that Ubuntu Focal doesn't like and isn't > covered by the https://deb.nodesource.com/node_8.x/ package > repository, so I've proposed a pair of changes to try using Node 10 > instead, which should (in theory) work on Focal: > > https://review.opendev.org/781825 > https://review.opendev.org/781826 That's gotten farther, hopefully https://review.opendev.org/781966 will address the next bug we've exposed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From phalen at gmail.com Sun Mar 21 01:24:52 2021 From: phalen at gmail.com (Kendrick) Date: Sat, 20 Mar 2021 21:24:52 -0400 Subject: iPXE update/configuration for the undercloud. Message-ID: I have a melanox connectx3 and a few broadcom systems that do not properly pxe boot, using maas the broadcoms work fine. I was looking around and one of the solutions to similar problems from 2017 and 19 were to use a different version of the ipxe files. I was hoping to know what settings/changes/scripts openstack undercloud used so i can make sure the connectx3 support is built in to the ipxe file and what files needed to be copied where from the source buld to the containers since the containers have about 20 sets of the ipxe files from what i can see when i use find to locate them from the host. Regards Kendrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From apps at mossakowski.ch Sat Mar 20 09:27:21 2021 From: apps at mossakowski.ch (apps at mossakowski.ch) Date: Sat, 20 Mar 2021 09:27:21 +0000 Subject: [octavia] victoria - loadbalancer works but its operational status is offline Message-ID: Hello, I have stable/victoria baremetal openstack with octavia installed on centos8 using openvswitch mechanism driver: octavia api on controller, health-manager,housekeeping,worker on 3 compute/network nodes. Official docs include only ubuntu with linuxbridge mechanism but I used https://github.com/prastamaha/openstack-octavia as a reference to get it working on centos8 with ovs. I will push those docs instructions for centos8 soon: https://github.com/openstack/octavia/tree/master/doc/source/install. I created basic http scenario using https://docs.openstack.org/octavia/victoria/user/guides/basic-cookbook.html#deploy-a-basic-http-load-balancer. Loadbalancer works but its operational status is offline (openstack_loadbalancer_outputs.txt). On all octavia workers I see the same warning message in health_manager.log: Health Manager experienced an exception processing a heartbeat message from ('172.31.255.233', 1907). Ignoring this packet. 
Exception: 'NoneType' object has no attribute 'encode' I've searched for related active bug but all I found is this not related in my opinion: https://storyboard.openstack.org/#!/story/2008615 I'm attaching all info I've gathered: - octavia.conf and health_manager debug logs (octavia_config_and_health_manager_logs.txt) - tcpdump from amphora VM (tcpdump_from_amphora_vm.txt) - tcpdump from octavia worker (tcpdump_from_octavia_worker.txt) - debug amphora-agent.log from amphora VM (amphora-agent.log) Can you point me to the right direction what I have missed? Thanks! Piotr Mossakowski https://github.com/moss2k13 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: openstack_loadbalancer_outputs.txt URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: amphora-agent.log Type: text/x-log Size: 9818 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: tcpdump_from_octavia_worker.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: tcpdump_from_amphora_vm.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: octavia_config_and_health_manager_logs.txt URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: publickey - apps at mossakowski.ch - 0x9FDBE75C.asc Type: application/pgp-keys Size: 662 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From Stefan.Kelber at gmx.de Sat Mar 20 20:43:40 2021 From: Stefan.Kelber at gmx.de (Stefan Kelber) Date: Sat, 20 Mar 2021 21:43:40 +0100 Subject: Fw: Aw: Re: OpenStack Victoria - Kolla-Ansible - Cinder - iSCSI-backend - Update driver status failed: config is uninitialized. References: Message-ID: An HTML attachment was scrubbed... URL: From erdosi.peter at kifu.gov.hu Sun Mar 21 09:45:57 2021 From: erdosi.peter at kifu.gov.hu (=?UTF-8?B?RXJkxZFzaSBQw6l0ZXI=?=) Date: Sun, 21 Mar 2021 10:45:57 +0100 Subject: OpenStack Victoria - Kolla-Ansible - Cinder - iSCSI-backend - Update driver status failed: config is uninitialized. In-Reply-To: References: Message-ID: <58e83e70-839b-da21-7dd3-6f697e40e3d5@kifu.gov.hu> Hello! 2021. 03. 19. 22:16 keltezéssel, Stefan Kelber írta: > > Does any body have a clue where to look? > > I am running out of ideas, unfortunately > Just a quick question: (we use dell emc and fujitsu eternus iSCSI backend with Rocky now) Did you add the following network interfaces to the server, which runs cinder-volume:  - Management: (i mean, there shoud be an API, which need to be reached)  - iSCSI interface(s): If you want to upload image, the cinder-volume process shoud login to the iSCSI, copy the image (from glance 4example), and logout. Do you see iSCSI sessions (iscsiadm -m session) creating? Also: did you check the guide for the storage? (I've found a config option with the Eternus, which need to be configured in storage side, until that, it's not working) maybe you have somethin similar? Regards:  Peter -- *Erdősi Péter * /Informatikus, IKT Fejlesztési Főosztály / *Kormányzati Informatikai Fejlesztési Ügynökség * cím: 1134 Budapest, Váci út 35. 
tel: +36 1 450 3080 e-mail: erdosi.peter at kifu.gov.hu KIFÜ - www.kifu.gov.hu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ndjfpddambiphjgi.png Type: image/png Size: 3666 bytes Desc: not available URL: From Stefan.Kelber at gmx.de Sun Mar 21 15:35:33 2021 From: Stefan.Kelber at gmx.de (Stefan Kelber) Date: Sun, 21 Mar 2021 16:35:33 +0100 Subject: Fw: Aw: Re: OpenStack Victoria - Kolla-Ansible - Cinder - iSCSI-backend - Update driver status failed: config is uninitialized. References: Message-ID: An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Mar 21 21:00:55 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 21 Mar 2021 21:00:55 +0000 Subject: [openstack-community] Authenticating to devstack and listing nova flavors + servers In-Reply-To: <8865F06E-0BB6-4F72-9503-36466252BCFD@automatedsystems.io> References: <8865F06E-0BB6-4F72-9503-36466252BCFD@automatedsystems.io> Message-ID: <20210321210055.nu6zh2xoznsqzo73@yuggoth.org> On 2021-03-21 16:39:42 -0400 (-0400), Joe Wolfe wrote: > I am working on an api and would like to get some more context > behinds authenticating into the api - so I am trying to get a list > out of available flavors 1 and servers to start off with. I am > able to get a token that I can interact with against keystone > which shows up as https://mydevstackurl/identity/ however when I > try and run a simple curl against my compute as follows: > > curl -H "X-Auth-Token: $OS_TOKEN" -s https://http://mydevstackurl/compute/api/v2.1/servers > > I got no response back from curl - no 200 ,no 30x no 4xx nothing. > > Can anyone help to get me started ? The documents seem to be > pretty detailed - but the methods provided don’t seem to work > either in the tutorial: for example: > > curl -s -H "X-Auth-Token: $OS_TOKEN" \ > $OS_COMPUTE_API/flavors \ > | python -m json.tool > > Only returns: > > zsh: no matches found: http://mydevstackurl/identity/v3/auth/tokens?nocatalog > Expecting value: line 1 column 1 (char 0) As the description of the community mailing list states, "This list does not provide support for OpenStack software." I have Cc'd the openstack-discuss mailing list which is intended for "Discussion of OpenStack use and development." Please follow up there with your question. I'm keeping you in the To header in case you're not subscribed there, so that people who reply to this will hopefully include you directly. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jwolfe at automatedsystems.io Sun Mar 21 21:06:06 2021 From: jwolfe at automatedsystems.io (Joe Wolfe) Date: Sun, 21 Mar 2021 17:06:06 -0400 Subject: Authenticating to devstack and listing nova flavors + servers Message-ID: <4A50B48B-189A-4ECA-B0F5-81DE698E126C@automatedsystems.io> Hi All, I am working on an api and would like to get some more context behinds authenticating into the api - so I am trying to get a list out of available flavors 1 and servers to start off with. I am able to get a token that I can interact with against keystone which shows up as https://mydevstackurl/identity/ however when I try and run a simple curl against my compute as follows: curl -H "X-Auth-Token: $OS_TOKEN" -s https://http://mydevstackurl/compute/api/v2.1/servers I got no response back from curl - no 200 ,no 30x no 4xx nothing. 
Can anyone help to get me started ? The documents seem to be pretty detailed - but the methods provided don’t seem to work either in the tutorial: for example: curl -s -H "X-Auth-Token: $OS_TOKEN" \ $OS_COMPUTE_API/flavors \ | python -m json.tool Only returns: zsh: no matches found: http://mydevstackurl/identity/v3/auth/tokens?nocatalog Expecting value: line 1 column 1 (char 0) Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Mon Mar 22 01:46:28 2021 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 22 Mar 2021 12:46:28 +1100 Subject: Authenticating to devstack and listing nova flavors + servers In-Reply-To: <4A50B48B-189A-4ECA-B0F5-81DE698E126C@automatedsystems.io> References: <4A50B48B-189A-4ECA-B0F5-81DE698E126C@automatedsystems.io> Message-ID: On Sun, Mar 21, 2021 at 05:06:06PM -0400, Joe Wolfe wrote: > curl -s -H "X-Auth-Token: $OS_TOKEN" \ > $OS_COMPUTE_API/flavors \ > | python -m json.tool > > Only returns: > > zsh: no matches found: http://mydevstackurl/identity/v3/auth/tokens?nocatalog > Expecting value: line 1 column 1 (char 0) It looks like zsh coniders the "?" in the URL there as some sort of matching character, and can't find anything. Just quote the URL like "$OS_COMPUTE_API/flavors". In general devstack doesn't work with zsh, but I know some people use [1] for openrc import. -i [1] https://docs.openstack.org/devstack/victoria/faq.html#can-i-at-least-source-openrc-with-zsh From gthiemonge at redhat.com Mon Mar 22 08:10:53 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Mon, 22 Mar 2021 09:10:53 +0100 Subject: [octavia] victoria - loadbalancer works but its operational status is offline In-Reply-To: References: Message-ID: Hi, Most of the OFFLINE operational status issues are caused by communication problems between the amphorae and the Octavia health-manager. In your case, the "Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode'" log message shows that the health-manager receives the heartbeat packets from the amphorae but it is unable to decode them. Those packets are encrypted JSON messages and it seems that the key ([health_manager].heartbeat_key see https://docs.openstack.org/octavia/latest/configuration/configref.html#health-manager) used to encrypt those messages is not defined in your configuration file. So I would suggest configuring it and restarting the Octavia services, then you can re-create or failover the load balancers (you cannot change this parameter in a running load balancer). Gregory On Sun, Mar 21, 2021 at 6:17 PM wrote: > Hello, > I have stable/victoria baremetal openstack with octavia installed on > centos8 using openvswitch mechanism driver: octavia api on controller, > health-manager,housekeeping,worker on 3 compute/network nodes. > Official docs include only ubuntu with linuxbridge mechanism but I used > https://github.com/prastamaha/openstack-octavia as a reference to get it > working on centos8 with ovs. > I will push those docs instructions for centos8 soon: > https://github.com/openstack/octavia/tree/master/doc/source/install. > I created basic http scenario using > https://docs.openstack.org/octavia/victoria/user/guides/basic-cookbook.html#deploy-a-basic-http-load-balancer > . > Loadbalancer works but its operational status is offline > (openstack_loadbalancer_outputs.txt). 
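As a minimal illustration of the [health_manager] heartbeat_key setting Gregory suggests (the option name comes from the Octavia configuration reference he links; the value and exact file layout here are only assumptions, not taken from this deployment), the same passphrase would be set in octavia.conf on every controller running the Octavia services, health-manager and worker in particular:

    [health_manager]
    heartbeat_key = some-shared-secret-passphrase

Since running amphorae keep the key they were built with, the existing load balancer then needs to be re-created or failed over to pick it up, for example:

    openstack loadbalancer failover <loadbalancer-id>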
> On all octavia workers I see the same warning message in > health_manager.log: > Health Manager experienced an exception processing a heartbeat message > from ('172.31.255.233', 1907). Ignoring this packet. Exception: 'NoneType' > object has no attribute 'encode' > I've searched for related active bug but all I found is this not related > in my opinion: https://storyboard.openstack.org/#!/story/2008615 > I'm attaching all info I've gathered: > > - octavia.conf and health_manager debug logs > (octavia_config_and_health_manager_logs.txt) > - tcpdump from amphora VM (tcpdump_from_amphora_vm.txt) > - tcpdump from octavia worker (tcpdump_from_octavia_worker.txt) > - debug amphora-agent.log from amphora VM (amphora-agent.log) > > Can you point me to the right direction what I have missed? > Thanks! > Piotr Mossakowski > https://github.com/moss2k13 > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Mar 22 08:15:38 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 22 Mar 2021 09:15:38 +0100 Subject: [Release-job-failures] Release of openstack/monasca-grafana-datasource for ref refs/tags/1.3.0 failed In-Reply-To: <20210320144554.t7qvrsyjtqku5pe3@yuggoth.org> References: <20210319165909.7xiey7glo4mep5nq@yuggoth.org> <20210320144554.t7qvrsyjtqku5pe3@yuggoth.org> Message-ID: I confirm that the latest changes helped to fix the issue. The latest job has been successfully executed https://zuul.openstack.org/build/042af4adb20c472583c278cd365a12ab I think that we can consider that this topic is now closed. Le sam. 20 mars 2021 à 15:49, Jeremy Stanley a écrit : > On 2021-03-19 16:59:09 +0000 (+0000), Jeremy Stanley wrote: > > On 2021-03-19 07:51:14 +0100 (+0100), Herve Beraud wrote: > > [...] > > > @fungi: Can we try to reenqueue the job? > > [...] > > > > In doing so, we discovered that Ubuntu Focal doesn't like and isn't > > covered by the https://deb.nodesource.com/node_8.x/ package > > repository, so I've proposed a pair of changes to try using Node 10 > > instead, which should (in theory) work on Focal: > > > > https://review.opendev.org/781825 > > https://review.opendev.org/781826 > > That's gotten farther, hopefully https://review.opendev.org/781966 > will address the next bug we've exposed. > -- > Jeremy Stanley > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Mon Mar 22 09:19:34 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 22 Mar 2021 10:19:34 +0100 Subject: [release] Xena PTG In-Reply-To: References: Message-ID: Dear team fellow, The poll is now closed. Our PTG will take place on April 19 at 14:00 UTC on Folsom. https://ethercalc.net/oz7q0gds9zfi Do not forget to register to the event the PTG team will contact us with password and event details. https://www.eventbrite.com/e/project-teams-gathering-april-2021-tickets-143360351671 Thanks for your attention Le mar. 16 mars 2021 à 09:43, Herve Beraud a écrit : > Friendly reminder about our PTG session, don't forget to vote for time > slots before the 21st of March > > Le mer. 10 mars 2021 à 10:52, Herve Beraud a écrit : > >> Oï releasers, >> >> The PTG is fast approaching (Apr 19 - 23). >> >> To help to organize the gathering: >> 1) please fill the doodle[1] with the time slots that fit well for you; >> 2) please add your PTG topics in our etherpad[2]. >> >> Voting will be closed on the 21st of March. >> >> Thanks for your reading, >> >> [1] https://doodle.com/poll/8d8n2picqnhchhsv >> [2] https://etherpad.opendev.org/p/xena-ptg-os-relmgt >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP 
F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.wenz at dhbw-mannheim.de Mon Mar 22 09:21:32 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Mon, 22 Mar 2021 10:21:32 +0100 (CET) Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <1836060846.22834.1616404892956@ox.dhbw-mannheim.de> Hi Dmitriy, thanks for your answer! Yes, we do use swift and its use as glance backend is intentional. I got the following from the swift-proxy-server logs in the swift container on the infra host after taking a snapshot: Mar 22 08:43:43 infra1-swift-proxy-container-27169fa7 proxy-server[87]: Client disconnected without sending last chunk (txn: txa7c64547baf0450eb0034-006058588b) (client_ip: 192.168.110. 106) Mar 22 08:43:43 infra1-swift-proxy-container-27169fa7 proxy-server[87]: 192.168.110.106 192.168.110.211 22/Mar/2021/08/43/43 PUT /v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images/3ec63ec2-aa3b-4c3b-a904-b55d1a6ec878 -00001 HTTP/1.0 499 - python-swiftclient-3.10.1 gAAAAABgWFiKMz9R... 204800000 89 - txa7c64547baf0450eb0034-006058588b - 52.2991 - - 1616402571.623875856 1616402623.922997952 0 On the swift host some services logs contain errors. E.g. the swift-container-updater service: Mar 22 08:42:19 bc1bl12 systemd[1]: swift-container-updater.service: Main process exited, code=exited, status=1/FAILURE Mar 22 08:42:19 bc1bl12 systemd[1]: swift-container-updater.service: Failed with result 'exit-code'. Mar 22 08:42:21 bc1bl12 systemd[1]: swift-container-updater.service: Scheduled restart job, restart counter is at 162982. Mar 22 08:42:21 bc1bl12 systemd[1]: Stopped swift-container-updater service. Mar 22 08:42:21 bc1bl12 systemd[1]: Started swift-container-updater service. 
Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: Traceback (most recent call last): Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/utils.py", line 803, in config_fallocate_value Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: reserve_value = float(reserve_value[:-1]) Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: ValueError: could not convert string to float: '1%' Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: During handling of the above exception, another exception occurred: Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: Traceback (most recent call last): Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File "/openstack/venvs/swift-22.1.0/bin/swift-container-updater", line 23, in Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: run_daemon(ContainerUpdater, conf_file, **options) Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/daemon.py", line 304, in run_daemon Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: utils.config_fallocate_value(conf.get('fallocate_reserve', '1%')) Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/utils.py", line 809, in config_fallocate_value Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: raise ValueError('Error: %s is an invalid value for fallocate' Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: ValueError: Error: 1%% is an invalid value for fallocate_reserve. The same exit-code and traceback shows in the logs of swift-container-auditor, swift-account-auditor and swift-account-reaper services. Does this tell you anything useful? We didn't experience any problems when uploading files to containers, only when taking snapshots of instances. Kind regards, Oliver > ------------------------------ > > Message: 4 > Date: Thu, 18 Mar 2021 12:44:47 +0200 > From: Dmitriy Rabotyagov > To: "openstack-discuss at lists.openstack.org" > > Subject: Re: [glance][openstack-ansible] Snapshots disappear during > saving > Message-ID: <374941616064157 at mail.yandex.ru> > Content-Type: text/plain; charset=utf-8 > > Hi Olver, > > Am I right that you're also using OpenStack Swift and it's intentional to > store images there? > Since the issue is related to the upload process into the Swift. So also > checking Swift logs > be usefull as well. > From eblock at nde.ag Mon Mar 22 09:39:16 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 22 Mar 2021 09:39:16 +0000 Subject: [nova] When are "_del" directories deleted? Message-ID: <20210322093916.Horde.G48B_FFCSMjDwSPjjNRoPcv@webmail.nde.ag> Hi *, I stumbled across a different issue when I noticed there's a _del directory in /var/lib/nova/instances/. It's a shared directory to enable live-migration. I noticed these "_del" directories before but I didn't really investigate, everything seems to work as expected. But I would like to understand what exactly happens with such a directory, for example currently there's just one instance having also a _del directory. The timestamp is from 2 years ago when the instance was resized the first time. Looking in the nova code [1] I found the reference to the resize function, I just don't quite understand what would happen with that instance if it was shutdown. Would nova delete it to "clean up"? 
Can I remove that directory safely to prevent nova from doing anything destructive to that instance? I'd appreciate any insights about this. Just to be safe I created an RBD snapshot of the disk to prevent a deletion. Best regards, Eugen [1] https://opendev.org/openstack/nova/src/commit/146b27f22327f8d60c1017c22ccf18d0e16f1eb7/nova/virt/libvirt/driver.py#L7725 From mark at stackhpc.com Mon Mar 22 09:49:17 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 22 Mar 2021 09:49:17 +0000 Subject: [rabbitmq][kolla-ansible] - RabbitMQ disconnects - 60s timeout In-Reply-To: <21A9D0D4-31CF-4B08-B358-5A7F63C44E3A@poczta.onet.pl> References: <21A9D0D4-31CF-4B08-B358-5A7F63C44E3A@poczta.onet.pl> Message-ID: On Thu, 18 Mar 2021 at 16:05, Adam Tomas wrote: > > Hi, > I have a problem with rabbitmq heartbeats timeout (described also here: https://bugzilla.redhat.com/show_bug.cgi?id=1711794). Although I have threads=1 the problem still persists, generating a lot of messages in logs: > > 2021-03-18 15:17:17.482 [error] <0.122.51> closing AMQP connection <0.122.51> (x.x.x.100:60456 -> x.x.x.100:5672 - mod_wsgi:699:9a813fcb-c29f-4886-82bc-00bf478b6b64): > missed heartbeats from client, timeout: 60s > 2021-03-18 15:17:17.484 [info] <0.846.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed > 2021-03-18 15:18:15.918 [error] <0.150.51> closing AMQP connection <0.150.51> (x.x.x.111:41934 -> x.x.x.100:5672 - mod_wsgi:697:b608c7b8-9644-434e-93af-c00222c0a700): > missed heartbeats from client, timeout: 60s > 2021-03-18 15:18:15.920 [error] <0.153.51> closing AMQP connection <0.153.51> (x.x.x.111:41936 -> x.x.x.100:5672 - mod_wsgi:697:77348197-148b-41a6-928f-c5eddfab57c9): > missed heartbeats from client, timeout: 60s > 2021-03-18 15:18:15.920 [info] <0.1527.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed > 2021-03-18 15:18:15.922 [info] <0.1531.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed > 2021-03-18 15:20:16.080 [info] <0.2196.51> accepting AMQP connection <0.2196.51> (x.x.x.111:34826 -> x.x.x.100:5672) > 2021-03-18 15:20:16.080 [info] <0.2199.51> accepting AMQP connection <0.2199.51> (x.x.x.111:34828 -> x.x.x.100:5672) > > I’ve set heartbeat = 600 in rabbitmq.conf and still get disconnections after 60s timeout… How to set proper timeout to avoid disconnections? Hi Adam, I have seen similar messages in the past, but haven't really looked into it. It seems to happen during some intensive processes like encrypted cinder volume creation. Have you tried configuring oslo.messaging? For example, [oslo_messaging_rabbitmq] heartbeat_timeout_threshold. https://docs.openstack.org/oslo.messaging/latest/configuration/opts.html Mark > > Best regards > Adam > From noonedeadpunk at ya.ru Mon Mar 22 09:50:51 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Mon, 22 Mar 2021 11:50:51 +0200 Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: <1836060846.22834.1616404892956@ox.dhbw-mannheim.de> References: <1836060846.22834.1616404892956@ox.dhbw-mannheim.de> Message-ID: <971831616406310@mail.yandex.ru> An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Mar 22 10:04:15 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 22 Mar 2021 11:04:15 +0100 Subject: [oslo] Cancel meeting today Message-ID: Hello Osloers, I won't be around to run the meeting today. If someone else from the Oslo team wants to run it, please feel free. 
Thanks for your understanding -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Mon Mar 22 10:06:19 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Mon, 22 Mar 2021 12:06:19 +0200 Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: <971831616406310@mail.yandex.ru> References: <1836060846.22834.1616404892956@ox.dhbw-mannheim.de> <971831616406310@mail.yandex.ru> Message-ID: <984511616407178@mail.yandex.ru> Well, looking into the fix I suggested, I'm not sure if it's valid one. There's really be a mess in patches, and according to the log provided, config needs just `1%` instead of the `1%%` you currently have. And it feels like that's what default behaviour should do with [1]. But I'm pretty sure that this error was making swift fail and thus having weird issues while operating. So I'm not sure what specifily wrong with value of swift_fallocate_reserve - maybe we've missed some config to define it or it has been overriden somewhere, but it feels like current default should cover issue you see in swift... [1] https://opendev.org/openstack/openstack-ansible-os_swift/src/branch/stable/victoria/defaults/main.yml#L162 22.03.2021, 11:56, "Dmitriy Rabotyagov" : > Yes, 1%% is smth we're fighting for years, as this setting changes on the swift side from time to time, and I really lost account which one is valid at the moment. > > Here's related SWIFT bug: > > https://bugs.launchpad.net/swift/+bug/1844368 > > I've just pushed https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/782117 to cover this issue. Can you try applying this change manually to see if this works? > > 22.03.2021, 11:25, "Oliver Wenz" : >> Hi Dmitriy, >> thanks for your answer! Yes, we do use swift and its use as glance backend is >> intentional. >> >> I got the following from the swift-proxy-server logs in the swift container on >> the infra host after taking a snapshot: >> >> Mar 22 08:43:43 infra1-swift-proxy-container-27169fa7 proxy-server[87]: Client >> disconnected without sending last chunk (txn: >> txa7c64547baf0450eb0034-006058588b) (client_ip: 192.168.110. >> 106) >> Mar 22 08:43:43 infra1-swift-proxy-container-27169fa7 proxy-server[87]: >> 192.168.110.106 192.168.110.211 22/Mar/2021/08/43/43 PUT >> /v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images/3ec63ec2-aa3b-4c3b-a904-b55d1a6ec878 >> -00001 HTTP/1.0 499 - python-swiftclient-3.10.1 gAAAAABgWFiKMz9R... 
204800000 89 >> - txa7c64547baf0450eb0034-006058588b - 52.2991 - - 1616402571.623875856 >> 1616402623.922997952 0 >> >> On the swift host some services logs contain errors. E.g. the >> swift-container-updater service: >> >> Mar 22 08:42:19 bc1bl12 systemd[1]: swift-container-updater.service: Main >> process exited, code=exited, status=1/FAILURE >> Mar 22 08:42:19 bc1bl12 systemd[1]: swift-container-updater.service: Failed with >> result 'exit-code'. >> Mar 22 08:42:21 bc1bl12 systemd[1]: swift-container-updater.service: Scheduled >> restart job, restart counter is at 162982. >> Mar 22 08:42:21 bc1bl12 systemd[1]: Stopped swift-container-updater service. >> Mar 22 08:42:21 bc1bl12 systemd[1]: Started swift-container-updater service. >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: Traceback (most recent >> call last): >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File >> "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/utils.py", >> line 803, in config_fallocate_value >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: reserve_value = >> float(reserve_value[:-1]) >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: ValueError: could not >> convert string to float: '1%' >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: During handling of the >> above exception, another exception occurred: >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: Traceback (most recent >> call last): >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File >> "/openstack/venvs/swift-22.1.0/bin/swift-container-updater", line 23, in >> >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: >>     run_daemon(ContainerUpdater, conf_file, **options) >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File >> "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/daemon.py", >> line 304, in run_daemon >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: >>     utils.config_fallocate_value(conf.get('fallocate_reserve', '1%')) >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File >> "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/utils.py", >> line 809, in config_fallocate_value >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: raise >> ValueError('Error: %s is an invalid value for fallocate' >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: ValueError: Error: 1%% >> is an invalid value for fallocate_reserve. >> >> The same exit-code and traceback shows in the logs of swift-container-auditor, >> swift-account-auditor and swift-account-reaper services. Does this tell you >> anything useful? >> >> We didn't experience any problems when uploading files to containers, only when >> taking snapshots of instances. >> >> Kind regards, >> Oliver >> >>>  ------------------------------ >>> >>>  Message: 4 >>>  Date: Thu, 18 Mar 2021 12:44:47 +0200 >>>  From: Dmitriy Rabotyagov >>>  To: "openstack-discuss at lists.openstack.org" >>>           >>>  Subject: Re: [glance][openstack-ansible] Snapshots disappear during >>>          saving >>>  Message-ID: <374941616064157 at mail.yandex.ru> >>>  Content-Type: text/plain; charset=utf-8 >>> >>>  Hi Olver, >>> >>>  Am I right that you're also using OpenStack Swift and it's intentional to >>>  store images there? >>>  Since the issue is related to the upload process into the Swift. So also >>>  checking Swift logs >>>  be usefull as well. 
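A small sanity check that may help here (paths are assumptions, not taken from this deployment): the tracebacks in this thread show swift taking the configured string, stripping the trailing character and calling float() on the rest, so the value that actually reaches the services has to end in a single percent sign (or be a plain number of bytes), e.g.

    fallocate_reserve = 1%

The doubled and tripled % in the error messages therefore looks like one escaping layer too many between the Ansible variable and the rendered file, so checking what ended up on disk on the storage host is a quick way to confirm, e.g.

    grep -r fallocate_reserve /etc/swift/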
> > --
> Kind Regards,
> Dmitriy Rabotyagov

-- 
Kind Regards,
Dmitriy Rabotyagov

From bcafarel at redhat.com  Mon Mar 22 11:35:37 2021
From: bcafarel at redhat.com (Bernard Cafarelli)
Date: Mon, 22 Mar 2021 12:35:37 +0100
Subject: [neutron][stable] neutron-lib stable branches core reviewers
In-Reply-To: <2771831.n0TQhf9oRt@p1>
References: <2771831.n0TQhf9oRt@p1>
Message-ID: 

On Thu, 18 Mar 2021 at 14:48, Slawek Kaplonski wrote:

> Hi,
>
> I just noticed that neutron-lib project has got own group
> "neutron-lib-stable-maint" which has +2 powers in neutron-lib stable
> branches [1]. As I see now in gerrit that group don't have any members.
> Would it be maybe possible to remove that group and add
> "neutron-stable-maint" to the neutron-lib stable branches instead? If yes,
> should I simply propose patch to change [1] or is there any other way which
> I should do it?

I guess we never spotted that "neutron-lib-stable-maint" before, as it
seems neutron cores had +2 powers before:
https://review.opendev.org/c/openstack/neutron-lib/+/717093

But yes, it could be useful to have either of these groups (neutron cores or
stable cores) in; backports to neutron-lib are rare but some happen - I
remember one some time ago where it was valid to have a patch backported.

> [1]
> https://github.com/openstack/project-config/blob/master/gerrit/acls/openstack/neutron-lib.config
>
> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat

-- 
Bernard Cafarelli
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ikatzir at infinidat.com  Mon Mar 22 11:36:18 2021
From: ikatzir at infinidat.com (Igal Katzir)
Date: Mon, 22 Mar 2021 13:36:18 +0200
Subject: [E] [ironic] How to move node from active state to manageable
In-Reply-To: 
References: <54186D58-DF4C-4E1C-BCEA-D19EF3963215@infinidat.com>
Message-ID: <1ADEE612-C577-48AC-B861-A0C49DF2265F@infinidat.com>

Hello Jay,
I have another question regarding managing nodes. I had a situation where the undercloud node had a problem with its disk and got disconnected from the overcloud. I couldn't restore the undercloud controller and ended up re-installing the undercloud (running 'openstack undercloud install'). The installation ended successfully, but now I'm in a situation where cleaning of the nodes fails:

(undercloud) [stack at interop010 ~]$ openstack baremetal node list
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None          | power on    | clean failed       | True        |
| 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None          | power on    | clean failed       | True        |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+

I've tried to move the nodes to the available state but cannot:

(undercloud) [stack at interop010 ~]$ openstack baremetal node provide 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6
The requested action "provide" can not be performed on node "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". (HTTP 400)

How do I make the nodes available again? The overcloud deployment fails with:
ERROR due to "Message: No valid host was found.
, Code: 500” Thanks, Igal > On 4 Mar 2021, at 20:12, Jay Faulkner wrote: > > When a node is active with an instance UUID set, that generally indicates a nova instance (with that UUID) is provisioned onto the node. Nodes that are provisioned (active) are not able to be moved to manageable state. > > If you want to reprovision these nodes, you'll want to delete the associated instances from Nova (openstack server delete instanceUUID), and after they complete a cleaning cycle they'll return to available. > > Good luck, > Jay Faulkner > > > On Thu, Mar 4, 2021 at 10:01 AM Igal Katzir > wrote: > Hello Forum, > > I have an overcloud that gone bad and I am trying to re-deploy it, Running rhos16.1 with one director and two overcloud nodes (compute and controller) > I have re-installed undercloud and having both nodes in an active provisioning state. > Do I need to run introspection again? > Here is the outputted for baremetal node list: > (undercloud) [stack at interop010 ~]$ openstack baremetal node list > +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | > +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ > | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | c7bf16b7-eb3c-4022-88de-7c5a78cda174 | power on | active | False | > | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | 99223f65-6985-4815-92ff-e19a28c2aab1 | power on | active | False | > +--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+ > When I want to move each node from active > manage I get an error: > (undercloud) [stack at interop010 ~]$ openstack baremetal node manage 4b02703a-f765-4ebb-85ed-75e88b4cbea5 > The requested action "manage" can not be performed on node "4b02703a-f765-4ebb-85ed-75e88b4cbea5" while it is in state "active". (HTTP 400) > > How do I get to a state which is ready for deployment (available) ? > > Thanks, > Igal -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Mar 22 11:56:11 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 22 Mar 2021 11:56:11 +0000 Subject: [nova] When are "_del" directories deleted? In-Reply-To: <20210322093916.Horde.G48B_FFCSMjDwSPjjNRoPcv@webmail.nde.ag> References: <20210322093916.Horde.G48B_FFCSMjDwSPjjNRoPcv@webmail.nde.ag> Message-ID: On Mon, 2021-03-22 at 09:39 +0000, Eugen Block wrote: > Hi *, > > I stumbled across a different issue when I noticed there's a > _del directory in /var/lib/nova/instances/. It's a shared > directory to enable live-migration. I noticed these "_del" directories > before but I didn't really investigate, everything seems to work as > expected. But I would like to understand what exactly happens with > such a directory, for example currently there's just one instance > having also a _del directory. The timestamp is from 2 years ago when > the instance was resized the first time. Looking in the nova code [1] > I found the reference to the resize function, I just don't quite > understand what would happen with that instance if it was shutdown. > > Would nova delete it to "clean up"? > Can I remove that directory safely to prevent nova from doing anything > destructive to that instance? > > I'd appreciate any insights about this. 
Just to be safe I created an > RBD snapshot of the disk to prevent a deletion. have you enabled https://docs.openstack.org/nova/latest/configuration/config.html#workarounds.ensure_libvirt_rbd_instance_dir_cleanup > > Best regards, > Eugen > > [1] > https://opendev.org/openstack/nova/src/commit/146b27f22327f8d60c1017c22ccf18d0e16f1eb7/nova/virt/libvirt/driver.py#L7725 > > > From eblock at nde.ag Mon Mar 22 12:18:11 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 22 Mar 2021 12:18:11 +0000 Subject: [nova] When are "_del" directories deleted? In-Reply-To: References: <20210322093916.Horde.G48B_FFCSMjDwSPjjNRoPcv@webmail.nde.ag> Message-ID: <20210322121811.Horde.YOoFkRJNwRUV_swJ5pD5ovs@webmail.nde.ag> Thank you, Sean. > have you enabled > https://docs.openstack.org/nova/latest/configuration/config.html#workarounds.ensure_libvirt_rbd_instance_dir_cleanup No, this is not enabled in our environment, it's still the default "false". Zitat von Sean Mooney : > On Mon, 2021-03-22 at 09:39 +0000, Eugen Block wrote: >> Hi *, >> >> I stumbled across a different issue when I noticed there's a >> _del directory in /var/lib/nova/instances/. It's a shared >> directory to enable live-migration. I noticed these "_del" directories >> before but I didn't really investigate, everything seems to work as >> expected. But I would like to understand what exactly happens with >> such a directory, for example currently there's just one instance >> having also a _del directory. The timestamp is from 2 years ago when >> the instance was resized the first time. Looking in the nova code [1] >> I found the reference to the resize function, I just don't quite >> understand what would happen with that instance if it was shutdown. >> >> Would nova delete it to "clean up"? >> Can I remove that directory safely to prevent nova from doing anything >> destructive to that instance? >> >> I'd appreciate any insights about this. Just to be safe I created an >> RBD snapshot of the disk to prevent a deletion. > have you enabled > https://docs.openstack.org/nova/latest/configuration/config.html#workarounds.ensure_libvirt_rbd_instance_dir_cleanup >> >> Best regards, >> Eugen >> >> [1] >> https://opendev.org/openstack/nova/src/commit/146b27f22327f8d60c1017c22ccf18d0e16f1eb7/nova/virt/libvirt/driver.py#L7725 >> >> >> From bkslash at poczta.onet.pl Mon Mar 22 13:21:53 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Mon, 22 Mar 2021 14:21:53 +0100 Subject: [rabbitmq][kolla-ansible] - RabbitMQ disconnects - 60s timeout In-Reply-To: References: <21A9D0D4-31CF-4B08-B358-5A7F63C44E3A@poczta.onet.pl> Message-ID: <95CE1C1C-1818-4199-A926-C357A14F1D86@poczta.onet.pl> Hi Mark, this happens without any noticeable load on Openstack. Where should I put [oslo_messaging_rabbitmq] heartbeat_timeout_threshold in kolla-ansible? I can’t find any oslo.config file… Should it be in .conf file of every service? Best regards Adam > Wiadomość napisana przez Mark Goddard w dniu 22.03.2021, o godz. 10:49: > > On Thu, 18 Mar 2021 at 16:05, Adam Tomas > wrote: >> >> Hi, >> I have a problem with rabbitmq heartbeats timeout (described also here: https://bugzilla.redhat.com/show_bug.cgi?id=1711794). 
Although I have threads=1 the problem still persists, generating a lot of messages in logs: >> >> 2021-03-18 15:17:17.482 [error] <0.122.51> closing AMQP connection <0.122.51> (x.x.x.100:60456 -> x.x.x.100:5672 - mod_wsgi:699:9a813fcb-c29f-4886-82bc-00bf478b6b64): >> missed heartbeats from client, timeout: 60s >> 2021-03-18 15:17:17.484 [info] <0.846.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed >> 2021-03-18 15:18:15.918 [error] <0.150.51> closing AMQP connection <0.150.51> (x.x.x.111:41934 -> x.x.x.100:5672 - mod_wsgi:697:b608c7b8-9644-434e-93af-c00222c0a700): >> missed heartbeats from client, timeout: 60s >> 2021-03-18 15:18:15.920 [error] <0.153.51> closing AMQP connection <0.153.51> (x.x.x.111:41936 -> x.x.x.100:5672 - mod_wsgi:697:77348197-148b-41a6-928f-c5eddfab57c9): >> missed heartbeats from client, timeout: 60s >> 2021-03-18 15:18:15.920 [info] <0.1527.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed >> 2021-03-18 15:18:15.922 [info] <0.1531.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed >> 2021-03-18 15:20:16.080 [info] <0.2196.51> accepting AMQP connection <0.2196.51> (x.x.x.111:34826 -> x.x.x.100:5672) >> 2021-03-18 15:20:16.080 [info] <0.2199.51> accepting AMQP connection <0.2199.51> (x.x.x.111:34828 -> x.x.x.100:5672) >> >> I’ve set heartbeat = 600 in rabbitmq.conf and still get disconnections after 60s timeout… How to set proper timeout to avoid disconnections? > > Hi Adam, > > I have seen similar messages in the past, but haven't really looked > into it. It seems to happen during some intensive processes like > encrypted cinder volume creation. > > Have you tried configuring oslo.messaging? For example, > [oslo_messaging_rabbitmq] heartbeat_timeout_threshold. > > https://docs.openstack.org/oslo.messaging/latest/configuration/opts.html > > Mark > >> >> Best regards >> Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From haleyb.dev at gmail.com Mon Mar 22 13:33:00 2021 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 22 Mar 2021 09:33:00 -0400 Subject: [neutron] Bug deputy report for week of March 15th Message-ID: <482d6100-d21c-7d2b-6a99-91c53a547377@gmail.com> Hi, I was Neutron bug deputy last week. Below is a short summary about the reported bugs. 
-Brian Critical bugs ------------- * UT are failing when runs with neutron-lib master - https://bugs.launchpad.net/neutron/+bug/1919280 - https://review.opendev.org/c/openstack/neutron/+/780802 * Functional tests failing due to timeout while spawning metadata proxy - https://bugs.launchpad.net/neutron/+bug/1919344 * Functional tests failing due to timeout while killing process - https://bugs.launchpad.net/neutron/+bug/1919345 * [OVN] "OVS_SYSCONFDIR" should be created before writing the OVS system-id file - https://bugs.launchpad.net/neutron/+bug/1920634 - Devstack patch proposed by Slawek High bugs --------- * [OVN] Do not enable send_periodic on router ports which are connected to provider networks - https://bugs.launchpad.net/neutron/+bug/1919347 - https://review.opendev.org/c/openstack/neutron/+/780916 * Project administrators are allowed to view networks across projects - https://bugs.launchpad.net/neutron/+bug/1919386 - https://review.opendev.org/c/openstack/neutron-lib/+/781075 * Creation of floating ip with new rbac policies fails - https://bugs.launchpad.net/neutron/+bug/1920001 Low bugs -------- * [L3][L2 pop] Is it necessary to enforce enabling the option arp_responder and l2_population? - https://bugs.launchpad.net/neutron/+bug/1919107 - Can be reverted now that other DVR changes are being reverted * [OVN][FT] FT tests log a SQL exception - https://bugs.launchpad.net/neutron/+bug/1919352 - https://review.opendev.org/c/openstack/neutron/+/780926 Wishlist bugs ------------- * Automatic rescheduling of BGP speakers on DrAgents - https://bugs.launchpad.net/neutron/+bug/1920065 - RFE From mark at stackhpc.com Mon Mar 22 14:45:51 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 22 Mar 2021 14:45:51 +0000 Subject: [rabbitmq][kolla-ansible] - RabbitMQ disconnects - 60s timeout In-Reply-To: <95CE1C1C-1818-4199-A926-C357A14F1D86@poczta.onet.pl> References: <21A9D0D4-31CF-4B08-B358-5A7F63C44E3A@poczta.onet.pl> <95CE1C1C-1818-4199-A926-C357A14F1D86@poczta.onet.pl> Message-ID: On Mon, 22 Mar 2021 at 13:22, Adam Tomas wrote: > > Hi Mark, > this happens without any noticeable load on Openstack. Where should I put [oslo_messaging_rabbitmq] heartbeat_timeout_threshold in kolla-ansible? I can’t find any oslo.config file… Should it be in .conf file of every service? > Best regards > Adam Yes, any service in which you are seeing heartbeat timeouts. > > > Wiadomość napisana przez Mark Goddard w dniu 22.03.2021, o godz. 10:49: > > On Thu, 18 Mar 2021 at 16:05, Adam Tomas wrote: > > > Hi, > I have a problem with rabbitmq heartbeats timeout (described also here: https://bugzilla.redhat.com/show_bug.cgi?id=1711794). 
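To make Mark's [oslo_messaging_rabbitmq] suggestion concrete for a kolla-ansible deployment - a sketch that assumes the default custom-config location (/etc/kolla/config, i.e. node_custom_config) and an example value, not something verified against this particular cloud - the override can go either into global.conf, which is merged into every service, or into a per-service file such as cinder.conf or nova.conf:

    [oslo_messaging_rabbitmq]
    heartbeat_timeout_threshold = 600

followed by `kolla-ansible -i <inventory> reconfigure` so the containers pick up the regenerated configuration.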
Although I have threads=1 the problem still persists, generating a lot of messages in logs: > > 2021-03-18 15:17:17.482 [error] <0.122.51> closing AMQP connection <0.122.51> (x.x.x.100:60456 -> x.x.x.100:5672 - mod_wsgi:699:9a813fcb-c29f-4886-82bc-00bf478b6b64): > missed heartbeats from client, timeout: 60s > 2021-03-18 15:17:17.484 [info] <0.846.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed > 2021-03-18 15:18:15.918 [error] <0.150.51> closing AMQP connection <0.150.51> (x.x.x.111:41934 -> x.x.x.100:5672 - mod_wsgi:697:b608c7b8-9644-434e-93af-c00222c0a700): > missed heartbeats from client, timeout: 60s > 2021-03-18 15:18:15.920 [error] <0.153.51> closing AMQP connection <0.153.51> (x.x.x.111:41936 -> x.x.x.100:5672 - mod_wsgi:697:77348197-148b-41a6-928f-c5eddfab57c9): > missed heartbeats from client, timeout: 60s > 2021-03-18 15:18:15.920 [info] <0.1527.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed > 2021-03-18 15:18:15.922 [info] <0.1531.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed > 2021-03-18 15:20:16.080 [info] <0.2196.51> accepting AMQP connection <0.2196.51> (x.x.x.111:34826 -> x.x.x.100:5672) > 2021-03-18 15:20:16.080 [info] <0.2199.51> accepting AMQP connection <0.2199.51> (x.x.x.111:34828 -> x.x.x.100:5672) > > I’ve set heartbeat = 600 in rabbitmq.conf and still get disconnections after 60s timeout… How to set proper timeout to avoid disconnections? > > > Hi Adam, > > I have seen similar messages in the past, but haven't really looked > into it. It seems to happen during some intensive processes like > encrypted cinder volume creation. > > Have you tried configuring oslo.messaging? For example, > [oslo_messaging_rabbitmq] heartbeat_timeout_threshold. > > https://docs.openstack.org/oslo.messaging/latest/configuration/opts.html > > Mark > > > Best regards > Adam > > From oliver.wenz at dhbw-mannheim.de Mon Mar 22 15:09:01 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Mon, 22 Mar 2021 16:09:01 +0100 (CET) Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <1964173398.27911.1616425741574@ox.dhbw-mannheim.de> Hi Dmitriy, I tried the fix anyways, i.e. I replaced the value of swift_fallocate_reserve as shown here: https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/782117/1/defaults/main.yml Now I'm getting the same error with '%%%' instead of '%%': Mar 22 15:04:49 bc1bl12 systemd[1]: swift-account-reaper.service: Main process exited, code=exited, status=1/FAILURE Mar 22 15:04:49 bc1bl12 systemd[1]: swift-account-reaper.service: Failed with result 'exit-code'. Mar 22 15:04:51 bc1bl12 systemd[1]: swift-account-reaper.service: Scheduled restart job, restart counter is at 171887. Mar 22 15:04:51 bc1bl12 systemd[1]: Stopped swift-account-reaper service. Mar 22 15:04:51 bc1bl12 systemd[1]: Started swift-account-reaper service. 
Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: Traceback (most recent call last): Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: File "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/utils.py", line 803, in config_fallocate_val> Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: reserve_value = float(reserve_value[:-1]) Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: ValueError: could not convert string to float: '1%%' Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: During handling of the above exception, another exception occurred: Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: Traceback (most recent call last): Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: File "/openstack/venvs/swift-22.1.0/bin/swift-account-reaper", line 23, in Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: run_daemon(AccountReaper, conf_file, **options) Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: File "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/daemon.py", line 304, in run_daemon Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: utils.config_fallocate_value(conf.get('fallocate_reserve', '1%')) Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: File "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/utils.py", line 809, in config_fallocate_val> Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: raise ValueError('Error: %s is an invalid value for fallocate' Mar 22 15:04:52 bc1bl12 swift-account-reaper[322101]: ValueError: Error: 1%%% is an invalid value for fallocate_reserve. Kind regards, Oliver > Message: 3 > Date: Mon, 22 Mar 2021 12:06:19 +0200 > From: Dmitriy Rabotyagov > To: "openstack-discuss at lists.openstack.org" > > Subject: Re: [glance][openstack-ansible] Snapshots disappear during > saving > Message-ID: <984511616407178 at mail.yandex.ru> > Content-Type: text/plain; charset=utf-8 > > Well, looking into the fix I suggested, I'm not sure if it's valid one. > There's really be a mess in patches, and according to the log provided, config > needs just `1%` instead of the `1%%` you currently have. > > And it feels like that's what default behaviour should do with [1]. But I'm > pretty sure that this error was making swift fail and thus having weird issues > while operating. > > So I'm not sure what specifily wrong with value of swift_fallocate_reserve - > maybe we've missed some config to define it or it has been overriden > somewhere, but it feels like current default should cover issue you see in > swift... > > [1] > https://opendev.org/openstack/openstack-ansible-os_swift/src/branch/stable/victoria/defaults/main.yml#L162 > > 22.03.2021, 11:56, "Dmitriy Rabotyagov" : > > Yes, 1%% is smth we're fighting for years, as this setting changes on the > > swift side from time to time, and I really lost account which one is valid > > at the moment. > > > > Here's related SWIFT bug: > > > > https://bugs.launchpad.net/swift/+bug/1844368 > > > > I've just pushed > > https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/782117 to > > cover this issue. Can you try applying this change manually to see if this > > works? > > > > 22.03.2021, 11:25, "Oliver Wenz" : > >> Hi Dmitriy, > >> thanks for your answer! Yes, we do use swift and its use as glance backend > >> is > >> intentional. 
> >> > >> I got the following from the swift-proxy-server logs in the swift container > >> on > >> the infra host after taking a snapshot: > >> > >> Mar 22 08:43:43 infra1-swift-proxy-container-27169fa7 proxy-server[87]: > >> Client > >> disconnected without sending last chunk (txn: > >> txa7c64547baf0450eb0034-006058588b) (client_ip: 192.168.110. > >> 106) > >> Mar 22 08:43:43 infra1-swift-proxy-container-27169fa7 proxy-server[87]: > >> 192.168.110.106 192.168.110.211 22/Mar/2021/08/43/43 PUT > >> /v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images/3ec63ec2-aa3b-4c3b-a904-b55d1a6ec878 > >> -00001 HTTP/1.0 499 - python-swiftclient-3.10.1 gAAAAABgWFiKMz9R... > >> 204800000 89 > >> - txa7c64547baf0450eb0034-006058588b - 52.2991 - - 1616402571.623875856 > >> 1616402623.922997952 0 > >> > >> On the swift host some services logs contain errors. E.g. the > >> swift-container-updater service: > >> > >> Mar 22 08:42:19 bc1bl12 systemd[1]: swift-container-updater.service: Main > >> process exited, code=exited, status=1/FAILURE > >> Mar 22 08:42:19 bc1bl12 systemd[1]: swift-container-updater.service: Failed > >> with > >> result 'exit-code'. > >> Mar 22 08:42:21 bc1bl12 systemd[1]: swift-container-updater.service: > >> Scheduled > >> restart job, restart counter is at 162982. > >> Mar 22 08:42:21 bc1bl12 systemd[1]: Stopped swift-container-updater > >> service. > >> Mar 22 08:42:21 bc1bl12 systemd[1]: Started swift-container-updater > >> service. > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: Traceback (most > >> recent > >> call last): > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File > >> "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/utils.py", > >> line 803, in config_fallocate_value > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: reserve_value = > >> float(reserve_value[:-1]) > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: ValueError: could > >> not > >> convert string to float: '1%' > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: During handling of > >> the > >> above exception, another exception occurred: > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: Traceback (most > >> recent > >> call last): > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File > >> "/openstack/venvs/swift-22.1.0/bin/swift-container-updater", line 23, in > >> > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: > >>     run_daemon(ContainerUpdater, conf_file, **options) > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File > >> "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/daemon.py", > >> line 304, in run_daemon > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: > >>     utils.config_fallocate_value(conf.get('fallocate_reserve', '1%')) > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: File > >> "/openstack/venvs/swift-22.1.0/lib/python3.8/site-packages/swift/common/utils.py", > >> line 809, in config_fallocate_value > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: raise > >> ValueError('Error: %s is an invalid value for fallocate' > >> Mar 22 08:42:22 bc1bl12 swift-container-updater[50699]: ValueError: Error: > >> 1%% > >> is an invalid value for fallocate_reserve. > >> > >> The same exit-code and traceback shows in the logs of > >> swift-container-auditor, > >> swift-account-auditor and swift-account-reaper services. Does this tell you > >> anything useful? 
> >> We didn't experience any problems when uploading files to containers, only
> >> when taking snapshots of instances.
> >>
> >> Kind regards,
> >> Oliver
> >>
> >>>  ------------------------------
> >>>
> >>>  Message: 4
> >>>  Date: Thu, 18 Mar 2021 12:44:47 +0200
> >>>  From: Dmitriy Rabotyagov
> >>>  To: "openstack-discuss at lists.openstack.org"
> >>>  Subject: Re: [glance][openstack-ansible] Snapshots disappear during
> >>>          saving
> >>>  Message-ID: <374941616064157 at mail.yandex.ru>
> >>>  Content-Type: text/plain; charset=utf-8
> >>>
> >>>  Hi Olver,
> >>>
> >>>  Am I right that you're also using OpenStack Swift and it's intentional to
> >>>  store images there?
> >>>  Since the issue is related to the upload process into the Swift. So also
> >>>  checking Swift logs
> >>>  be usefull as well.
> >
> > --
> > Kind Regards,
> > Dmitriy Rabotyagov
>
> -- 
> Kind Regards,
> Dmitriy Rabotyagov

From openstack at nemebean.com  Mon Mar 22 15:17:25 2021
From: openstack at nemebean.com (Ben Nemec)
Date: Mon, 22 Mar 2021 10:17:25 -0500
Subject: [oslo] Wallaby branches are now cut for oslo world
In-Reply-To: 
References: 
Message-ID: 

While this means the master branches are technically open for feature
work again, please don't merge that (hypothetical) giant refactoring
patch that touches 80% of the code until after the overall release
ships. If we need to do a last-minute bug backport we don't want to
have a bunch of unrelated changes that make backports messy.
On 3/19/21 9:09 AM, Herve Beraud wrote: > Hello Osloers, > > As $subject (FYI) > > Thanks for your attention > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From moataz.chouchen.1 at ens.etsmtl.ca Mon Mar 22 15:00:03 2021 From: moataz.chouchen.1 at ens.etsmtl.ca (Chouchen, Moataz) Date: Mon, 22 Mar 2021 15:00:03 +0000 Subject: Extracting commits data for the first and the last code review revisions Message-ID: Dear Openstack community, Hope you are doing well. I am Moataz Chouchen a Ph.D. student from ETS Montreal. I am interested in studying the impact of code review on the quality of puppet files. For this reason, I am interested in studying the code reviews containing puppet files (We investigate puppet projects.) and comparing puppet files in the first revision and the last revision for each code review. To achieve this, I extracted the revision IDs for the first and the last revisions (supposed to be first and last commit hashes) for each code review in puppet projects. However, when we have more than one revision, we were unable to find the commits for the first revision in the git for all the projects and we only have commits data for the last revision. I would like to know if the commit of the first revision does exist in the CVS, and it's possible to extract? In case these commits do not exist, how could, we retrieve the files in the first version of a review? Thank you very much, Best regards, Moataz -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Mar 22 15:41:34 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 22 Mar 2021 15:41:34 +0000 Subject: [infra][tact-sig] Extracting commits data for the first and the last code review revisions In-Reply-To: References: Message-ID: <20210322153845.vppbknhy4vwejjfh@yuggoth.org> [I'm keeping you in Cc since you don't appear to be subscribed to this mailing list.] On 2021-03-22 15:00:03 +0000 (+0000), Chouchen, Moataz wrote: [...] > I extracted the revision IDs for the first and the last revisions > (supposed to be first and last commit hashes) for each code review > in puppet projects. However, when we have more than one revision, > we were unable to find the commits for the first revision in the > git for all the projects and we only have commits data for the > last revision. I would like to know if the commit of the first > revision does exist in the CVS, and it's possible to extract? [...] Conveniently, Gerrit stores all of this in the repository and we replicate it to our Git server farm. 
The complication is that, because of the amend/rebase-oriented nature of Gerrit's code review workflow, those earlier revisions don't appear in the branch history. Instead, they are tracked with named references which look like changes/XX/YYYY/Z where YYYY is the numeric index identifier for the change, XX is the last two digits of YYYY (for sharding purposes), and Z is the revision number starting from 1. Let's take this puppet-nova change as an example: https://opendev.org/openstack/puppet-nova/commit/d385ca8 There you can see Gitea is displaying an indicator that there is a named ref of changes/63/764763/5 associated with the commit. That means it was revision 5 of change 764763 in Gerrit. You can fetch that ref like so: git fetch origin changes/63/764763/5 That will fill FETCH_HEAD which you can then use with checkout, show, or any other relevant subcommands. Conveniently, if you instead want the first revision of 764763, all you have to do differently is adjust the revision number: git fetch origin changes/63/764763/1 Hopefully that's what you're looking for? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From moataz.chouchen.1 at ens.etsmtl.ca Mon Mar 22 15:57:13 2021 From: moataz.chouchen.1 at ens.etsmtl.ca (Chouchen, Moataz) Date: Mon, 22 Mar 2021 15:57:13 +0000 Subject: [infra][tact-sig] Extracting commits data for the first and the last code review revisions Message-ID: Hi Jermey, Thank you for your quick answer! That's what I am looking for and I am very grateful to you and to all the community. Best regards, Moataz. ________________________________ From: Jeremy Stanley Sent: Monday, March 22, 2021 8:41 AM To: openstack-discuss at lists.openstack.org Cc: Chouchen, Moataz Subject: Re: [infra][tact-sig] Extracting commits data for the first and the last code review revisions [I'm keeping you in Cc since you don't appear to be subscribed to this mailing list.] On 2021-03-22 15:00:03 +0000 (+0000), Chouchen, Moataz wrote: [...] > I extracted the revision IDs for the first and the last revisions > (supposed to be first and last commit hashes) for each code review > in puppet projects. However, when we have more than one revision, > we were unable to find the commits for the first revision in the > git for all the projects and we only have commits data for the > last revision. I would like to know if the commit of the first > revision does exist in the CVS, and it's possible to extract? [...] Conveniently, Gerrit stores all of this in the repository and we replicate it to our Git server farm. The complication is that, because of the amend/rebase-oriented nature of Gerrit's code review workflow, those earlier revisions don't appear in the branch history. Instead, they are tracked with named references which look like changes/XX/YYYY/Z where YYYY is the numeric index identifier for the change, XX is the last two digits of YYYY (for sharding purposes), and Z is the revision number starting from 1. Let's take this puppet-nova change as an example: https://opendev.org/openstack/puppet-nova/commit/d385ca8 There you can see Gitea is displaying an indicator that there is a named ref of changes/63/764763/5 associated with the commit. That means it was revision 5 of change 764763 in Gerrit. 
You can fetch that ref like so: git fetch origin changes/63/764763/5 That will fill FETCH_HEAD which you can then use with checkout, show, or any other relevant subcommands. Conveniently, if you instead want the first revision of 764763, all you have to do differently is adjust the revision number: git fetch origin changes/63/764763/1 Hopefully that's what you're looking for? -- Jeremy Stanley -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Mon Mar 22 16:09:10 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 22 Mar 2021 13:09:10 -0300 Subject: [CloudKitty] vPTG meeting dates Message-ID: Hello guys, We need to pick a date for our vPTG meeting. The suggested date and time is Monday April 19 13UTC - 14UTC (https://ethercalc.net/oz7q0gds9zfi), which would fit exactly in one of our Monday meetings. Does everybody else agree with this date and time? Let me know if that works for you. If it does, I will book the room. -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Mar 22 17:03:55 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 22 Mar 2021 18:03:55 +0100 Subject: [heat-translator] Ask new release 2.2.1 to fix Zuul jobs - RFE In-Reply-To: <0201d24a-f137-fb06-34b2-2c5a04c3a40b@hco.ntt.co.jp_1> References: <3c44044b-487b-0e6f-1889-358e5eafb74d@hco.ntt.co.jp_1> <0201d24a-f137-fb06-34b2-2c5a04c3a40b@hco.ntt.co.jp_1> Message-ID: @prometheanfire (Matthew): Do you agree? Le jeu. 18 mars 2021 à 01:06, Yoshito Ito a écrit : > Hi heat-translator core members, > > Thank you for your cooperation! We could merge the target patches to > master branch. Now we need to release 2.2.1 to fix the bug in Wallaby > release, so please check the release patch [1]. > > [1] https://review.opendev.org/c/openstack/releases/+/781175 > > Best regards, > > Yoshito Ito > > > On 2021/03/10 16:00, Yoshito Ito wrote: > > Hi heat-translator core members, > > > > I'd like you to review the following patches [1][2], and to ask if we > > can release 2.2.1 with these commits to fix our Zuul jobs. > > > > In release 2.2.0, our Zuul jobs are broken [3] because of new release of > > tosca-parser 2.3.0, which provides strict validation of required > > attributes in [4]. The patch [1] fix this issue by updating our wrong > > test samples. The other [2] is now blocked by this issue and was better > > to be merged in 2.2.0. > > > > I missed the 2.2.0 release patch [5] because none of us was added as > > reviewers. So after merging [1] and [2], I will submit a patch to make > > new 2.2.1 release. 
> > > > [1] https://review.opendev.org/c/openstack/heat-translator/+/779642 > > [2] https://review.opendev.org/c/openstack/heat-translator/+/778612 > > [3] https://bugs.launchpad.net/heat-translator/+bug/1918360 > > [4] > > > https://opendev.org/openstack/tosca-parser/commit/00d3a394d5a3bc13ed7d2f1d71affd9ab71e4318 > > > > [5] https://review.opendev.org/c/openstack/releases/+/777964 > > > > > > Best regards, > > > > Yoshito Ito > > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Mar 22 18:22:27 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 22 Mar 2021 11:22:27 -0700 Subject: [PTLs][All] vPTG April 2021 Team Signup In-Reply-To: References: Message-ID: Hey Everyone! Friendly reminder that the deadline to sign your team up for the upcoming PTG is this *Thursday, March 25 at 7:00 UTC.* To signup your team, you must complete *BOTH* the survey[1] AND reserve time in the ethercalc[2]. Once your team is signed up, please register! And remind your team to register! Registration is free, but since it is how we contact you with passwords, event details, etc. it is important! Continue to check back for updates at openstack.org/ptg. -the Kendalls (diablo_rojo & wendallkaters) [1] Team Survey: https://openinfrafoundation.formstack.com/forms/april2021_vptg_survey [2] Ethercalc Signup: https://ethercalc.net/oz7q0gds9zfi [3] PTG Registration: https://april2021-ptg.eventbrite.com On Mon, Mar 8, 2021 at 10:08 AM Kendall Nelson wrote: > Greetings! > > As you hopefully already know, our next PTG will be virtual again, and > held from Monday, April 19 to Friday, April 23. We will have the same > schedule set up available as last time with three windows of time spread > across the day to cover all timezones with breaks in between. > > *To signup your team, you must complete **BOTH** the survey[1] AND > reserve time in the ethercalc[2] by March 25 at 7:00 UTC.* > > We ask that the PTL/SIG Chair/Team lead sign up for time to have their > discussions in with 4 rules/guidelines. > > 1. Cross project discussions (like SIGs or support project teams) should > be scheduled towards the start of the week so that any discussions that > might shape those of other teams happen first. > 2. No team should sign up for more than 4 hours per UTC day to help keep > participants actively engaged. > 3. No team should sign up for more than 16 hours across all time slots to > avoid burning out our contributors and to enable participation in multiple > teams discussions. 
> > Again, you need to fill out BOTH the ethercalc AND the survey to complete > your team's sign up. > > If you have any issues with signing up your team, due to conflict or > otherwise, please let me know! While we are trying to empower you to make > your own decisions as to when you meet and for how long (after all, you > know your needs and teams timezones better than we do), we are here to help! > > Once your team is signed up, please register! And remind your team to > register! Registration is free, but since it will be how we contact you > with passwords, event details, etc. it is still important! > > Continue to check back for updates at openstack.org/ptg. > > -the Kendalls (diablo_rojo & wendallkaters) > > > [1] Team Survey: > https://openinfrafoundation.formstack.com/forms/april2021_vptg_survey > [2] Ethercalc Signup: https://ethercalc.net/oz7q0gds9zfi > [3] PTG Registration: https://april2021-ptg.eventbrite.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Stefan.Kelber at gmx.de Mon Mar 22 18:28:55 2021 From: Stefan.Kelber at gmx.de (Stefan Kelber) Date: Mon, 22 Mar 2021 19:28:55 +0100 Subject: OpenStack Victoria - Kolla-Ansible - Cinder - iSCSI-backend - Update driver status failed: config is uninitialized. Message-ID: An HTML attachment was scrubbed... URL: From mdemaced at redhat.com Mon Mar 22 18:40:04 2021 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Mon, 22 Mar 2021 15:40:04 -0300 Subject: [kuryr] vPTG April 2021 Message-ID: Hello, Two sessions were scheduled for Kuryr on the upcoming PTG: - 7-8 UTC on April 20 - 2-3 UTC on April 22 Everyone is more than welcome to join the sessions and check our future plans, give feedback or discuss anything regarding Kuryr. Even though participation is free registration is needed[1]. Regards, Maysa Macedo. [1] https://april2021-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Mon Mar 22 23:08:03 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 22 Mar 2021 18:08:03 -0500 Subject: [openstack-helm] No meeting tomorrow March 23rd Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting tomorrow, March 23rd, we will cancel it. Our next IRC meeting will be March 30th. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Mar 23 02:55:04 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 22 Mar 2021 20:55:04 -0600 Subject: [tripleo][ci] quay.io is down Message-ID: 0/ We can't have nice things. quay.io is currently down. quay.io is utilized for non-tripleo containers like grafana, etc. This has caused jobs to fail. We'll keep you posted. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Tue Mar 23 06:28:24 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 23 Mar 2021 11:58:24 +0530 Subject: [tripleo][ci] quay.io is down In-Reply-To: References: Message-ID: On Tue, Mar 23, 2021 at 8:29 AM Wesley Hayutin wrote: > 0/ > > We can't have nice things. quay.io is currently down. quay.io is > utilized for non-tripleo containers like grafana, etc. This has caused > jobs to fail. We'll keep you posted. > > Thanks! > Looks like it's back, but quay.io/app-sre/grafana:6.7.4 image is missing[1]. 
[1] https://447f476af5555fa473a2-ba0bbef8fa5bd9d33ddbd8694210833c.ssl.cf5.rackcdn.com/781622/2/check/tripleo-ci-centos-8-content-provider/e902500/logs/quickstart_install.log -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar at redhat.com Tue Mar 23 06:58:52 2021 From: chkumar at redhat.com (Chandan Kumar) Date: Tue, 23 Mar 2021 12:28:52 +0530 Subject: [tripleo][ci] quay.io is down In-Reply-To: References: Message-ID: Hello, On Tue, Mar 23, 2021 at 12:11 PM Rabi Mishra wrote: > > > > On Tue, Mar 23, 2021 at 8:29 AM Wesley Hayutin wrote: >> >> 0/ >> >> We can't have nice things. quay.io is currently down. quay.io is utilized for non-tripleo containers like grafana, etc. This has caused jobs to fail. We'll keep you posted. >> I have pulled the image from docker.io/grafana/grafana:6.7.4 and pushed to quay.io/tripleo/grafana:6.7.4. Here is the patch to update the names for the same: https://review.opendev.org/c/openstack/tripleo-common/+/782366 Waiting for the Ci job to give a green light. I hope it will bring content provider back to normal. Thanks, Chandan Kumar From fpantano at redhat.com Tue Mar 23 08:23:20 2021 From: fpantano at redhat.com (Francesco Pantano) Date: Tue, 23 Mar 2021 09:23:20 +0100 Subject: [tripleo][ci] quay.io is down In-Reply-To: References: Message-ID: Thanks all for the work and for pushing the grafana image under the TripleO namespace. I don't know why that tag is missing but I'm pretty sure ceph-ansible still uses the docker.io content. For the long term vision, do you think we should push the prometheus/node-exporter/alertmanager images under the tripleo namespace? @Giulio Fidente @John Fulton fyi On Tue, Mar 23, 2021 at 8:04 AM Chandan Kumar wrote: > Hello, > > On Tue, Mar 23, 2021 at 12:11 PM Rabi Mishra wrote: > > > > > > > > On Tue, Mar 23, 2021 at 8:29 AM Wesley Hayutin > wrote: > >> > >> 0/ > >> > >> We can't have nice things. quay.io is currently down. quay.io is > utilized for non-tripleo containers like grafana, etc. This has caused > jobs to fail. We'll keep you posted. > >> > > I have pulled the image from docker.io/grafana/grafana:6.7.4 and > pushed to quay.io/tripleo/grafana:6.7.4. > Here is the patch to update the names for the same: > https://review.opendev.org/c/openstack/tripleo-common/+/782366 > Waiting for the Ci job to give a green light. I hope it will bring > content provider back to normal. > > Thanks, > > Chandan Kumar > > > -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Mar 23 09:09:32 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 23 Mar 2021 10:09:32 +0100 Subject: [CloudKitty] vPTG meeting dates In-Reply-To: References: Message-ID: Hi Rafael, Given the time booked on the spreadsheet, I think you meant 13 - 15 UTC, with the second hour being the time slot of our IRC meeting. This works fine for me, thanks! Pierre On Mon, 22 Mar 2021 at 17:19, Rafael Weingärtner wrote: > > Hello guys, > We need to pick a date for our vPTG meeting. The suggested date and time is Monday April 19 13UTC - 14UTC (https://ethercalc.net/oz7q0gds9zfi), which would fit exactly in one of our Monday meetings. Does everybody else agree with this date and time? > > Let me know if that works for you. If it does, I will book the room. 
> > -- > Rafael Weingärtner From fpantano at redhat.com Tue Mar 23 09:42:46 2021 From: fpantano at redhat.com (Francesco Pantano) Date: Tue, 23 Mar 2021 10:42:46 +0100 Subject: [tripleo][ci] quay.io is down In-Reply-To: References: Message-ID: Adding +Guillaume Abrioux who is helping us mirroring the Ceph Dashboard containers to quay.ceph.io. After we have all the containers available on the related namespaces on quay.ceph.io, we can use the review [1] to test that everything works as expected and switch on the new location. Thanks, Francesco [1] https://review.opendev.org/c/openstack/tripleo-common/+/782385 On Tue, Mar 23, 2021 at 9:23 AM Francesco Pantano wrote: > Thanks all for the work and for pushing the grafana image under the > TripleO namespace. > I don't know why that tag is missing but I'm pretty sure ceph-ansible > still uses the docker.io content. > For the long term vision, do you think we should push the > prometheus/node-exporter/alertmanager images > under the tripleo namespace? > > @Giulio Fidente @John Fulton > fyi > > On Tue, Mar 23, 2021 at 8:04 AM Chandan Kumar wrote: > >> Hello, >> >> On Tue, Mar 23, 2021 at 12:11 PM Rabi Mishra wrote: >> > >> > >> > >> > On Tue, Mar 23, 2021 at 8:29 AM Wesley Hayutin >> wrote: >> >> >> >> 0/ >> >> >> >> We can't have nice things. quay.io is currently down. quay.io is >> utilized for non-tripleo containers like grafana, etc. This has caused >> jobs to fail. We'll keep you posted. >> >> >> >> I have pulled the image from docker.io/grafana/grafana:6.7.4 and >> pushed to quay.io/tripleo/grafana:6.7.4. >> Here is the patch to update the names for the same: >> https://review.opendev.org/c/openstack/tripleo-common/+/782366 >> Waiting for the Ci job to give a green light. I hope it will bring >> content provider back to normal. >> >> Thanks, >> >> Chandan Kumar >> >> >> > > -- > Francesco Pantano > GPG KEY: F41BD75C > -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue Mar 23 09:56:22 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 23 Mar 2021 10:56:22 +0100 Subject: [nova][placement] Xena PTG In-Reply-To: <929PPQ.V3OOB6GJ47RA3@est.tech> References: <929PPQ.V3OOB6GJ47RA3@est.tech> Message-ID: Hi, I booked slots for us in the ethercal on Wed, Thu, Fri, 14:00 - 17:00 UTC. I know that 14:00 UTC is pretty early in the west coast, so I will make sure that we will start the day slowly or on simple topics. In the other hand the 17:00 UTC is officially the end of the EU friendly slot, but I expect that for EU folks it is better to continue a bit longer there if needed than to keep awake until the 21:00 UTC US slot. So I will try to keep the exact ending time open. Any feedback is welcome. Cheers, gibi On Tue, Mar 9, 2021 at 12:25, Balazs Gibizer wrote: > Hi, > > As you probably already know that the next PTG will be held between > Apr 19 - 23. To organize the gathering I need your help: > 1) Please fill the doodle[1] with timeslots when you have time to > join to our sessions. Please do this before end of 21st of March. > 2) Please add your PTG topic in the etherpad[2]. If you feel your > topic needs a cross project cooperation please note that in the > etherpad which other teams are needed. 
> > Cheers, > gibi > > [1] https://doodle.com/poll/ib2eu3c4346iqii3 > [2] https://etherpad.opendev.org/p/nova-xena-ptg > > >
From rafaelweingartner at gmail.com Tue Mar 23 10:31:20 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 23 Mar 2021 07:31:20 -0300 Subject: [CloudKitty] vPTG meeting dates In-Reply-To: References: Message-ID: Yes, exactly. On Tue, Mar 23, 2021 at 6:10 AM Pierre Riteau wrote: > Hi Rafael, > > Given the time booked on the spreadsheet, I think you meant 13 - 15 > UTC, with the second hour being the time slot of our IRC meeting. > > This works fine for me, thanks! > > Pierre > > On Mon, 22 Mar 2021 at 17:19, Rafael Weingärtner > wrote: > > > > Hello guys, > > We need to pick a date for our vPTG meeting. The suggested date and time > is Monday April 19 13UTC - 14UTC (https://ethercalc.net/oz7q0gds9zfi), > which would fit exactly in one of our Monday meetings. Does everybody else > agree with this date and time? > > > > Let me know if that works for you. If it does, I will book the room. > > > > -- > > Rafael Weingärtner > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL:
From hberaud at redhat.com Tue Mar 23 11:00:42 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 23 Mar 2021 12:00:42 +0100 Subject: [oslo] fix flake8-hacking inconsistences on xena/wallaby Message-ID: Hello Osloers, As you have surely observed recently, the patches proposed by the openstack bot to configure wallaby fail with a flake8/hacking issue.
Here are some guidelines to fix the inconsistency:
- Patch pre-commit on master (now xena) (here is the patch: https://gist.github.com/4383/6e96c836bb1b0e1e0e599a5106f43f1a)
- Once Xena is patched, cherry-pick and backport these changes to Wallaby
- Rebase the openstack bot patches on top of this cherry-pick (or wait for the previous patch to merge)
You can copy the patch on oslo.db with the related commit message [1].
The root cause of the issue is that, with the introduction of pre-commit, we started to define the version of flake8 to use ourselves. Previously this version was defined by hacking's requirements.
Indeed, a few months ago we added pre-commit to allow us to run checks with git hooks and reduce the usage of our gates. These changes were standardized and spread across the whole oslo scope [2].
However, during the design of these changes [3], and after some discussion, we decided to pin the version of flake8 to use, which short-circuited hacking's management of flake8.
The solution is simply to trust hacking with its flake8 management. Hacking will pull the right version of flake8 and the inconsistency will disappear. flake8 provides a pre-commit hook, so it can still be called as a local target.
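A quick sanity check to go with the above (a small sketch, not part of the proposed patch): you can ask the installed hacking release which flake8 range it pins, so you know exactly what pre-commit will end up running once the explicit flake8 pin is dropped.

    # Print the flake8 requirement declared by the installed hacking package.
    # Python 3.8+; run it inside the same virtualenv/tox environment as the linters.
    from importlib.metadata import requires, version

    print('hacking', version('hacking'))
    for req in requires('hacking') or []:
        if req.lower().startswith('flake8'):
            print('pins:', req)

This keeps the gate and the local pre-commit run on the same flake8 version, which is the point of letting hacking drive it.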
[1] https://review.opendev.org/c/openstack/oslo.db/+/781470 [2] https://review.opendev.org/q/topic:%22oslo-pre-commit%22 [3] https://review.opendev.org/q/topic:%22pre-commit%22+(status:open%20OR%20status:merged)+project:openstack/oslo.cache -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Tue Mar 23 11:43:32 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Tue, 23 Mar 2021 12:43:32 +0100 Subject: [kolla] does someone uses AArch64 images? Message-ID: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> AArch64 support in Kolla(-Ansible) project is present since Ocata (iirc). For most of that time we supported all three distributions: CentOS, Debian and Ubuntu. # Distribution coverage CentOS has binary (with packages from RDO project) and source images. We supported version 7, 8 and now we are using CentOS Stream 8 in Wallaby. Debian is source only as there never was any source of up-to-date OpenStack packages (Debian OpenStack team builds only x86-64 packages and they do it after release). Ubuntu has both binary (packages from UCA) and source images. # My interest I take care of Debian/source ones as we use them at Linaro in our developer cloud setups. It often means backporting of fixes when we upgrade from one previous release to something newer. # External repos problem The problem in all cases is external repositories and their lack of support for AArch64 architecture. ## RabbitMQ Biggest problem is with RabbitMQ and Erlang. For x86-64 RMQ developers build packages for CentOS/Debian/Ubuntu. We use them. On AArch64 it differ: - CentOS uses erlang from CentOS distribution - Debian uses erlang from my Linaro OBS repository (where I do builds of backport of Debian 'testing' package). - Ubuntu uses erlang from Ubuntu distribution Ubuntu one is too old - 22.2 while current rabbitmq needs 22.3 or newer. I do not plan to work on getting it working. # Final question I would like to know does anyone use AArch64 images built with Kolla. Please respond which branches you use, distribution used as host OS and as images OS. From zigo at debian.org Tue Mar 23 13:36:44 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 23 Mar 2021 14:36:44 +0100 Subject: [kolla] does someone uses AArch64 images? In-Reply-To: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> References: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> Message-ID: <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> On 3/23/21 12:43 PM, Marcin Juszkiewicz wrote: > AArch64 support in Kolla(-Ansible) project is present since Ocata > (iirc). 
For most of that time we supported all three distributions: > CentOS, Debian and Ubuntu. > > > # Distribution coverage > > CentOS has binary (with packages from RDO project) and source images. We > supported version 7, 8 and now we are using CentOS Stream 8 in Wallaby. > > Debian is source only as there never was any source of up-to-date > OpenStack packages (Debian OpenStack team builds only x86-64 packages > and they do it after release). > > Ubuntu has both binary (packages from UCA) and source images. > > > # My interest > > I take care of Debian/source ones as we use them at Linaro in our > developer cloud setups. It often means backporting of fixes when we > upgrade from one previous release to something newer. Getting binary support is just a mater of rebuilding packages for arm64. I once did that for you setting-up a Jenkins machine just for rebuilding. It's really a shame that you guys aren't following that road. You'd have all of my support if you did. I personally don't have access to the necessary hardware for it, and wont use the arm64 repos... Cheers, Thomas Goirand (zigo) From martin at chaconpiza.com Tue Mar 23 12:45:12 2021 From: martin at chaconpiza.com (Martin Chacon Piza) Date: Tue, 23 Mar 2021 13:45:12 +0100 Subject: [Release-job-failures] Release of openstack/monasca-grafana-datasource for ref refs/tags/1.3.0 failed In-Reply-To: References: <20210319165909.7xiey7glo4mep5nq@yuggoth.org> <20210320144554.t7qvrsyjtqku5pe3@yuggoth.org> Message-ID: Hi, Thanks for the fix! It would have taken us longer to solve it. I can confirm that works, new release of monasca-grafana-datasource 1.3.0 is published: https://www.npmjs.com/package/monasca-grafana-datasource Best regards, Martin (chaconpiza) El lun, 22 de mar. de 2021 a la(s) 09:19, Herve Beraud (hberaud at redhat.com) escribió: > I confirm that the latest changes helped to fix the issue. > > The latest job has been successfully executed > https://zuul.openstack.org/build/042af4adb20c472583c278cd365a12ab > > I think that we can consider that this topic is now closed. > > > > Le sam. 20 mars 2021 à 15:49, Jeremy Stanley a écrit : > >> On 2021-03-19 16:59:09 +0000 (+0000), Jeremy Stanley wrote: >> > On 2021-03-19 07:51:14 +0100 (+0100), Herve Beraud wrote: >> > [...] >> > > @fungi: Can we try to reenqueue the job? >> > [...] >> > >> > In doing so, we discovered that Ubuntu Focal doesn't like and isn't >> > covered by the https://deb.nodesource.com/node_8.x/ package >> > repository, so I've proposed a pair of changes to try using Node 10 >> > instead, which should (in theory) work on Focal: >> > >> > https://review.opendev.org/781825 >> > https://review.opendev.org/781826 >> >> That's gotten farther, hopefully https://review.opendev.org/781966 >> will address the next bug we've exposed. 
>> -- >> Jeremy Stanley >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Stefan.Kelber at gmx.de Tue Mar 23 14:08:40 2021 From: Stefan.Kelber at gmx.de (Stefan Kelber) Date: Tue, 23 Mar 2021 15:08:40 +0100 Subject: [kolla] Feedback request: removing OracleLinux support Message-ID: Hello, i just found this thread, in the situation to apparently have to move to ORACLE Linux, away from CentOS, as long as potential licening implications are up in the air, if i got all that right so far. My questions actually are now: - Would i have trouble today, if using ORACLE Linux for implementing Victoria via Kolla-Ansible? - And/or would it be feasible for the mid term to reactivate support for ORACLE LINUX, until the dust has settled? - On the long: will streaming change how OpenStack will be deployed via Kolla-Ansible? Unfortunately i have to implement a project these days, not allowing me to wait until then. Unfortunate timing... Best Stefan From rosmaita.fossdev at gmail.com Tue Mar 23 14:56:27 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 23 Mar 2021 10:56:27 -0400 Subject: [cinder] review priorities right now Message-ID: <821c23e9-3b35-30cb-9431-7af7c8f7b118@gmail.com> Please focus on the following two items in preparing for RC-1 on Thursday. 1. There are a few follow-up patches fixing bugs in driver features listed at the top of this etherpad: https://etherpad.opendev.org/p/cinder-wallaby-features 2. There's also a series of patches fixing bugs in Cinder quotas, which as you all know, have been a perennial pain point for operators. Most of these already have at least one +2. Start here: https://review.opendev.org/c/openstack/cinder/+/778182 and work your way up the "Relation chain" displayed in gerrit. Here's the complete list: https://review.opendev.org/q/topic:%22unused-quota-classes%22+(status:open%20OR%20status:merged) Happy Reviewing! brian From thierry at openstack.org Tue Mar 23 16:20:40 2021 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 23 Mar 2021 17:20:40 +0100 Subject: [largescale-sig] Video meeting Mar 24: "Scaling RabbitMQ Clusters" In-Reply-To: References: Message-ID: Thierry Carrez wrote: > The Large Scale SIG organizes video meetings around specific scaling > topics, just one hour of discussion bootstrapped by a short presentation > from operators of large scale deployments of OpenStack. > > Last month we did one on "Regions vs. Cells" which was pretty popular. 
> Our next video meeting will be next Wednesday, March 24, at 1500 UTC > on Zoom (link/password will be provided on this thread the day before > the event). The theme will be: > > "Scaling RabbitMQ Clusters" > quickstarted by a presentation from Gene Kuo (LINE). Here is the Zoom link for the meeting (which will be recorded): https://zoom.us/j/93691892028 The meeting password will be: "largescale" Please join us! -- Thierry Carrez (ttx)
From geguileo at redhat.com Tue Mar 23 17:46:21 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 23 Mar 2021 18:46:21 +0100 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: References: Message-ID: <20210323174621.yf6ifhvkpmv5fw4m@localhost> On 09/03, Lee Yarwood wrote: > Hello all, > > I reported the following bug last week but I've yet to get any real > feedback after asking a few times in irc. > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > https://bugs.launchpad.net/cinder/+bug/1917750 > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > with Fedora 32 and lioadm I don't see the WWN conflict between the two > backends. Does anyone know if using lioadm is an option on Focal? > > Thanks in advance, > > Lee > >
Hi Lee, Sorry for the late reply. I started looking at the case some time ago but got "distracted" with some other issue. I am no expert on STGT, since I always work with LIO, but from what I could gather this seems to be caused by the combination of:
- Using the tgtadm helper
- Having 2 different cinder-volume services running on 2 different hosts (one on the compute node and another on the controller).
- Using the same volume_backend_name for both LVM backends.
If we were running a single cinder-volume service with 2 backends this issue wouldn't happen (I checked). If we used a different volume_backend_name for each of the 2 services and used a volume type picking one of them for the operations, this wouldn't happen either. If we used LIO instead, this wouldn't happen.
The cause is the automatic generation of the serial/WWN for volumes by STGT, which seems to be deterministic. The first target created on a host will have a 60000000000000000e0000000001 prefix and then the LUN number (the 3 before it that we see in the connection_info just states that the WWN is of NAA type).
This means that the first volume exposed by STGT on any host will ALWAYS have the same WWN and will mess things up if we attach them to the same host, because the premise of a WWN is its uniqueness, and everything in Cinder and OS-Brick assumes this and will not be changed.
For LIO it seems that the generation of the serial/WWN is non-deterministic (or at least not the same on all hosts), so the issue won't happen in this specific deployment configuration.
So the options to prevent this issue are to run both backends on the controller node, use a different volume_backend_name for each backend plus a volume type, or use LIO. Cheers, Gorka.
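To make the failure mode above concrete, here is a toy sketch of what a deterministic serial/WWN scheme implies. This is not tgtd's actual implementation, only an illustration built from the prefix quoted in Gorka's message:

    # Toy model: the identifier is a fixed prefix plus the LUN number, with no
    # per-host or per-volume randomness (illustration only, not tgtd's real code).
    PREFIX = '60000000000000000e0000000001'   # prefix quoted in the thread

    def toy_wwn(lun_number):
        # The leading '3' only marks the identifier as NAA type in connection_info.
        return '3' + PREFIX + '%x' % lun_number

    controller_first_volume = toy_wwn(1)   # first LUN exported on the controller
    compute_first_volume = toy_wwn(1)      # first LUN exported on the compute node

    # Both hosts hand out the same "unique" identifier, so a host attaching one
    # volume from each backend ends up with two devices claiming one WWN.
    assert controller_first_volume == compute_first_volume
    print(controller_first_volume)

With LIO, or with both backends running on a single host, the identifiers no longer collide, which matches the workarounds listed above.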
From lyarwood at redhat.com Tue Mar 23 18:44:55 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 23 Mar 2021 18:44:55 +0000 Subject: [cinder][nova] Running parallel iSCSI/LVM c-vol backends is causing random failures in CI In-Reply-To: <20210323174621.yf6ifhvkpmv5fw4m@localhost> References: <20210323174621.yf6ifhvkpmv5fw4m@localhost> Message-ID: On Tue, 23 Mar 2021 at 17:46, Gorka Eguileor wrote: > > On 09/03, Lee Yarwood wrote: > > Hello all, > > > > I reported the following bug last week but I've yet to get any real > > feedback after asking a few times in irc. > > > > Running parallel iSCSI/LVM c-vol backends is causing random failures in CI > > https://bugs.launchpad.net/cinder/+bug/1917750 > > > > AFAICT tgtadm is causing this behaviour. As I've stated in the bug > > with Fedora 32 and lioadm I don't see the WWN conflict between the two > > backends. Does anyone know if using lioadm is an option on Focal? > > > > Thanks in advance, > > > > Lee > > > > > > Hi Lee, > > Sorry for the late reply. > > I started looking at the case some time ago but got "distracted" with > some other issue. > > I am no expert on STGT, since I always work with LIO, but from I could > gather this seems to be caused by the conjunction of us: > > - Using the tgtadm helper > - Having 2 different cinder-volume services running on 2 different hosts > (one in compute and another on controller). > - Using the same volume_backend_name for both LVM backends. > > If we were running a single cinder-volume service with 2 backends this > issue wouldn't happen (I checked). > > If we used a different volume_backend_name for each of the 2 services > and used a volume type picking one of them for the operations, this > wouldn't happen either. > > If we used LIO instead, this wouldn't happen. > > The cause is the automatic generation of serial/wwn for volumes by the > STGT, that seems to be deterministic. First target created on a host > will be have a 60000000000000000e0000000001 prefix and then the LUN > number (the 3 before it that we see in the connection_info is just to > state that the WWN is of NAA type). > > This means that the first volume exposed by STGT on any host will ALWAYS > have the same WWN and will mess things up if we attach them to the same > host, because the premise of a WWN is its uniqueness and everything in > Cinder and OS-Brick assumes this and will not be changed. > > For LIO it seems that the generation of the seria/wwn is non > deterministic (or at least not the same on all hosts) so the issue won't > happen in this specific deployment configuration. > > So the options to prevent this issue are to run both backends on the > controller node, use different volume_backend_name and a volume type, or > use LIO. Thanks Gorka, Just to copy my reply from the bug here. I'm not entirely sure how using a different volume_backend_name would help? As you say above the first target on both hosts would still have the 60000000000000000e0000000001 prefix regardless of the name right? Moving to a single service multibackend approach would be best but given required job changes etc isn't something I think we can do in the short term. 
Moving to lioadm is still my preferred short term solution to this with the following devstack change awaiting reviews below: cinder: Default CINDER_ISCSI_HELPER to lioadm on Ubuntu https://review.opendev.org/c/openstack/devstack/+/779624 Cheers, Lee From ikatzir at infinidat.com Tue Mar 23 22:09:33 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Wed, 24 Mar 2021 00:09:33 +0200 Subject: [ironic] How to move nodes from a 'clean failed' state into 'Available' Message-ID: Hello Team, I had a situation where my undercloud-node had a problem with it’s disk and has disconnected from overcloud. I couldn’t restore the undercloud controller and ended up re-installing it (running 'openstack undercloud install’). The installation ended successfully but now I’m in a situation where Cleanup of the overcloud deployed nodes fails: (undercloud) [stack at interop010 ~]$ openstack baremetal node list +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | power on | clean failed | True | | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | power on | clean failed | True | +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ I’ve tried to move node to available state but cannot: (undercloud) [stack at interop010 ~]$ openstack baremetal node provide 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 The requested action "provide" can not be performed on node "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". (HTTP 400) My question is: How do I make the nodes available again? as the deployment of overcloud fails with: ERROR due to "Message: No valid host was found. , Code: 500” Thanks, Igal -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Mar 24 03:07:14 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 24 Mar 2021 00:07:14 -0300 Subject: [cinder] Bug deputy report for week of 2021-03-15 Message-ID: Hello, This is a bug report from 2021-03-16 to 2021-03-24. You're welcome to join the next Cinder Bug Meeting tomorrow. - Weekly on Wednesday at 1500 UTC in #openstack-cinder - Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting Critical:- High: - https://bugs.launchpad.net/oslo.db/+bug/1814199 :" soft_delete is wrong". Assigned to Gorka Eguileor. Medium: - https://bugs.launchpad.net/cinder/+bug/1298135 :" Cinder should handle token expiration for long ops". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1920870 :" [Storwize] Fix issue while extending a volume with replication enabled". Assigned to Venkata krishna Thumu (venkatakt). - https://bugs.launchpad.net/cinder/+bug/1920237 :" race in backup manager remove_export call". Assigned to Eric Harney. Low:- Incomplete: - https://bugs.launchpad.net/cinder/+bug/1920912. " Add volume to a group which was created from a group snapshot of another consistency group fails and the group goes into error state". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1920890: " Retype of in use Hyperswap volume failing". Unassigned. Regards, Sofi -- L. 
Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat -------------- next part -------------- An HTML attachment was scrubbed... URL:
From raubvogel at gmail.com Tue Mar 23 19:12:25 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Tue, 23 Mar 2021 15:12:25 -0400 Subject: [nova] Cannot use --fields in one of our headnodes Message-ID: Easy question of the day: One of our headnodes is barking when I use --fields with nova.
nova list --all --fields=host,instance_name,status
ERROR (TypeError): '<' not supported between instances of 'str' and 'NoneType'
This works fine with the other headnodes, and all 3 are supposed to be the same (hardware, software). Would that indicate a bad installation or something else? Where should I be looking for answers?
From skaplons at redhat.com Tue Mar 23 20:55:22 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 23 Mar 2021 21:55:22 +0100 Subject: [neutron] vPTG meetings Message-ID: <4293638.1034814kQe@p1> Hi, Based on information from the doodle, I just booked 4 slots for us, 3h each:
* Monday 1300 - 1600 UTC
* Tuesday 1300 - 1600 UTC
* Thursday 1300 - 1600 UTC
* Friday 1300 - 1600 UTC
I think that this should be enough. If not, we can try to set up some ad-hoc meetings during the PTG week if needed. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL:
From bkslash at poczta.onet.pl Wed Mar 24 08:39:53 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Wed, 24 Mar 2021 09:39:53 +0100 Subject: [rabbitmq][kolla-ansible] - RabbitMQ disconnects - 60s timeout In-Reply-To: References: <21A9D0D4-31CF-4B08-B358-5A7F63C44E3A@poczta.onet.pl> <95CE1C1C-1818-4199-A926-C357A14F1D86@poczta.onet.pl> , Message-ID: <1D8FE1C8-AAB4-4A58-819B-C4CCBBA7DDF6@poczta.onet.pl> Hi Mark, I did it (heartbeat_timeout_threshold set to 360s, 4 heartbeats in this time instead of the default two), restarted all containers and… still got similar errors, but this time without the „60s timeout”:
/var/log/kolla/rabbitmq/rabbit at os-ppu-controller1.log:2021-03-24 08:01:55.622 [error] <0.24155.9> closing AMQP connection <0.24155.9> (x.x.x.100:40812 -> x.x.x.100:5672 - mod_wsgi:698:950db474-56b6-434c-80fa-6827f0a1ba34):
/var/log/kolla/rabbitmq/rabbit at os-ppu-controller1.log:2021-03-24 08:01:55.625 [info] <0.24977.9> Closing all channels from connection ‚<<"x.x.x.100:40812 -> x.x.x.100:5672">>' because it has been closed
/var/log/kolla/nova/nova-api-wsgi.log:2021-03-24 06:00:39.453 695 ERROR oslo.messaging._drivers.impl_rabbit [-] [899a88f9-d779-4747-8861-6734a2eb2924] AMQP server on x.x.x.100:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError:
/var/log/kolla/nova/nova-api-wsgi.log:2021-03-24 06:00:39.454 695 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Too many heartbeats missed
/var/log/kolla/nova/nova-api-wsgi.log:2021-03-24 07:53:55.611 696 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Server unexpectedly closed connection
/var/log/kolla/nova/nova-api-wsgi.log:2021-03-24 07:53:55.975 698 INFO oslo.messaging._drivers.impl_rabbit [-] [a8a0d91e-697d-4c7d-83c4-c753559cadf3] Reconnected to AMQP server on x.x.x.111:5672 via [amqp] client with port 33694.
In addition another problem came up after restarting Neutron - DHCP port in Octavia management network went down (so it blocks creating new loadbalancers, because amphora can’t get IP address) and I can’t change it’s status to ACTIVE: | 778c7248-2224-43aa-a0cf-3c4a8e48e55a | | fa:16:3e:2b:38:d7 | ip_address=y.y.y.21', subnet_id='0aa85b81-b5d8-457a-85ab-51b7b6374e02' | DOWN $ neutron port-update 778c7248-2224-43aa-a0cf-3c4a8e48e55a --status ACTIVE neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. Cannot update read-only attribute status Neutron server returns request_ids: ['req-5682582a-6db7-4cdc-b86a-f703cedf83fd’] I can ping this IP while it shows status DOWN! Best regards, Adam > Wiadomość napisana przez Mark Goddard w dniu 22.03.2021, o godz. 15:45: > > On Mon, 22 Mar 2021 at 13:22, Adam Tomas wrote: >> >> Hi Mark, >> this happens without any noticeable load on Openstack. Where should I put [oslo_messaging_rabbitmq] heartbeat_timeout_threshold in kolla-ansible? I can’t find any oslo.config file… Should it be in .conf file of every service? >> Best regards >> Adam > > Yes, any service in which you are seeing heartbeat timeouts. >> >> >> Wiadomość napisana przez Mark Goddard w dniu 22.03.2021, o godz. 10:49: >> >> On Thu, 18 Mar 2021 at 16:05, Adam Tomas wrote: >> >> >> Hi, >> I have a problem with rabbitmq heartbeats timeout (described also here: https://bugzilla.redhat.com/show_bug.cgi?id=1711794). Although I have threads=1 the problem still persists, generating a lot of messages in logs: >> >> 2021-03-18 15:17:17.482 [error] <0.122.51> closing AMQP connection <0.122.51> (x.x.x.100:60456 -> x.x.x.100:5672 - mod_wsgi:699:9a813fcb-c29f-4886-82bc-00bf478b6b64): >> missed heartbeats from client, timeout: 60s >> 2021-03-18 15:17:17.484 [info] <0.846.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed >> 2021-03-18 15:18:15.918 [error] <0.150.51> closing AMQP connection <0.150.51> (x.x.x.111:41934 -> x.x.x.100:5672 - mod_wsgi:697:b608c7b8-9644-434e-93af-c00222c0a700): >> missed heartbeats from client, timeout: 60s >> 2021-03-18 15:18:15.920 [error] <0.153.51> closing AMQP connection <0.153.51> (x.x.x.111:41936 -> x.x.x.100:5672 - mod_wsgi:697:77348197-148b-41a6-928f-c5eddfab57c9): >> missed heartbeats from client, timeout: 60s >> 2021-03-18 15:18:15.920 [info] <0.1527.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed >> 2021-03-18 15:18:15.922 [info] <0.1531.51> Closing all channels from connection '< x.x.x.100:5672">>' because it has been closed >> 2021-03-18 15:20:16.080 [info] <0.2196.51> accepting AMQP connection <0.2196.51> (x.x.x.111:34826 -> x.x.x.100:5672) >> 2021-03-18 15:20:16.080 [info] <0.2199.51> accepting AMQP connection <0.2199.51> (x.x.x.111:34828 -> x.x.x.100:5672) >> >> I’ve set heartbeat = 600 in rabbitmq.conf and still get disconnections after 60s timeout… How to set proper timeout to avoid disconnections? >> >> >> Hi Adam, >> >> I have seen similar messages in the past, but haven't really looked >> into it. It seems to happen during some intensive processes like >> encrypted cinder volume creation. >> >> Have you tried configuring oslo.messaging? For example, >> [oslo_messaging_rabbitmq] heartbeat_timeout_threshold. 
>> >> https://docs.openstack.org/oslo.messaging/latest/configuration/opts.html >> >> Mark >> >> >> Best regards >> Adam >> >> From ikatzir at infinidat.com Wed Mar 24 09:09:42 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Wed, 24 Mar 2021 11:09:42 +0200 Subject: [ironic] Cannot move nodes from state 'clean failed' into provisioning state 'Available' In-Reply-To: References: Message-ID: <72446832-3F0C-4D88-AE08-766BA5FA1A6A@infinidat.com> Hello all, While troubleshooting this, another observation I see is that when I run put the node in state provide: 'openstack baremetal node provide 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6’ It starts the cleaning process, then the node boots into PXE but the undercloud ignores it. When I tap the port I see that requests reach its interface: (undercloud) [stack at interop010 ~]$ sudo tcpdump -i br-ctlplane 10:43:10.600421 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from a0:36:9f:95:dd:e2 (oui Unknown), length 548 But on the same time the dnsmasq ignores it: (undercloud) [stack at interop010 ~]$ sudo tail -f /var/log/containers/ironic-inspector/dnsmasq.log Mar 24 10:39:43 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) 6c:ae:8b:69:ee:80 ignored Mar 24 10:40:36 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored Mar 24 10:40:39 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored Mar 24 10:40:48 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) 6c:ae:8b:69:ee:80 ignored Mar 24 10:41:52 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) 6c:ae:8b:69:ee:80 ignored Mar 24 10:42:57 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) 6c:ae:8b:69:ee:80 ignored Mar 24 10:43:06 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored Mar 24 10:43:10 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored Mar 24 10:43:14 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored Why is that? What is needed for the cleanup to start? Thanks, Igal > On 24 Mar 2021, at 0:09, Igal Katzir wrote: > > Hello Team, > > I had a situation where my undercloud-node had a problem with it’s disk and has disconnected from overcloud. > I couldn’t restore the undercloud controller and ended up re-installing it (running 'openstack undercloud install’). > The installation ended successfully but now I’m in a situation where Cleanup of the overcloud deployed nodes fails: > > (undercloud) [stack at interop010 ~]$ openstack baremetal node list > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | power on | clean failed | True | > | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | power on | clean failed | True | > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > I’ve tried to move node to available state but cannot: > (undercloud) [stack at interop010 ~]$ openstack baremetal node provide 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 > The requested action "provide" can not be performed on node "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". (HTTP 400) > > My question is: > How do I make the nodes available again? 
> as the deployment of overcloud fails with: > ERROR due to "Message: No valid host was found. , Code: 500” > > Thanks, > Igal -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Mar 24 09:36:34 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 24 Mar 2021 09:36:34 +0000 Subject: [kolla] Feedback request: removing OracleLinux support In-Reply-To: References: Message-ID: On Tue, 23 Mar 2021 at 14:15, Stefan Kelber wrote: > > Hello, Hi Stefan, > > i just found this thread, in the situation to apparently have to move to ORACLE Linux, away from CentOS, as long as potential licening implications are up in the air, if i got all that right so far. > My questions actually are now: > - Would i have trouble today, if using ORACLE Linux for implementing Victoria via Kolla-Ansible? We dropped support for OracleLinux in the Train cycle [1]. We have a large support matrix and a small team in Kolla, and after Oracle left the community, we looked for feedback on usage of OracleLinux and did not find any users. [1] https://docs.openstack.org/releasenotes/kolla/train.html#relnotes-9-0-0-stable-train-upgrade-notes > - And/or would it be feasible for the mid term to reactivate support for ORACLE LINUX, until the dust has settled? I don't expect to see much support within the community for that. > - On the long: will streaming change how OpenStack will be deployed via Kolla-Ansible? I will assume you mean CentOS stream here. We are switching to a CentOS stream base container image in the Wallaby release. It remains to be seen how stable that will be. Typically a CentOS Linux minor release will cause us some breakage, and we will expect to see that occurring with CentOS stream, possibly on a more frequent basis. If you are not confident in CentOS stream, you might consider Ubuntu or Debian which are also supported by Kolla. > > Unfortunately i have to implement a project these days, not allowing me to wait until then. > Unfortunate timing... > > Best > > Stefan > From xin-ran.wang at intel.com Wed Mar 24 10:19:44 2021 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Wed, 24 Mar 2021 10:19:44 +0000 Subject: [cyborg] No meeting tomorrow March 25th Message-ID: Hi guys, The Cyborg IRC meeting tomorrow has been cancelled. We will meet again at the regular time next week. Please feel free to contact the team on IRC channel, or send out mail here. Thanks for your understanding. Thanks, Xin-Ran -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From luke.camilleri at zylacomputing.com Wed Mar 24 11:08:35 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Wed, 24 Mar 2021 12:08:35 +0100 Subject: [vpnaas-neutron][Victoria] - could not load neutron_vpnaas Message-ID: We are configuring Neutron's vpnaas extension and although the guide seems to be quite short, it looks like that there are some dependencies that we are not aware of since we are receiving this error when we start the neutron server: WARNING stevedore.named [req-dca2e9a8-1799-45c7-9380-f247510af7e5 - - - - -] Could not load neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver We have followed this guide https://docs.openstack.org/neutron/latest/admin/vpnaas-scenario.html and before that we have also installed the below packages from the repos: openstack-neutron-vpnaas-17.0.0-2.el8.noarch libreswan-3.32-7.el8_3.x86_64 our config files are the below: l3_agent.ini [DEFAULT] interface_driver = linuxbridge debug = true [AGENT] extensions = vpnaas [vpnagent] vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver neutron_vpnaas.conf [service_providers] service_provider = VPN:openswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default vpn_agent.ini is empty since the guide documents the changes in the l3_agent.ini file in neutron.conf we have also added the below: service_plugins = router,vpnaas At this point, my main question would be, is VPNaaS supported with linux bridge or just with OpenvSwitch? Thanks in advance From ssbarnea at redhat.com Wed Mar 24 12:44:59 2021 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Wed, 24 Mar 2021 05:44:59 -0700 Subject: [oslo] fix flake8-hacking inconsistences on xena/wallaby In-Reply-To: References: Message-ID: As one of the early adopters and supporters of pre-commit *tool* I should mention few things I observed. pre-commit tool by design does pin all the hooks/linters/checks, so it is mainly making use of 'hacking' package irrelevant. On many tripleo repositories we ended up removing hacking and I do not remember getting into any problems due to flake8 or its plugins so far.. My personal recommendation is to avoid the use of "repo: local", especial for calling an external tools (flake8). This makes you loose the ability to upgrade the linter using `pre-commit auto-update`. Local hooks are designed for running custom checks hosted in-repo. Sidenote: when using additional dependencies with pre-commit, there is no pinning, so there is still a chance you may want to use hacking or at least manually control the version of the extra packages you want to inject inside the "hook". If we still have a strong case for using hacking, I think it should be converted to be usable as a hook itself, one that calls flake8. If doing this it will be possible to make use of pre-commit pin to tag and fully control the bumping of flake8 with all the deps involved. -- /zbr On 23 Mar 2021 at 11:00:42, Herve Beraud wrote: > Hello Osloers, > > As you surely recently observed patches proposed by the openstack bot to > configure wallaby fail with a flake8/hacking issue. 
> > Here are some guideline to fix the problem the inconsistency: > - Patch pre-commit on master (now xena) (here is the patch: > https://gist.github.com/4383/6e96c836bb1b0e1e0e599a5106f43f1a) > - Once Xena is patched cherry-pick and backport these changes on Wallaby > - Rebase the openstack bot patches on the top of this cherry-pick (or wait > for the merge of the previous one patch) > > You can copy the patch on oslo.db with the related commit message [1]. > > The root cause of the issue was that with the introduction of pre-commit > we started to define the version of flake8 to use. Previously this version > was defined by hacking's requirements. > > Indeed a few months ago we added pre-commit to allow us to run checks with > git hooks and reduce the usage of our gates. These changes were > standardized and spread on all the scope of oslo [2]. > > However, during the design of these changes [3] and after some discussion > we decided to pin the version of flake8 to use, hence by doing this we > short circuited hacking on its management of flake8. > > The solution to solve this issue is simply to trust hacking on its flake8 > management. Hacking will pull the right version of flake8 and the > inconsistency will disappear. flake8 provides a pre-commit hook so it could > be seen and called as a local target. > > [1] https://review.opendev.org/c/openstack/oslo.db/+/781470 > [2] https://review.opendev.org/q/topic:%22oslo-pre-commit%22 > [3] > https://review.opendev.org/q/topic:%22pre-commit%22+(status:open%20OR%20status:merged)+project:openstack/oslo.cache > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar at redhat.com Wed Mar 24 13:45:52 2021 From: chkumar at redhat.com (Chandan Kumar) Date: Wed, 24 Mar 2021 19:15:52 +0530 Subject: [tripleo][ci] quay.io is down In-Reply-To: References: Message-ID: On Tue, Mar 23, 2021 at 3:13 PM Francesco Pantano wrote: > > Adding +Guillaume Abrioux who is helping us mirroring the Ceph Dashboard containers to quay.ceph.io. > > After we have all the containers available on the related namespaces on quay.ceph.io, we can use the > review [1] to test that everything works as expected and switch on the new location. That's great, Francesco! > Thank you everyone for the patience. All the patches are merged now: https://review.opendev.org/q/I6999e683edd82ef61fba768013b12408851f6f09 The line is clear. Please workflow the patches with caution :-) Happy Hacking. 
Thanks, Chandan Kumar From hberaud at redhat.com Wed Mar 24 14:31:50 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 24 Mar 2021 15:31:50 +0100 Subject: [oslo] fix flake8-hacking inconsistences on xena/wallaby In-Reply-To: References: Message-ID: Hello Sorin, Thanks for your feedback, they are much appreciated. Le mer. 24 mars 2021 à 13:45, Sorin Sbarnea a écrit : > As one of the early adopters and supporters of pre-commit *tool* I should > mention few things I observed. > > pre-commit tool by design does pin all the hooks/linters/checks, so it is > mainly making use of 'hacking' package irrelevant. On many tripleo > repositories we ended up removing hacking and I do not remember getting > into any problems due to flake8 or its plugins so far.. > Here our problem was that we recently started to define the version of flake8 to use and that version wasn't compatible with the one supported by hacking. The solution is simply to let hacking manage these own requirements. We don't want to drop hacking. > My personal recommendation is to avoid the use of "repo: local", especial > for calling an external tools (flake8). This makes you loose the ability to > upgrade the linter using `pre-commit auto-update`. Local hooks are designed > for running custom checks hosted in-repo. > We don't want to use auto-update. We want to control the flow of supported hacking versions and its supported rules. We don't want to suffer each new version of hacking. > Sidenote: when using additional dependencies with pre-commit, there is no > pinning, so there is still a chance you may want to use hacking or at least > manually control the version of the extra packages you want to inject > inside the "hook". > Please can you share docs or links that refer to this point? I didn't find that point in the official documentation. https://pre-commit.com/#config-additional_dependencies > > If we still have a strong case for using hacking, I think it should be > converted to be usable as a hook itself, one that calls flake8. If doing > this it will be possible to make use of pre-commit pin to tag and fully > control the bumping of flake8 with all the deps involved. > Indeed, it's a good idea. -- > /zbr > > > On 23 Mar 2021 at 11:00:42, Herve Beraud wrote: > >> Hello Osloers, >> >> As you surely recently observed patches proposed by the openstack bot to >> configure wallaby fail with a flake8/hacking issue. >> >> Here are some guideline to fix the problem the inconsistency: >> - Patch pre-commit on master (now xena) (here is the patch: >> https://gist.github.com/4383/6e96c836bb1b0e1e0e599a5106f43f1a) >> - Once Xena is patched cherry-pick and backport these changes on Wallaby >> - Rebase the openstack bot patches on the top of this cherry-pick (or >> wait for the merge of the previous one patch) >> >> You can copy the patch on oslo.db with the related commit message [1]. >> >> The root cause of the issue was that with the introduction of pre-commit >> we started to define the version of flake8 to use. Previously this version >> was defined by hacking's requirements. >> >> Indeed a few months ago we added pre-commit to allow us to run checks >> with git hooks and reduce the usage of our gates. These changes were >> standardized and spread on all the scope of oslo [2]. >> >> However, during the design of these changes [3] and after some discussion >> we decided to pin the version of flake8 to use, hence by doing this we >> short circuited hacking on its management of flake8. 
>> >> The solution to solve this issue is simply to trust hacking on its flake8 >> management. Hacking will pull the right version of flake8 and the >> inconsistency will disappear. flake8 provides a pre-commit hook so it could >> be seen and called as a local target. >> >> [1] https://review.opendev.org/c/openstack/oslo.db/+/781470 >> [2] https://review.opendev.org/q/topic:%22oslo-pre-commit%22 >> [3] >> https://review.opendev.org/q/topic:%22pre-commit%22+(status:open%20OR%20status:merged)+project:openstack/oslo.cache >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Wed Mar 24 15:25:04 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 24 Mar 2021 15:25:04 +0000 Subject: [nova] Cannot use --fields in one of our headnodes In-Reply-To: References: Message-ID: On Wed, 24 Mar 2021 at 15:17, Mauricio Tavares wrote: > > Easy question of the day: One of our headnodes is barking when I use > --fields with nova. > > nova list --all --fields=host,instance_name,status > ERROR (TypeError): '<' not supported between instances of 'str' and 'NoneType' > > This works fine with the other headnodes, and all 3 are supposed to be > the same (hardware, software). Would that indicate a bad installation > or something else? Where should I be looking for answers? Looks like a simple bug in the version of python-novaclient on this host, can you try using --debug to get some more information and comparing the version between a working and non-working host? 
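A rough way to make that comparison, assuming the clients were installed with pip (adjust for distro packages):

    # run on a working headnode and on the failing one, then diff the output
    pip freeze | grep -i novaclient
    # --debug should also print the full client-side traceback behind the TypeError
    nova --debug list --all --fields=host,instance_name,status

If the versions differ, aligning the failing headnode with the python-novaclient version from a working one is the quickest thing to try.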
From jay.faulkner at verizonmedia.com Wed Mar 24 15:26:25 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Wed, 24 Mar 2021 08:26:25 -0700 Subject: [E] [ironic] How to move nodes from a 'clean failed' state into 'Available' In-Reply-To: References: Message-ID: A node in CLEAN FAILED must be moved to MANAGEABLE state before it can be told to "provide" (which eventually puts it back in AVAILABLE). Try this: `openstack baremetal node manage UUID`, then run the command with "provide" as you did before. The available states and their transitions are documented here: https://docs.openstack.org/ironic/latest/contributor/states.html I'll note that if cleaning failed, it's possible the node is misconfigured in such a way that will cause all deployments and cleanings to fail (e.g.; if you're using Ironic with Nova, and you attempt to provision a machine and it errors during deploy; Nova will by default attempt to clean that node, which may be why you see it end up in clean failed). So I strongly suggest you look at the last_error field on the node and attempt to determine why the failure happened before retrying. Good luck! -Jay Faulkner On Wed, Mar 24, 2021 at 8:20 AM Igal Katzir wrote: > Hello Team, > > I had a situation where my *undercloud-node *had a problem with it’s disk > and has disconnected from overcloud. > I couldn’t restore the undercloud controller and ended up re-installing it > (running 'openstack undercloud install’). > The installation ended successfully but now I’m in a situation where > Cleanup of the overcloud deployed nodes fails: > > (undercloud) [stack at interop010 ~]$ openstack baremetal node list > > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > | UUID | Name | Instance > UUID | Power State | Provisioning State | Maintenance | > > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | > power on | clean failed | True | > | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | > power on | clean failed | True | > > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > I’ve tried to move node to available state but cannot: > (undercloud) [stack at interop010 ~]$ openstack baremetal node provide > 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 > The requested action "provide" can not be performed on node > "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". > (HTTP 400) > > My question is: > *How do I make the nodes available again?* > as the deployment of overcloud fails with: > ERROR due to "Message: No valid host was found. , Code: 500” > > Thanks, > Igal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Mar 24 16:09:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 24 Mar 2021 16:09:17 +0000 Subject: [oslo] fix flake8-hacking inconsistences on xena/wallaby In-Reply-To: References: Message-ID: <20210324160917.nzzfcuanytmrbujh@yuggoth.org> On 2021-03-24 05:44:59 -0700 (-0700), Sorin Sbarnea wrote: [...] > If we still have a strong case for using hacking, I think it > should be converted to be usable as a hook itself, one that calls > flake8. [...] Can you expand on how you would expect that to work? 
Quoting from hacking's README: "hacking is a set of flake8 plugins that test and enforce the OpenStack StyleGuide" How would you turn a set of flake8 plugins into a pre-commit hook which calls flake8? That seems rather circular. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From hberaud at redhat.com Wed Mar 24 16:34:09 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 24 Mar 2021 17:34:09 +0100 Subject: [oslo] fix flake8-hacking inconsistences on xena/wallaby In-Reply-To: <20210324160917.nzzfcuanytmrbujh@yuggoth.org> References: <20210324160917.nzzfcuanytmrbujh@yuggoth.org> Message-ID: Le mer. 24 mars 2021 à 17:13, Jeremy Stanley a écrit : > On 2021-03-24 05:44:59 -0700 (-0700), Sorin Sbarnea wrote: > [...] > > If we still have a strong case for using hacking, I think it > > should be converted to be usable as a hook itself, one that calls > > flake8. > [...] > > Can you expand on how you would expect that to work? Quoting from > hacking's README: > > "hacking is a set of flake8 plugins that test and enforce the > OpenStack StyleGuide" > > How would you turn a set of flake8 plugins into a pre-commit hook > which calls flake8? That seems rather circular. > Excellent point! -- > Jeremy Stanley > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From dbengt at redhat.com Tue Mar 23 15:00:01 2021 From: dbengt at redhat.com (Daniel Bengtsson) Date: Tue, 23 Mar 2021 16:00:01 +0100 Subject: [oslo] fix flake8-hacking inconsistences on xena/wallaby In-Reply-To: References: Message-ID: <54e21bfd-915b-dc9d-3a38-84e828bb755b@redhat.com> Le 23/03/2021 à 12:00, Herve Beraud a écrit : > Here are some guideline to fix the problem the inconsistency: > - Patch pre-commit on master (now xena) (here is the patch: > https://gist.github.com/4383/6e96c836bb1b0e1e0e599a5106f43f1a > ) > - Once Xena is patched cherry-pick and backport these changes on Wallaby > - Rebase the openstack bot patches on the top of this cherry-pick (or > wait for the merge of the previous one patch) I have made the first part at the moment. The topic is move_flake8_local for it. I need to clean several patches and I waiting the merged before cherry-pick in wallaby. I saw also several patches failed because we have an issue in the lower-constraints file. Sometimes we have already a patch to remove this file, sometimes no. I will remove it from all repositories. 
From juliaashleykreger at gmail.com Wed Mar 24 17:31:01 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 24 Mar 2021 10:31:01 -0700 Subject: [ironic] Cannot move nodes from state 'clean failed' into provisioning state 'Available' In-Reply-To: <72446832-3F0C-4D88-AE08-766BA5FA1A6A@infinidat.com> References: <72446832-3F0C-4D88-AE08-766BA5FA1A6A@infinidat.com> Message-ID: So versions and overall configuration might help, *but* often these issues are just a typo with a MAC address or the wrong port. Can you verify that the MAC address your seeing DHCP requests for matchs what is recorded for the node in the `openstack baremetal port list` output? On Wed, Mar 24, 2021 at 8:18 AM Igal Katzir wrote: > > Hello all, > > While troubleshooting this, another observation I see is that when I run put the node in state provide: > 'openstack baremetal node provide 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6’ > It starts the cleaning process, then the node boots into PXE but the undercloud ignores it. > When I tap the port I see that requests reach its interface: > > (undercloud) [stack at interop010 ~]$ sudo tcpdump -i br-ctlplane > 10:43:10.600421 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from a0:36:9f:95:dd:e2 (oui Unknown), length 548 > > But on the same time the dnsmasq ignores it: > (undercloud) [stack at interop010 ~]$ sudo tail -f /var/log/containers/ironic-inspector/dnsmasq.log > Mar 24 10:39:43 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) 6c:ae:8b:69:ee:80 ignored > Mar 24 10:40:36 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored > Mar 24 10:40:39 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored > Mar 24 10:40:48 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) 6c:ae:8b:69:ee:80 ignored > Mar 24 10:41:52 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) 6c:ae:8b:69:ee:80 ignored > Mar 24 10:42:57 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) 6c:ae:8b:69:ee:80 ignored > Mar 24 10:43:06 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored > Mar 24 10:43:10 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored > Mar 24 10:43:14 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) a0:36:9f:95:dd:e2 ignored > > Why is that? > What is needed for the cleanup to start? > > Thanks, > Igal > > On 24 Mar 2021, at 0:09, Igal Katzir wrote: > > Hello Team, > > I had a situation where my undercloud-node had a problem with it’s disk and has disconnected from overcloud. > I couldn’t restore the undercloud controller and ended up re-installing it (running 'openstack undercloud install’). 
> The installation ended successfully but now I’m in a situation where Cleanup of the overcloud deployed nodes fails: > > (undercloud) [stack at interop010 ~]$ openstack baremetal node list > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | power on | clean failed | True | > | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | power on | clean failed | True | > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > I’ve tried to move node to available state but cannot: > (undercloud) [stack at interop010 ~]$ openstack baremetal node provide 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 > The requested action "provide" can not be performed on node "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". (HTTP 400) > > My question is: > How do I make the nodes available again? > as the deployment of overcloud fails with: > ERROR due to "Message: No valid host was found. , Code: 500” > > Thanks, > Igal > > From helena at openstack.org Wed Mar 24 18:36:59 2021 From: helena at openstack.org (helena at openstack.org) Date: Wed, 24 Mar 2021 14:36:59 -0400 (EDT) Subject: [VOLUNTEER NEEDED] Wallaby Release Demo Message-ID: <1616611019.99051532@apps.rackspace.com> Hi everyone! We are closely approaching the Wallaby release (Yay!) and I would like to put together a demo video for the OpenInfra YouTube channel. The demo videos we have for Havana [1] and Juno [2] are our top-performing videos of all time. I am wondering if there is anyone in the community who would like to film a Wallaby demo for the channel. I would need the video by April 9th. In addition to the demo video, we will also have a community meeting. I will be reaching out to ptls soon on content for this. [1] [ https://www.youtube.com/watch?v=vm7aHJtQMQE ]( https://www.youtube.com/watch?v=vm7aHJtQMQE ) [2] [ https://www.youtube.com/watch?v=TgPTjrf1y0A ]( https://www.youtube.com/watch?v=TgPTjrf1y0A ) Cheers, Helena Open Infrastructure Foundation helens at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Mar 24 18:41:42 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 24 Mar 2021 19:41:42 +0100 Subject: [kolla] Plan to deprecate and finally drop the chrony container In-Reply-To: References: Message-ID: I have not received any more feedback on this so I have started the deprecation process. [1] [1] https://review.opendev.org/q/topic:deprecate-chrony -yoctozepto On Thu, Feb 25, 2021 at 7:02 PM Radosław Piliszek wrote: > > Hello Fellow OpenStackers interested in Kolla's way. > > The topic should be pretty self-explanatory but let me provide the > details and background. > > We are currently working on the Wallaby release and, since we are > cycle-trailing, it is the right moment for us to reflect upon the > various cleanup plans we had (and have). > One of those is the drop of the chrony container. > Chrony serves the time synchronization purpose which is required for > healthy operations (I hope I don't need to explain this point further, > time sync is essential for most services nowadays). 
> It is still deployed by default by Kolla Ansible. > The point is, most default (even minimal ones) host OS installations > come with some form of NTP client, be it chrony, ntpd, timesyncd or > whatever else there is. :-) This applies also to provisioning > solutions like MAAS or Foreman. > And Kolla Ansible actually stands on its head to detect and disable > those to avoid multiple time sync daemons hell as it may disrupt > normal operations (mostly due to having different time sources but > also due to different compensation algorithms [or rather just their > actual configuration] and the fact that each tool assumes to be the > only source of time changes). > We seem to have placed a relatively sane precheck to detect this but > effective disabling may prove tricky, especially since it is very easy > to get it installed at any point. > The NTP client is considered a very basic service these days. > It also seems that several users known to us do not use the chrony > container and instead orchestrate timesync elsewhere. > > Hence, our current plan is to: > 1) Deprecate the chrony container in the Wallaby cycle. > - in Kolla Ansible this would mean also warning users that they use it > and defaulting to not using it (with docs and a reno); docs would > suggest an Ansible way to coordinate timesync > - in Kolla docs and a reno > 2) Drop the chrony container from both the places > > I know Kayobe (the "highest-level" member of the Kolla project) is > planning to incorporate timesync handling by external Ansible role > means so their users should be covered. > > Please let us know if you have any comments on the plan above. > Or generally any suggestions you would like us to hear. > > Cheers! > > -yoctozepto From sshnaidm at redhat.com Wed Mar 24 19:30:06 2021 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Wed, 24 Mar 2021 21:30:06 +0200 Subject: [tripleo][ansible] Openstack Ansible collections (modules) Xena PTG Message-ID: Hi, all The Xena PTG is scheduled for Apr 19-23 We are organizing 2 hours in Wed 12 Apr for Openstack Ansible collections project talks.[1] Please add your PTG topics in the etherpad: https://etherpad.opendev.org/p/xena-ptg-os-ansible-collections [1] https://opendev.org/openstack/ansible-collections-openstack Thanks -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From ikatzir at infinidat.com Wed Mar 24 22:39:36 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Thu, 25 Mar 2021 00:39:36 +0200 Subject: [ironic] Cannot move nodes from state 'clean failed' into provisioning state 'Available' In-Reply-To: References: <72446832-3F0C-4D88-AE08-766BA5FA1A6A@infinidat.com> Message-ID: Hello Julia, Thanks for your response. I am using a RedHat Openstack Platform 16.1, which is running on RHEL 8.2. All are physical servers; - One Undercloud Director. - Overcloud consists of two nodes. (This is for Certification purposes) It is unlikely that it's a mac addr. mismatch (I wish...) 
since I've already deployed these nodes several times, using the same nodes.json Just for reference , here is the output: (undercloud) [stack at interop010 ~]$ openstack baremetal port list +--------------------------------------+-------------------+ | UUID | Address | +--------------------------------------+-------------------+ | 2d404695-f236-4d32-8b65-5ca1fa6b756a | a0:36:9f:95:dd:e2 | | 32669178-0408-4ff1-b4b4-df65fc7643c9 | 6c:ae:8b:69:ee:80 | +--------------------------------------+-------------------+ The operation was working well until I have 'lost' the undercloud node, but overcloud stayed working. I might need to delete these nodes and run introspection again. Igal On Wed, Mar 24, 2021 at 7:31 PM Julia Kreger wrote: > So versions and overall configuration might help, *but* often these > issues are just a typo with a MAC address or the wrong port. Can you > verify that the MAC address your seeing DHCP requests for matchs what > is recorded for the node in the `openstack baremetal port list` > output? > > On Wed, Mar 24, 2021 at 8:18 AM Igal Katzir wrote: > > > > Hello all, > > > > While troubleshooting this, another observation I see is that when I run > put the node in state provide: > > 'openstack baremetal node provide 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6’ > > It starts the cleaning process, then the node boots into PXE but the > undercloud ignores it. > > When I tap the port I see that requests reach its interface: > > > > (undercloud) [stack at interop010 ~]$ sudo tcpdump -i br-ctlplane > > 10:43:10.600421 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, > Request from a0:36:9f:95:dd:e2 (oui Unknown), length 548 > > > > But on the same time the dnsmasq ignores it: > > (undercloud) [stack at interop010 ~]$ sudo tail -f > /var/log/containers/ironic-inspector/dnsmasq.log > > Mar 24 10:39:43 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) > 6c:ae:8b:69:ee:80 ignored > > Mar 24 10:40:36 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) > a0:36:9f:95:dd:e2 ignored > > Mar 24 10:40:39 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) > a0:36:9f:95:dd:e2 ignored > > Mar 24 10:40:48 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) > 6c:ae:8b:69:ee:80 ignored > > Mar 24 10:41:52 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) > 6c:ae:8b:69:ee:80 ignored > > Mar 24 10:42:57 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) > 6c:ae:8b:69:ee:80 ignored > > Mar 24 10:43:06 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) > a0:36:9f:95:dd:e2 ignored > > Mar 24 10:43:10 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) > a0:36:9f:95:dd:e2 ignored > > Mar 24 10:43:14 dnsmasq-dhcp[7]: DHCPDISCOVER(br-ctlplane) > a0:36:9f:95:dd:e2 ignored > > > > Why is that? > > What is needed for the cleanup to start? > > > > Thanks, > > Igal > > > > On 24 Mar 2021, at 0:09, Igal Katzir wrote: > > > > Hello Team, > > > > I had a situation where my undercloud-node had a problem with it’s disk > and has disconnected from overcloud. > > I couldn’t restore the undercloud controller and ended up re-installing > it (running 'openstack undercloud install’). 
> > The installation ended successfully but now I’m in a situation where > Cleanup of the overcloud deployed nodes fails: > > > > (undercloud) [stack at interop010 ~]$ openstack baremetal node list > > > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > | UUID | Name | Instance > UUID | Power State | Provisioning State | Maintenance | > > > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | > power on | clean failed | True | > > | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | > power on | clean failed | True | > > > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > > > I’ve tried to move node to available state but cannot: > > (undercloud) [stack at interop010 ~]$ openstack baremetal node provide > 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 > > The requested action "provide" can not be performed on node > "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". > (HTTP 400) > > > > My question is: > > How do I make the nodes available again? > > as the deployment of overcloud fails with: > > ERROR due to "Message: No valid host was found. , Code: 500” > > > > Thanks, > > Igal > > > > > -- Regards, *Igal Katzir* Cell +972-54-5597086 Interoperability Team *INFINIDAT* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ikatzir at infinidat.com Wed Mar 24 22:48:58 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Thu, 25 Mar 2021 00:48:58 +0200 Subject: [E] [ironic] How to move nodes from a 'clean failed' state into 'Available' In-Reply-To: References: Message-ID: Thanks Jay, It gets into 'clean failed' state because it fails to boot into PXE mode. I don't understand why the DHCP does not respond to the clients request, it's like it remembers that the same client already received an IP in the past. Is there a way to clear the dnsmasq database of reservations? Igal On Wed, Mar 24, 2021 at 5:26 PM Jay Faulkner wrote: > A node in CLEAN FAILED must be moved to MANAGEABLE state before it can be > told to "provide" (which eventually puts it back in AVAILABLE). > > Try this: > `openstack baremetal node manage UUID`, then run the command with > "provide" as you did before. > > The available states and their transitions are documented here: > https://docs.openstack.org/ironic/latest/contributor/states.html > > I'll note that if cleaning failed, it's possible the node is misconfigured > in such a way that will cause all deployments and cleanings to fail (e.g.; > if you're using Ironic with Nova, and you attempt to provision a machine > and it errors during deploy; Nova will by default attempt to clean that > node, which may be why you see it end up in clean failed). So I strongly > suggest you look at the last_error field on the node and attempt to > determine why the failure happened before retrying. > > Good luck! > > -Jay Faulkner > > On Wed, Mar 24, 2021 at 8:20 AM Igal Katzir wrote: > >> Hello Team, >> >> I had a situation where my *undercloud-node *had a problem with it’s >> disk and has disconnected from overcloud. >> I couldn’t restore the undercloud controller and ended up re-installing >> it (running 'openstack undercloud install’). 
>> The installation ended successfully but now I’m in a situation where >> Cleanup of the overcloud deployed nodes fails: >> >> (undercloud) [stack at interop010 ~]$ openstack baremetal node list >> >> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >> | UUID | Name | Instance >> UUID | Power State | Provisioning State | Maintenance | >> >> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >> | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | >> power on | clean failed | True | >> | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | >> power on | clean failed | True | >> >> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >> >> I’ve tried to move node to available state but cannot: >> (undercloud) [stack at interop010 ~]$ openstack baremetal node provide >> 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 >> The requested action "provide" can not be performed on node >> "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". >> (HTTP 400) >> >> My question is: >> *How do I make the nodes available again?* >> as the deployment of overcloud fails with: >> ERROR due to "Message: No valid host was found. , Code: 500” >> >> Thanks, >> Igal >> > -- Regards, *Igal Katzir* Cell +972-54-5597086 Interoperability Team *INFINIDAT* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Mar 25 01:16:46 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 24 Mar 2021 20:16:46 -0500 Subject: [all][tc] Xena PTG Planning Message-ID: <17866f48e53.1285108df1037866.5958750053657313110@ghanshyammann.com> Hello Everyone, As you already know that the Xena cycle virtual PTG will be held between 19th - 23rd April[1]. To plan the Technical Committee PTG planning, please do the following: 1. Fill the doodle poll as per your availability. Please fill it soon as we need to book the slot by 25th March(which is tomorrow). - https://doodle.com/poll/2zy8hex4r6wvidqk?utm_source=poll&utm_medium=link 2. Add the topics you would like to discuss to the below etherpad. - https://etherpad.opendev.org/p/tc-xena-ptg NOTE: this is not limited to TC members only; I would like all community members to fill the doodle poll and, add the topics you would like or want TC members to discuss in PTG. -gmann From manchandavishal143 at gmail.com Thu Mar 25 06:03:17 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Thu, 25 Mar 2021 11:33:17 +0530 Subject: [horizon] [ptg] Xena PTG Message-ID: Hi everyone, As discussed in yesterday horizon weekly meeting and based on the doodle poll I have booked available slots for Horizon Xena PTG. Please find below the schedule for the same: Monday April 19, 14:00 - 17:00 UTC Tuesday April 20, 05:00 - 07:00 UTC and 16:00 - 17:00 UTC Wednesday April 21, 16:00 - 17:00 UTC I have also created Etherpad to collect topics for ptg discussion [1]. Feel free to add your topics. Please Let me know and Ivan in case you have any topics to discuss and above schedule doesn't work for you and We will see how to manage that. Thank you, Vishal Manchanda(irc: vishalmanchanda) [1] https://etherpad.opendev.org/p/xena-ptg-horizon-planning -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gouthampravi at gmail.com Thu Mar 25 06:53:58 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 24 Mar 2021 23:53:58 -0700 Subject: [manila][ptg] Xena PTG Planning Message-ID: Hello Zorillas and Interested Stackers, As you're aware, the virtual PTG for the Xena release cycle is between April 19-23, 2021. If you haven't registered yet, you must do so as soon as possible! [1]. We've signed up for some slots on the PTG timeslots ethercalc [2]. The PTG Planning etherpad [3] is now live. Please go ahead and add your name/irc nick and propose any topics. You may propose topics even if you wouldn't like to moderate the discussion. Thanks, and hope to see you all there! Goutham [1] https://april2021-ptg.eventbrite.com/ [2] https://ethercalc.net/oz7q0gds9zfi [3] https://etherpad.opendev.org/p/xena-ptg-manila-planning From artem.goncharov at gmail.com Thu Mar 25 07:50:19 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 25 Mar 2021 08:50:19 +0100 Subject: [sdk][cli] OpenStackSDK/CLI Xena PTG Message-ID: <5DA8A905-619F-4115-A4F8-DD6226F578D4@gmail.com> Hi, all The Xena PTG is scheduled for Apr 19-23 (15:00-17:00 UTC) We are organizing 2 hours in Wed 12 Apr for SDK/CLI project talks. Please add your PTG topics in the etherpad: https://etherpad.opendev.org/p/xena-ptg-sdk-cli P.S. And don’t forget to register for PTG participation Regards, Artem (gtema) -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Mar 25 08:04:19 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 25 Mar 2021 09:04:19 +0100 Subject: [neutron] Drivers meeting agenda for 26.03.2021 Message-ID: <20210325080419.pcbh2qrxqolwiec3@p1.localdomain> Hi, Agenda for tomorrow's drivers meeting is at [1]. We have 2 new RFEs to discuss: * https://bugs.launchpad.net/neutron/+bug/1920065 - Automatic rescheduling of BGP speakers on DrAgents Patch for that is already proposed too https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/780675 * https://bugs.launchpad.net/neutron/+bug/1921126 -[RFE] Allow explicit management of default routes Spec for that one is also proposed already https://review.opendev.org/c/openstack/neutron-specs/+/781475 Please read through them so we can discuss them tomorrow on the meeting. [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From sxmatch1986 at gmail.com Thu Mar 25 08:29:12 2021 From: sxmatch1986 at gmail.com (hao wang) Date: Thu, 25 Mar 2021 16:29:12 +0800 Subject: [oslo.db][Zaqar] test_models_sync failed because oslo.db raise "AttributeError: can't set attribute" Message-ID: Hi, There is an issue about test_models_sync test when openstack-tox-lower-constraints job running that I can't figure out why it's here, could anyone help me about this? zaqar.tests.unit.storage.sqlalchemy_migration.test_migrations.TestMigrationsMySQL.test_models_sync [0.306333s] ... 
FAILED Captured traceback: ~~~~~~~~~~~~~~~~~~~ Traceback (most recent call last): File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/test_base.py", line 178, in setUp self.useFixture( File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testtools/testcase.py", line 760, in useFixture reraise(*exc_info) File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testtools/_compat3x.py", line 16, in reraise raise exc_obj.with_traceback(exc_tb) File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testtools/testcase.py", line 735, in useFixture fixture.setUp() File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/test_base.py", line 65, in setUp testresources.setUpResources( File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", line 786, in setUpResources setattr(test, resource[0], resource[1].getResource(result)) File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", line 522, in getResource self._setResource(self._make_all(result)) File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", line 551, in _make_all dependency_resources[name] = resource.getResource() File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", line 522, in getResource self._setResource(self._make_all(result)) File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", line 552, in _make_all resource = self.make(dependency_resources) File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/provision.py", line 133, in make url = backend.provisioned_database_url(db_token) File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/provision.py", line 337, in provisioned_database_url return self.impl.provisioned_database_url(self.url, ident) File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/provision.py", line 498, in provisioned_database_url url.database = ident AttributeError: can't set attribute There is the test log: https://zuul.opendev.org/t/openstack/build/57eb103379b144abaa2517d41ef67610/log/job-output.txt From oliver.wenz at dhbw-mannheim.de Thu Mar 25 09:13:24 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Thu, 25 Mar 2021 10:13:24 +0100 (CET) Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <1282417248.54816.1616663604380@ox.dhbw-mannheim.de> I manually set the fallocate_reserve values to 1% on the swift host and the services don't show errors anymore (I also tried with higher values than 1%). However, when trying to take the snapshot it still disappears and I still get the following for the swift-proxy service: Mar 25 08:55:53 infra1-swift-proxy-container-27169fa7 proxy-server[84]: Client disconnected without sending last chunk (txn: tx5590a6d463f645caaaf67-00605c4fe4) (client_ip: 192.168.110. 
106) 192.168.110.106 is the IP of eth1 on my glance container so maybe that indicates that this is a glance problem after all? Because of the following log from the glance-api service: Mar 25 08:54:59 infra1-glance-container-99614ac2 glance-wsgi-api[6177]: 2021-03-25 08:54:59.046 6177 INFO glance.api.v2.image_data [req-42a0c48b-895c-4776-9e9c-586eb596b540 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Unable to create trust: no such option collect_timing in group [keystone_authtoken] Use the existing user token. Mar 25 08:55:54 infra1-glance-container-99614ac2 uwsgi[6177]: Thu Mar 25 08:55:54 2021 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /v2/images/16f50adc-ebec-4812-96bd-2bbf6d2014b5/file (ip 192.168.110.214) !!! I checked the logs on the compute host (192.168.110.214) where the instance I'm trying to take a snapshot of is running and I found this in the logs: Mar 25 08:55:15 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:15.050 2252 INFO nova.virt.libvirt.imagecache [req-24323bc9-65bc-47b6-8ef1-81fc88b2b267 - - - - -] image c27e2055-8c3c-49d7-9c12-44999b1e7e0f at (/var/lib/nova/instances/_base/4102ded7765e7306705acece9b2b1c4e88087478): checking Mar 25 08:55:15 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:15.052 2252 INFO nova.virt.libvirt.imagecache [req-24323bc9-65bc-47b6-8ef1-81fc88b2b267 - - - - -] image e73512b8-7099-4691-aa48-966aa35b59ff at (/var/lib/nova/instances/_base/573a84fb0420a038702c616b848169b07e2bb7f3): checking Mar 25 08:55:15 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:15.759 2252 INFO nova.virt.libvirt.imagecache [req-24323bc9-65bc-47b6-8ef1-81fc88b2b267 - - - - -] Active base files: /var/lib/nova/instances/_base/4102ded7765e7306705acece9b2b1c4e88087478 /var/lib/nova/instances/_base/573a84fb0420a038702c616b848169b07e2bb7f3 Mar 25 08:55:57 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:57.102 2252 INFO nova.compute.manager [req-42a0c48b-895c-4776-9e9c-586eb596b540 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] [instance: 46d45c54-eba1-4624-8e5a-19dc157484ae] Successfully reverted task state from image_uploading on failure for instance. Mar 25 08:55:57 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server [req-42a0c48b-895c-4776-9e9c-586eb596b540 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Exception during message handling: glanceclient.exc.HTTPInternalServerError: HTTP 500 Internal Server Error: The server has either erred or is incapable of performing the requested operation. 
2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/virt/libvirt/driver.py", line 2478, in snapshot 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server metadata['location'] = root_disk.direct_snapshot( 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/virt/libvirt/imagebackend.py", line 452, in direct_snapshot 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server raise NotImplementedError(_('direct_snapshot() is not implemented')) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server NotImplementedError: direct_snapshot() is not implemented 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred: 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/exception_wrapper.py", line 76, in wrapped 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server _emit_exception_notification( 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server self.force_reraise() 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server raise value 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/exception_wrapper.py", line 69, in wrapped 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/compute/manager.py", line 188, in decorated_function 2021-03-25 08:55:57.106 2252 ERROR 
oslo_messaging.rpc.server LOG.warning("Failed to revert task state for instance. " 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server self.force_reraise() 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server raise value 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/compute/manager.py", line 159, in decorated_function 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/compute/utils.py", line 1434, in decorated_function 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/compute/manager.py", line 216, in decorated_function 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context, 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server self.force_reraise() 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server raise value 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/compute/manager.py", line 205, in decorated_function 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/compute/manager.py", line 236, in decorated_function 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server compute_utils.delete_image( 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server self.force_reraise() 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server raise value 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/compute/manager.py", line 232, in decorated_function 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return function(self, context, image_id, instance, 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/compute/manager.py", line 3908, in snapshot_instance 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server self._snapshot_instance(context, image_id, instance, 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/compute/manager.py", line 3941, in _snapshot_instance 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server self.driver.snapshot(context, instance, image_id, 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/virt/libvirt/driver.py", line 2549, in snapshot 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server self._image_api.update(context, 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/image/glance.py", line 1247, in update 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return session.update(context, image_id, image_info, data=data, 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/image/glance.py", line 696, in update 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server _reraise_translated_image_exception(image_id) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/image/glance.py", line 1035, in _reraise_translated_image_exception 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server raise new_exc.with_traceback(exc_trace) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/image/glance.py", line 694, in update 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server image = self._update_v2(context, sent_service_image_meta, data) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/image/glance.py", line 713, in _update_v2 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server image = self._upload_data(context, image_id, data) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/image/glance.py", line 589, in _upload_data 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server utils.tpool_execute(self._client.call, 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/utils.py", line 694, in tpool_execute 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server eventlet.tpool.execute(func, *args, **kwargs) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File 
"/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/eventlet/tpool.py", line 129, in execute 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server six.reraise(c, e, tb) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server raise value 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/eventlet/tpool.py", line 83, in tworker 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server rv = meth(*args, **kwargs) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/nova/image/glance.py", line 192, in call 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server result = getattr(controller, method)(*args, **kwargs) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/glanceclient/common/utils.py", line 600, in inner 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return RequestIdProxy(wrapped(*args, **kwargs)) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/glanceclient/v2/images.py", line 289, in upload 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server resp, body = self.http_client.put(url, headers=hdrs, data=body) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/keystoneauth1/adapter.py", line 404, in put 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return self.request(url, 'PUT', **kwargs) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/glanceclient/common/http.py", line 380, in request 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server return self._handle_response(resp) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-22.1.0/lib/python3.8/site-packages/glanceclient/common/http.py", line 120, in _handle_response 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server raise exc.from_response(resp, resp.content) 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server glanceclient.exc.HTTPInternalServerError: HTTP 500 Internal Server Error: The server has either erred or is incapable of performing the requested operation. 2021-03-25 08:55:57.106 2252 ERROR oslo_messaging.rpc.server Any hints where the error might be coming from are much appreciated! Kind regards, Oliver From elod.illes at est.tech Thu Mar 25 09:54:54 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 25 Mar 2021 10:54:54 +0100 Subject: [neutron][stable][infra] neutron-lib stable branches core reviewers In-Reply-To: References: <2771831.n0TQhf9oRt@p1> Message-ID: I guess maybe the +2 rights got lost during the Gerrit upgrade in November. Maybe this happened due to something was missing from the acl config (what Slawek linked). I've added the [infra] to the subject as infra team has the power to act in this matter. (unfortunately stable-maint-core doesn't have) (note: until that happens: feel free to ping me if a stable patch there needs +2, since as stable-maint-core I have the right to +2/+W there) Cheers, Előd On 2021. 03. 22. 
12:35, Bernard Cafarelli wrote: > On Thu, 18 Mar 2021 at 14:48, Slawek Kaplonski > wrote: > > Hi, > > I just noticed that neutron-lib project has got own group > "neutron-lib-stable-maint" which has +2 powers in neutron-lib > stable branches [1]. As I see now in gerrit that group don't have > any members. Would it be maybe possible to remove that group and > add "neutron-stable-maint" to the neutron-lib stable branches > instead? If yes, should I simply propose patch to change [1] or is > there any other way which I should do it? > > I guess we never spotted that "neutron-lib-stable-maint" before, as it > seems neutron cores had +2 powers before: > https://review.opendev.org/c/openstack/neutron-lib/+/717093 > > > But yes it could be useful to have either of these groups (neutron > cores or stable cores) in, backports to neutron-lib are rare but some > happen - I remember one some time ago where it was valid to have a > patch backported. > > > [1] > https://github.com/openstack/project-config/blob/master/gerrit/acls/openstack/neutron-lib.config > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > > > -- > Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Thu Mar 25 10:01:54 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Thu, 25 Mar 2021 12:01:54 +0200 Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: <1282417248.54816.1616663604380@ox.dhbw-mannheim.de> References: <1282417248.54816.1616663604380@ox.dhbw-mannheim.de> Message-ID: <245241616666152@mail.yandex.ru> Oh, I have a guess what this might actually be. During snapshot upload process user token that is used for the upload might get expired. If that's the case, following changes in user_variables might help to resolve the issue: glance_glance_api_conf_overrides: keystone_authtoken: service_token_roles_required: True service_token_roles: service 25.03.2021, 11:20, "Oliver Wenz" : > I manually set the fallocate_reserve values to 1% on the swift host and the > services don't show errors anymore (I also tried with higher values than 1%). > > However, when trying to take the snapshot it still disappears and I still get > the following for the swift-proxy service: > > Mar 25 08:55:53 infra1-swift-proxy-container-27169fa7 proxy-server[84]: Client > disconnected without sending last chunk (txn: > tx5590a6d463f645caaaf67-00605c4fe4) (client_ip: 192.168.110. > 106) > > 192.168.110.106 is the IP of eth1 on my glance container so maybe that indicates > that this is a glance problem after all? > > Because of the following log from the glance-api service: > > Mar 25 08:54:59 infra1-glance-container-99614ac2 glance-wsgi-api[6177]: > 2021-03-25 08:54:59.046 6177 INFO glance.api.v2.image_data > [req-42a0c48b-895c-4776-9e9c-586eb596b540 956806468e9f43dbaad1807a5208de52 > ebe0fe5f3893495e82598c07716f5d45 - default default] Unable to create trust: no > such option collect_timing in group [keystone_authtoken] Use the existing user > token. > Mar 25 08:55:54 infra1-glance-container-99614ac2 uwsgi[6177]: Thu Mar 25 > 08:55:54 2021 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client > disconnected) on request /v2/images/16f50adc-ebec-4812-96bd-2bbf6d2014b5/file > (ip 192.168.110.214) !!! 
> > I checked the logs on the compute host (192.168.110.214) where the instance I'm > trying to take a snapshot of is running and I found this in the logs: > > Mar 25 08:55:15 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:15.050 2252 INFO > nova.virt.libvirt.imagecache [req-24323bc9-65bc-47b6-8ef1-81fc88b2b267 - - - - > -] image c27e2055-8c3c-49d7-9c12-44999b1e7e0f at > (/var/lib/nova/instances/_base/4102ded7765e7306705acece9b2b1c4e88087478): > checking > Mar 25 08:55:15 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:15.052 2252 INFO > nova.virt.libvirt.imagecache [req-24323bc9-65bc-47b6-8ef1-81fc88b2b267 - - - - > -] image e73512b8-7099-4691-aa48-966aa35b59ff at > (/var/lib/nova/instances/_base/573a84fb0420a038702c616b848169b07e2bb7f3): > checking > Mar 25 08:55:15 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:15.759 2252 INFO > nova.virt.libvirt.imagecache [req-24323bc9-65bc-47b6-8ef1-81fc88b2b267 - - - - > -] Active base files: > /var/lib/nova/instances/_base/4102ded7765e7306705acece9b2b1c4e88087478 > /var/lib/nova/instances/_base/573a84fb0420a038702c616b848169b07e2bb7f3 > Mar 25 08:55:57 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:57.102 2252 INFO > nova.compute.manager [req-42a0c48b-895c-4776-9e9c-586eb596b540 > 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default > default] [instance: 46d45c54-eba1-4624-8e5a-19dc157484ae] Successfully reverted > task state from image_uploading on failure for instance. > Mar 25 08:55:57 bc1bl14 nova-compute[2252]: 2021-03-25 08:55:57.106 2252 ERROR > oslo_messaging.rpc.server [req-42a0c48b-895c-4776-9e9c-586eb596b540 > 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default > default] Exception during message handling: > glanceclient.exc.HTTPInternalServerError: HTTP 500 Internal Server Error: The > server has either erred or is incapable of performing the requested operation. 
> [The nova-compute traceback quoted here is identical to the one in Oliver's previous message above and has been trimmed; it ends with the same glanceclient error:]
2021-03-25 08:55:57.106 2252 ERROR > oslo_messaging.rpc.server glanceclient.exc.HTTPInternalServerError: HTTP 500 > Internal Server Error: The server has either erred or is incapable of performing > the requested operation. >                                             2021-03-25 08:55:57.106 2252 ERROR > oslo_messaging.rpc.server > > Any hints where the error might be coming from are much appreciated! > > Kind regards, > Oliver --  Kind Regards, Dmitriy Rabotyagov From bkslash at poczta.onet.pl Thu Mar 25 11:12:31 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Thu, 25 Mar 2021 12:12:31 +0100 Subject: [kolla-ansible][Neutron]Limiting bandwidth per Project/Domain/Tenant? Message-ID: <1D42417A-8F1F-4F55-A663-6E151FCA70DE@poczta.onet.pl> Hi, Is there any way to limit bandwidth per project, domain or tenant? According to the neutron documentation limit can be set per port (router,FIP etc.) but what if I need to limit whole project (which can have multiple routers, FIPs, etc.) - incoming and outgoing traffic? Best regards, Adam From skaplons at redhat.com Thu Mar 25 11:30:46 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 25 Mar 2021 12:30:46 +0100 Subject: [neutron][stable][infra] neutron-lib stable branches core reviewers In-Reply-To: References: <2771831.n0TQhf9oRt@p1> Message-ID: <3317236.F0y3B4ANne@p1> Hi, Dnia czwartek, 25 marca 2021 10:54:54 CET Előd Illés pisze: > I guess maybe the +2 rights got lost during the Gerrit upgrade in > November. Maybe this happened due to something was missing from the acl > config (what Slawek linked). > I've added the [infra] to the subject as infra team has the power to act > in this matter. (unfortunately stable-maint-core doesn't have) > > (note: until that happens: feel free to ping me if a stable patch there > needs +2, since as stable-maint-core I have the right to +2/+W there) Thx a lot Előd :) > > Cheers, > > Előd > > On 2021. 03. 22. 12:35, Bernard Cafarelli wrote: > > On Thu, 18 Mar 2021 at 14:48, Slawek Kaplonski > > > > wrote: > > Hi, > > > > I just noticed that neutron-lib project has got own group > > "neutron-lib-stable-maint" which has +2 powers in neutron-lib > > stable branches [1]. As I see now in gerrit that group don't have > > any members. Would it be maybe possible to remove that group and > > add "neutron-stable-maint" to the neutron-lib stable branches > > instead? If yes, should I simply propose patch to change [1] or is > > there any other way which I should do it? > > > > I guess we never spotted that "neutron-lib-stable-maint" before, as it > > seems neutron cores had +2 powers before: > > https://review.opendev.org/c/openstack/neutron-lib/+/717093 > > > > > > But yes it could be useful to have either of these groups (neutron > > cores or stable cores) in, backports to neutron-lib are rare but some > > happen - I remember one some time ago where it was valid to have a > > patch backported. > > > > [1] > > https://github.com/openstack/project-config/blob/master/gerrit/acls/ open > > stack/neutron-lib.config > > > enstack/neutron-lib.config> > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > > -- > > Bernard Cafarelli -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: 

From oliver.wenz at dhbw-mannheim.de  Thu Mar 25 11:40:45 2021
From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz)
Date: Thu, 25 Mar 2021 12:40:45 +0100 (CET)
Subject: [glance][openstack-ansible] Snapshots disappear during saving
In-Reply-To: 
References: 
Message-ID: <1935786299.57900.1616672445945@ox.dhbw-mannheim.de>

Hi Dmitriy,
thanks for your answer! I tried inserting the options into user_variables.yml
and running the installation playbooks again but glance still shows 'Unable
to create trust: no such option collect_timing in group [keystone_authtoken]
Use the existing user token.' and the snapshot keeps disappearing.
In addition, I'm seeing the following swift error:

Mar 25 11:19:56 infra1-swift-proxy-container-27169fa7 proxy-server[90]: Pipeline was modified. New pipeline is "catch_errors gatekeeper healthcheck proxy-logging cache listing_formats container_sync bulk tempurl ratelimit authtoken keystoneauth staticweb copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server".
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: Starting Keystone auth_token middleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "use_stderr" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "bind_ip" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "bind_port" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "workers" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "user" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "log_name" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "auth_url" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "project_domain_id" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "user_domain_id" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "project_name" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "username" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "password" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "revocation_cache_time" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: STDERR: The option "__name__" is not known to keystonemiddleware
Mar 25 11:19:57 infra1-swift-proxy-container-27169fa7 proxy-server[90]: AuthToken midd

Kind regards
Oliver

> Oh, I have a guess what this might actually be. During snapshot upload process
> user token that is used for the upload might get expired. If that's the case,
> following changes in user_variables might help to resolve the issue:
>
> glance_glance_api_conf_overrides:
>   keystone_authtoken:
>     service_token_roles_required: True
>     service_token_roles: service
>
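A note on the override above, since it only touches the glance side:
service_token_roles_required/service_token_roles make glance-api willing to
honour an incoming service token, but the token that actually expires during a
long snapshot upload is the user token that nova-compute passes to glance, so
nova usually needs to be configured to send a service token along with it. A
rough sketch of what that could look like in user_variables.yml, assuming the
os_nova role's nova_nova_conf_overrides and the usual 'nova' service user
(adjust names, credentials and the auth URL to your deployment):

nova_nova_conf_overrides:
  service_user:
    # Attach an X-Service-Token to requests nova makes on behalf of users.
    send_service_user_token: True
    auth_type: password
    auth_url: "{{ keystone_service_adminurl }}"
    project_name: service
    project_domain_name: Default
    username: nova
    user_domain_name: Default
    password: "{{ nova_service_password }}"

With send_service_user_token enabled, nova-compute sends a service token
alongside the (possibly expired) user token, and keystonemiddleware on the
glance side should then accept the request as long as the service token
carries one of the configured service_token_roles.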
From fungi at yuggoth.org  Thu Mar 25 12:10:13 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 25 Mar 2021 12:10:13 +0000
Subject: [neutron][stable][infra] neutron-lib stable branches core reviewers
In-Reply-To: 
References: <2771831.n0TQhf9oRt@p1>
Message-ID: <20210325121012.fg6l7iv3diznkhea@yuggoth.org>

On 2021-03-25 10:54:54 +0100 (+0100), Előd Illés wrote:
> I guess maybe the +2 rights got lost during the Gerrit upgrade in November.
[...]

Nope, it's all a result of https://review.opendev.org/750643 because
of the exclusiveGroupPermissions on refs/heads/stable/* access.

Propose a change to that file to do whatever it is you expect there.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From openinfradn at gmail.com  Thu Mar 25 12:21:50 2021
From: openinfradn at gmail.com (open infra)
Date: Thu, 25 Mar 2021 17:51:50 +0530
Subject: Create OpenStack VMs in few seconds
Message-ID: 

Hi,

I was looking for a way to provision and connect to VMs within a few
seconds and ended up at the OpenStack Summit demo on 'OpenStack Virtual
Machine Quickly Available in One Second'.

I would highly appreciate it if someone could provide a tutorial or
related documentation on how to achieve this.

Regards,
Danishka
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ricolin at ricolky.com  Thu Mar 25 13:28:29 2021
From: ricolin at ricolky.com (Rico Lin)
Date: Thu, 25 Mar 2021 21:28:29 +0800
Subject: [Containers SIG][tc] Is anyone interested in the Containers SIG? Or should we retire the Containers SIG?
Message-ID: 

Dear all,

I just proposed a patch to openstack/governance-sigs to officially retire
the Containers SIG [1]. In the past months there has been no activity in
the SIG, and Jean-Philippe hasn't had time for the SIG chair role either.

IMO the goal of the Containers SIG definitely provides great value to the
OpenStack community, because otherwise we don't do much work on checking
whether we have reached great container support or not. Hence I don't
really wish to see it archived and retired. Still, if no one is interested
enough and has time to join and chair the SIG, we have no option but to
move it to the retired state ("archived").

So if you're interested in chairing the SIG, please raise your hand now.
SIGs can be brought back to life after we retire them, but I guess it will
be even better if someone takes it over and works on it now. Again, if no
one volunteers to chair it, the retire patch [1] will be the way to go.

[1] https://review.opendev.org/c/openstack/governance-sigs/+/783001

*Rico Lin*
OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL,
Senior Software Engineer at EasyStack
*Email: ricolin at ricolky.com *
*Phone: +886-963-612-021*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Thu Mar 25 13:51:03 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 25 Mar 2021 13:51:03 +0000
Subject: [Containers SIG][tc] Is anyone interested in the Containers SIG? Or should we retire the Containers SIG?
In-Reply-To: 
References: 
Message-ID: <20210325135103.y5wtadzdl4q3pmu3@yuggoth.org>

On 2021-03-25 21:28:29 +0800 (+0800), Rico Lin wrote:
[...]
> IMO the goal of the Containers SIG definitely provides great value to
> the OpenStack community, because otherwise we don't do much work on
> checking whether we have reached great container support or not. Hence I
> don't really wish to see it archived and retired. Still, if no one is
> interested enough and has time to join and chair the SIG, we have
> no option but to move it to the retired state ("archived").
[...]

If there's a set goal for the group, and you can define what you
think the end state would be like for "great container support" in
OpenStack, then this may be better suited as a pop-up team (that
model didn't actually exist when this SIG was originally created).
Still, if there's nobody interested in working toward that goal, I
agree there's not much point in keeping the group defined either as
a SIG or pop-up.

https://governance.openstack.org/tc/reference/popup-teams.html
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From skaplons at redhat.com  Thu Mar 25 14:14:56 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 25 Mar 2021 15:14:56 +0100
Subject: [neutron][stable][infra] neutron-lib stable branches core reviewers
In-Reply-To: <20210325121012.fg6l7iv3diznkhea@yuggoth.org>
References: <2771831.n0TQhf9oRt@p1> <20210325121012.fg6l7iv3diznkhea@yuggoth.org>
Message-ID: <3141978.uIFsmd8HOl@p1>

Hi,

On Thursday, 25 March 2021 at 13:10:13 CET, Jeremy Stanley wrote:
> On 2021-03-25 10:54:54 +0100 (+0100), Előd Illés wrote:
> > I guess maybe the +2 rights got lost during the Gerrit upgrade in November.
> [...]
>
> Nope, it's all a result of https://review.opendev.org/750643 because
> of the exclusiveGroupPermissions on refs/heads/stable/* access.
>
> Propose a change to that file to do whatever it is you expect there.

Thx Jeremy for pointing to that. I just proposed:
https://review.opendev.org/c/openstack/project-config/+/783011
I hope it can be accepted.

> --
> Jeremy Stanley

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From smooney at redhat.com  Thu Mar 25 14:47:09 2021
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 25 Mar 2021 14:47:09 +0000
Subject: Create OpenStack VMs in few seconds
In-Reply-To: 
References: 
Message-ID: 

This is a demo of a third-party extension that was never upstreamed.

Nova does not support creating a base VM and then doing a local live
migration or restoring a memory snapshot to create another VM.

This approach likely has several security implications that would not
be acceptable in a multi-tenant environment.

We have discussed this type of VM creation in the past and determined
that it is not a valid implementation of spawn. A virt driver that
pre-creates VMs or copies an existing instance can be faster, but such a
virt driver is not considered a compliant implementation.

So, in short, there is no way to achieve this today in a compliant
OpenStack-powered cloud.

On 25/03/2021 12:21, open infra wrote:
> Hi,
>
> I was looking for a way to provision and connect to VMs within a few
> seconds and ended up at the OpenStack Summit demo on 'OpenStack
> Virtual Machine Quickly Available in One Second'.
>
> I would highly appreciate it if someone could provide a tutorial or
> related documentation on how to achieve this.
>
> Regards,
> Danishka
>

From smooney at redhat.com  Thu Mar 25 14:53:29 2021
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 25 Mar 2021 14:53:29 +0000
Subject: Create OpenStack VMs in few seconds
In-Reply-To: 
References: 
Message-ID: <89264fbb-c3d8-816f-d2c7-f1122212ac8f@redhat.com>

On 25/03/2021 14:47, Sean Mooney wrote:
> This is a demo of a third-party extension that was never upstreamed.
>
> Nova does not support creating a base VM and then doing a local live
> migration or restoring a memory snapshot to create another VM.
>
> This approach likely has several security implications that would not
> be acceptable in a multi-tenant environment.
>
> We have discussed this type of VM creation in the past and determined
> that it is not a valid implementation of spawn. A virt driver that
> pre-creates VMs or copies an existing instance can be faster, but such a
> virt driver is not considered a compliant implementation.

Just to follow up on this, see the note on VM launch in the support matrix:
https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_launch

*Launch instance*
*Status: mandatory. *
*Notes: *Importing pre-existing running virtual machines on a host is
considered out of scope of the cloud paradigm. Therefore this operation
is mandatory to support in drivers.

The type of fast boot described in that video is not a valid
implementation of creating a server instance.

> So, in short, there is no way to achieve this today in a compliant
> OpenStack-powered cloud.
>
> On 25/03/2021 12:21, open infra wrote:
>> Hi,
>>
>> I was looking for a way to provision and connect to VMs within a few
>> seconds and ended up at the OpenStack Summit demo on 'OpenStack
>> Virtual Machine Quickly Available in One Second'.
>>
>> I would highly appreciate it if someone could provide a tutorial or
>> related documentation on how to achieve this.
>>
>> Regards,
>> Danishka
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com  Thu Mar 25 14:57:12 2021
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 25 Mar 2021 14:57:12 +0000
Subject: [kolla-ansible][Neutron]Limiting bandwidth per Project/Domain/Tenant?
In-Reply-To: <1D42417A-8F1F-4F55-A663-6E151FCA70DE@poczta.onet.pl>
References: <1D42417A-8F1F-4F55-A663-6E151FCA70DE@poczta.onet.pl>
Message-ID: 

On 25/03/2021 11:12, Adam Tomas wrote:
> Hi,
> Is there any way to limit bandwidth per project, domain or tenant? According to the neutron documentation limit can be set per port (router,FIP etc.) but what if I need to limit whole project (which can have multiple routers, FIPs, etc.) - incoming and outgoing traffic?

Today there is no quota on bandwidth, and there is no way to enforce
that all networks/ports have a QoS min/max bandwidth policy applied.
Both would be required to have project-wide limits.

>
> Best regards,
> Adam
>

From openstack at nemebean.com  Thu Mar 25 15:03:44 2021
From: openstack at nemebean.com (Ben Nemec)
Date: Thu, 25 Mar 2021 10:03:44 -0500
Subject: [oslo.db][Zaqar] test_models_sync failed because oslo.db raise "AttributeError: can't set attribute"
In-Reply-To: 
References: 
Message-ID: <892c8cd2-a990-a40f-1e17-fd359b69616a@nemebean.com>

On 3/25/21 3:29 AM, hao wang wrote:
> Hi,
> There is an issue about test_models_sync test when
> openstack-tox-lower-constraints job running that I can't figure out
> why it's here, could anyone help me about this?
>
> zaqar.tests.unit.storage.sqlalchemy_migration.test_migrations.TestMigrationsMySQL.test_models_sync
> [0.306333s] ...
FAILED > > Captured traceback: > ~~~~~~~~~~~~~~~~~~~ > Traceback (most recent call last): > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/test_base.py", > line 178, in setUp > self.useFixture( > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testtools/testcase.py", > line 760, in useFixture > reraise(*exc_info) > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testtools/_compat3x.py", > line 16, in reraise > raise exc_obj.with_traceback(exc_tb) > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testtools/testcase.py", > line 735, in useFixture > fixture.setUp() > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/test_base.py", > line 65, in setUp > testresources.setUpResources( > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", > line 786, in setUpResources > setattr(test, resource[0], resource[1].getResource(result)) > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", > line 522, in getResource > self._setResource(self._make_all(result)) > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", > line 551, in _make_all > dependency_resources[name] = resource.getResource() > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", > line 522, in getResource > self._setResource(self._make_all(result)) > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/testresources/__init__.py", > line 552, in _make_all > resource = self.make(dependency_resources) > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/provision.py", > line 133, in make > url = backend.provisioned_database_url(db_token) > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/provision.py", > line 337, in provisioned_database_url > return self.impl.provisioned_database_url(self.url, ident) > > File "/home/zuul/src/opendev.org/openstack/zaqar/.tox/lower-constraints/lib/python3.8/site-packages/oslo_db/sqlalchemy/provision.py", > line 498, in provisioned_database_url > url.database = ident > > AttributeError: can't set attribute Recent versions of sqlalchemy made the url immutable. There's a patch[0] open against oslo.db to make it compatible, but it hasn't merged yet. My guess would be that the reason you're hitting this in lower-constraints is that sqlalchemy isn't being correctly constrained and you're getting a newer one than is actually supported. That's just a guess though. 
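For reference, the incompatibility boils down to SQLAlchemy 1.4 making engine
URL objects immutable. A minimal illustration of the failing pattern and the
1.4-style replacement (just a sketch, not the actual oslo.db patch):

from sqlalchemy.engine import make_url

url = make_url("mysql+pymysql://user:secret@localhost/olddb")

# What provision.py effectively does today; on SQLAlchemy < 1.4 this
# mutated the URL in place, on 1.4 it raises "can't set attribute":
# url.database = "newdb"

# 1.4-compatible form: the URL is immutable, so derive a new object instead.
url = url.set(database="newdb")
print(url.database)  # newdb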
0: https://review.opendev.org/c/openstack/oslo.db/+/747762 > > > There is the test log: > https://zuul.opendev.org/t/openstack/build/57eb103379b144abaa2517d41ef67610/log/job-output.txt > From fungi at yuggoth.org Thu Mar 25 15:04:40 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 25 Mar 2021 15:04:40 +0000 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: Message-ID: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> On 2021-03-25 14:47:09 +0000 (+0000), Sean Mooney wrote: > nova does not support create a base vm and then doing a local live > migration or restore for memory snapshots to create another vm. > > this approch likely has several security implciations that would > not be accpeatable in a multi tenant enviornment. > > we have disucssed this type of vm creation in the past and > determined that it is not a valid implematnion of spawn. a virt > driver that precreate vms or copys an existing instance can be > faster but that virt driver is not considered a compliant > implementation. > > so in short there is no way to achive this today in a compliant > openstack powered cloud. [...] The next best thing is basically what Nodepool[*] does: start new virtual machines ahead of time and keep them available in the tenant. This does of course mean you're occupying additional quota for whatever base "ready" capacity you've set for your various images/flavors, and that you need to be able to predict how many of what kinds of virtual machines you're going to need in advance. [*] https://zuul-ci.org/docs/nodepool/ -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Thu Mar 25 15:10:22 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 25 Mar 2021 16:10:22 +0100 Subject: [oslo.db][Zaqar] test_models_sync failed because oslo.db raise "AttributeError: can't set attribute" In-Reply-To: References: Message-ID: On Thu, Mar 25, 2021 at 9:29 AM hao wang wrote: > > Hi, > There is an issue about test_models_sync test when > openstack-tox-lower-constraints job running that I can't figure out I don't know the answer to your question but wanted to let you know that lower-constraints testing is entirely optional [1]. [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021204.html -yoctozepto From smooney at redhat.com Thu Mar 25 15:11:24 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 25 Mar 2021 15:11:24 +0000 Subject: Create OpenStack VMs in few seconds In-Reply-To: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> References: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> Message-ID: On 25/03/2021 15:04, Jeremy Stanley wrote: > On 2021-03-25 14:47:09 +0000 (+0000), Sean Mooney wrote: >> nova does not support create a base vm and then doing a local live >> migration or restore for memory snapshots to create another vm. >> >> this approch likely has several security implciations that would >> not be accpeatable in a multi tenant enviornment. >> >> we have disucssed this type of vm creation in the past and >> determined that it is not a valid implematnion of spawn. a virt >> driver that precreate vms or copys an existing instance can be >> faster but that virt driver is not considered a compliant >> implementation. >> >> so in short there is no way to achive this today in a compliant >> openstack powered cloud. > [...] 
>
> The next best thing is basically what Nodepool[*] does: start new
> virtual machines ahead of time and keep them available in the
> tenant. This does of course mean you're occupying additional quota
> for whatever base "ready" capacity you've set for your various
> images/flavors, and that you need to be able to predict how many of
> what kinds of virtual machines you're going to need in advance.
>
> [*] https://zuul-ci.org/docs/nodepool/

Ya, if you had preemptible instances that would make that model a little
more compelling, especially if you could move them from preemptible to
non-preemptible when you actually use them. Unfortunately we don't have
that feature in nova today either.

If the extra cost of pre-creating the instances is acceptable, then yes,
using something like Nodepool could help, but it requires a different way
of interacting with OpenStack as the consuming application, and it
somewhat limits your network/volume config unless you modify it after the
fact, once you have retrieved a reserved instance from Nodepool.

From DHilsbos at performair.com  Thu Mar 25 16:02:59 2021
From: DHilsbos at performair.com (DHilsbos at performair.com)
Date: Thu, 25 Mar 2021 16:02:59 +0000
Subject: [ops][victoria] DNS for Tenants
Message-ID: <0670B960225633449A24709C291A52524FBAA1B2@COM01.performair.local>

All;

How do we provide DNS services to tenant networks?

We're running Victoria, and have Designate installed and configured, but
it doesn't seem to have a presence on the tenant networks. I assume if the
instance can route to an IP on the network node then the installed Bind
can be used. I'd rather not have to deal with this, however. I would like
the DNS to operate in a relay fashion to our previously configured DNS
servers, though I assume that standard Bind configuration for relay should
take care of this.

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
DHilsbos at PerformAir.com
www.PerformAir.com

From johnsomor at gmail.com  Thu Mar 25 16:12:45 2021
From: johnsomor at gmail.com (Michael Johnson)
Date: Thu, 25 Mar 2021 09:12:45 -0700
Subject: [ops][victoria] DNS for Tenants
In-Reply-To: <0670B960225633449A24709C291A52524FBAA1B2@COM01.performair.local>
References: <0670B960225633449A24709C291A52524FBAA1B2@COM01.performair.local>
Message-ID: 

There is a document in the neutron networking guide that discusses how
to provide DNS resolution for instances that may help you:
https://docs.openstack.org/neutron/victoria/admin/config-dns-res.html

Michael

On Thu, Mar 25, 2021 at 9:07 AM  wrote:
>
> All;
>
> How do we provide DNS services to tenant networks?
>
> We're running Victoria, and have Designate installed and configured, but it doesn't seem to have a presence on the tenant networks. I assume if the instance can route to an IP on the network node then the installed Bind can be used. I'd rather not have to deal with this, however. I would like the DNS to operate in a relay fashion to our previously configured DNS servers, though I assume that standard Bind configuration for relay should take care of this.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com
>
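The relay case Dominic describes is the one where the Neutron DHCP agent's
dnsmasq forwards instance queries to existing resolvers, which that document
covers. As a rough sketch only (real option names from the DHCP agent, but
the addresses are placeholders for your own DNS servers; adjust to your
deployment), it comes down to something like this in dhcp_agent.ini on the
network nodes:

[DEFAULT]
# Forward instance DNS queries from dnsmasq to the existing resolvers
# instead of whatever the network node's own resolv.conf points at.
dnsmasq_dns_servers = 192.0.2.53,192.0.2.54

followed by a restart of the neutron-dhcp-agent service. Individual subnets
can still override this with their own dns_nameservers, and the related
dnsmasq_local_resolv option covers the variant where the network node's own
resolvers should be used instead.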
From zigo at debian.org  Thu Mar 25 16:24:55 2021
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 25 Mar 2021 17:24:55 +0100
Subject: [puppet] Artificially inflated dependencies in metadata.json of all modules
Message-ID: <24c1dcbe-67ec-c80b-dc0c-5f6fea8a6a1b@debian.org>

Hi,

Each time there's a new release, all of the modules are getting a new
set of lower bounds for:
- openstack/keystone
- openstack/openstacklib
- openstack/oslo

I just did a quick check and this really doesn't make sense. For
example, going from 18.2.0 to 18.3.0 involves no API breakage that
would require a version bump in all modules.

So, could we please stop this nonsense and restore some kind of sanity
in our dependency management? FYI, I'm expressing these bounds in the
packaged version of the puppet modules, and it increases complexity
for no reason.

A lower bound of a dependency should be increased *only* when really
mandatory (i.e. a backward compatibility breakage, a change in the
API, etc.).

Your thoughts?

Cheers,

Thomas Goirand (zigo)

From DHilsbos at performair.com  Thu Mar 25 16:30:56 2021
From: DHilsbos at performair.com (DHilsbos at performair.com)
Date: Thu, 25 Mar 2021 16:30:56 +0000
Subject: [ops][victoria] DNS for Tenants
In-Reply-To: 
References: <0670B960225633449A24709C291A52524FBAA1B2@COM01.performair.local>
Message-ID: <0670B960225633449A24709C291A52524FBAA3A0@COM01.performair.local>

Michael;

Thank you for the response.

That tells me how to make the tenant instances aware of DNS servers. It
also provides information on how to get the DHCP providers (dnsmasq) to
act as a relay.

How do I control where dnsmasq relays to? Would standard dnsmasq tutorials
be sufficient for finding this?

Thank you,

Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
DHilsbos at PerformAir.com
www.PerformAir.com

-----Original Message-----
From: Michael Johnson [mailto:johnsomor at gmail.com]
Sent: Thursday, March 25, 2021 9:13 AM
To: Dominic Hilsbos
Cc: openstack-discuss
Subject: Re: [ops][victoria] DNS for Tenants

There is a document in the neutron networking guide that discusses how
to provide DNS resolution for instances that may help you:
https://docs.openstack.org/neutron/victoria/admin/config-dns-res.html

Michael

On Thu, Mar 25, 2021 at 9:07 AM  wrote:
>
> All;
>
> How do we provide DNS services to tenant networks?
>
> We're running Victoria, and have Designate installed and configured, but it doesn't seem to have a presence on the tenant networks. I assume if the instance can route to an IP on the network node then the installed Bind can be used. I'd rather not have to deal with this, however. I would like the DNS to operate in a relay fashion to our previously configured DNS servers, though I assume that standard Bind configuration for relay should take care of this.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com > www.PerformAir.com > > From aschultz at redhat.com Thu Mar 25 16:39:36 2021 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 25 Mar 2021 10:39:36 -0600 Subject: [puppet] Artificially inflated dependencies in metadata.json of all modules In-Reply-To: <24c1dcbe-67ec-c80b-dc0c-5f6fea8a6a1b@debian.org> References: <24c1dcbe-67ec-c80b-dc0c-5f6fea8a6a1b@debian.org> Message-ID: On Thu, Mar 25, 2021 at 10:30 AM Thomas Goirand wrote: > Hi, > > Each time there's a new release, all of the modules are getting a new > set of lower bounds for: > - openstack/keystone > - openstack/openstacklib > - openstack/oslo > > I just did a quick check and this really doesn't make sense. For > example, for going from 18.2.0 to 18.3.0, there's no breakage of the API > that would require a bump of version in all modules. > > These are milestone releases and not tracked independently across all the modules. I believe you are basically asking for the modules to go independent and not have milestone releases. While technically the version interactions you are describing may be true for the most part in that it isn't necessary, we assume the deployment as a whole in case something lands that actually warrants this. The reality is this only affects master and you likely want to not track the latest versions until GA. > So, could we please stop this non-sense and restore some kind of sanity > in our dependency management ? FYI, I'm expressing these in the packaged > version of the puppet modules, and it increase complexity for no reasons. > > It feels like the ask is for more manual version management on the Puppet OpenStack team (because we have to manually manage metadata.json before releasing), rather than just automating version updates for your packaging. This existing release model has been in place for at least 5 years now if not longer so reworking the tooling/process to understand your requirement seems a bit much given the lack of contributors. If you feel that you could get away with requiring >= 18 rather than a minor version, perhaps you could add that logic in your packaging tooling instead of asking us to stop releasing versions. > A lower bound of a dependency should be increased *only* when really > mandatory (ie: a backward compatibility breakage, a change in the API, > etc.). > > Your thoughts? > > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Thu Mar 25 16:56:25 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Thu, 25 Mar 2021 17:56:25 +0100 Subject: [telemetry] OpenStack Telemetry Xena PTG planning Message-ID: <48632860-1778-340b-4926-512c7221b6d7@matthias-runge.de> Hi there, The next virtual PTG is getting near very quickly. If you haven't registered yet, please do so as soon as possible[1]. We have signed up for slots on Tuesday and Wednesday April 20 and 21 for the slots between 13 and 16 UTC. If there is anything you would like to talk about or something you'd like to address/to work on, please feel free to add it to the etherpad[2]. Thank you and see you there! 
Matthias [1] https://april2021-ptg.eventbrite.com/ [2] https://etherpad.opendev.org/p/telemetry-xena-ptg From marios at redhat.com Thu Mar 25 16:58:30 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 25 Mar 2021 18:58:30 +0200 Subject: [TripleO] Xena PTG is coming 19 April Message-ID: Hello tripleo friends o/ It's PTG time again \o/ and I have started reaching out to/harassing folks this week about topics. I want to send a *HUGE* thanks to all the usual suspects who have already proposed some sessions. Like last time let's collect all the topics into the topic etherpad at [1] and then around Monday 5th April or so we can start to work out the schedule (2 weeks before PTG). I have reserved 1300-1700 UTC on Monday/Tue/Wed/Thu as noted there [2] - we may not need all of those days it depends how many sessions we will have proposed. So over the next few days please think about the Wallaby or Xena tripleo related things you'd like to discuss/provide updates on/socialise with the wider TripleO team. Then go add them into [1] using the template there (Topic/Description/Etherpad Link/Proposer). BUT before you do any of that make sure you go and register for PTG at [3]. I hope we get a great turn out like last time! regards, marios [1] https://etherpad.opendev.org/p/tripleo-xena-topics [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020953.html [3] https://april2021-ptg.eventbrite.com/ From whayutin at redhat.com Thu Mar 25 17:28:56 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 25 Mar 2021 11:28:56 -0600 Subject: [tripleo][ci] tripleo gate is hung Message-ID: Greetings, https://zuul.openstack.org/status top of the gate patch is hung 774500,1 Infra is looking into it. See #openstack-infra for details Thanks fungi!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu Mar 25 17:39:16 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 25 Mar 2021 10:39:16 -0700 Subject: [infra][stable] Request for Designate stable core status Message-ID: I would like to request stable core status for the Designate repositories. As PTL I am currently unable to approve even the release bot patches for some Designate stable branches[1]. I have been a stable reviewer for Octavia for years and if there are concerns about my stable policy track record I hope that my work on Octavia would highlight my respect for the stable release policy. Thanks, Michael [1] https://review.opendev.org/c/openstack/designate-dashboard/+/783056 From DHilsbos at performair.com Thu Mar 25 17:44:07 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Thu, 25 Mar 2021 17:44:07 +0000 Subject: [ops][victoria] DNS for Tenants In-Reply-To: <0670B960225633449A24709C291A52524FBAA3A0@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBAA1B2@COM01.performair.local> <0670B960225633449A24709C291A52524FBAA3A0@COM01.performair.local> Message-ID: <0670B960225633449A24709C291A52524FBAA72D@COM01.performair.local> Michael; My apologies, that answer to that is in that page. Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: DHilsbos at performair.com [mailto:DHilsbos at performair.com] Sent: Thursday, March 25, 2021 9:31 AM To: johnsomor at gmail.com; openstack-discuss at lists.openstack.org Subject: RE: [ops][victoria] DNS for Tenants Michael; Thank you for the response. 
That tells me how to make the tenant instances aware of DNS servers. It also provides information on how to get the DHCP providers (dnsmasq) to act as a relay. How do I control where dnsmasq relays to? Would standard dnsmasq tutorials be sufficient for finding this? Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Michael Johnson [mailto:johnsomor at gmail.com] Sent: Thursday, March 25, 2021 9:13 AM To: Dominic Hilsbos Cc: openstack-discuss Subject: Re: [ops][victoria] DNS for Tenants There is a document in the neutron networking guide that discusses how to provide DNS resolution for instances that may help you: https://docs.openstack.org/neutron/victoria/admin/config-dns-res.html Michael On Thu, Mar 25, 2021 at 9:07 AM wrote: > > All; > > How do we provide DNS services to tenant networks? > > We're running Victoria, and have Designate installed and configured, but it doesn't seem to have a presence on the tenant networks. I assume if the instance can route to an IP on the network node then the installed Bind can be used. I'd rather not have to deal with this, however. I would like the DNS to operate in a relay fashion to our previously configured DNS servers, though I assume that standard Bind configuration for relay should take care of this. > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > From gmann at ghanshyammann.com Thu Mar 25 17:51:28 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 25 Mar 2021 12:51:28 -0500 Subject: [policy pop-up] Xena PTG planning Message-ID: <1786a833bda.11f880c521090888.17385574951869455@ghanshyammann.com> Hello Everyone, We have booked 2 hrs slot for the policy popup team in the Xena virtual PTG. The time for the slot is 15.00-17.00 UTC Monday, April 19 (2 hours; Essex Room). I have created the etherpad to collect the discussion point. Please add the topic you would like to discuss - https://etherpad.opendev.org/p/policy-popup-xena-ptg We request more and more projects to participate in this and discuss the various queries or plan to switch to the new policy in a single place instead of duplicating the discussion in each project slot. -gmann & lbragstad From mkopec at redhat.com Thu Mar 25 18:01:38 2021 From: mkopec at redhat.com (Martin Kopec) Date: Thu, 25 Mar 2021 19:01:38 +0100 Subject: [qa][ptg] Xena PTG Message-ID: Hello, we signed up for the following slots [1] at Cactus room: * Mon Apr. 19th 13 - 15 UTC (2 slots) * Tue Apr. 20th 13 - 14 UTC (1 slot) * Wed Apr. 21th 13 - 14 UTC (1 slot) The PTG etherpad [2] with the topics we have so far. If you have something else you wanna discuss, feel free to add it to the agenda [2]. [1] https://etherpad.opendev.org/p/qa-xena-ptg [2[ https://etherpad.opendev.org/p/qa-xena-ptg See you at PTG, -- *Martin Kopec* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From C-Albert.Braden at charter.com Thu Mar 25 18:07:33 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 25 Mar 2021 18:07:33 +0000 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on masakari Message-ID: I've created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 Everything seems to work fine until I try to deploy the first replacement controller into the cluster. I upgrade RMQ, ES and Kibana, then follow the "remove existing controller" process to remove control0, create a Centos 8 VM, bootstrap the cluster, pull containers to the new control0, and everything is still working. Then I type the last command "kolla-ansible -i multinode deploy --limit control" The RMQ install works and I see all 3 nodes up in the RMQ admin, but it takes a long time to complete "TASK [service-ks-register : masakari | Creating users] " and then hangs on "TASK [service-ks-register : masakari | Creating roles]". At this time the new control0 becomes unreachable and drops out of the RMQ cluster. I can still ping it but console hangs along with new and existing ssh sessions. It appears that the CPU may be maxed out and not allowing interrupts. Eventually I see error "fatal: [control0]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}" RMQ seems fine on the 2 old controllers; they just don't see the new control0 active: (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', 'rabbit at chrnc-void-testupgrade-control-0-replace', 'rabbit at chrnc-void-testupgrade-control-1', 'rabbit at chrnc-void-testupgrade-control-2']}]}, {running_nodes,['rabbit at chrnc-void-testupgrade-control-1', 'rabbit at chrnc-void-testupgrade-control-2']}, {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, {partitions,[]}, {alarms,[{'rabbit at chrnc-void-testupgrade-control-1',[]}, {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] After this the HAProxy IP is pingable but openstack commands are failing: (openstack) [root at chrnc-void-testupgrade-build openstack]# osl Failed to discover available identity versions when contacting http://172.16.0.100:35357/v3. Attempting to parse version from URL. Gateway Timeout (HTTP 504) After about an hour my open ssh session on the new control0 responded and confirmed that the CPU is maxed out: [root at chrnc-void-testupgrade-control-0-replace /]# uptime 17:41:55 up 1:55, 0 users, load average: 157.87, 299.75, 388.69 I built new heat stacks and tried it a few times, and it consistently fails on masakari. Do I need to change something in my masakari config before upgrading Train from Centos 7 to Centos 8? I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. 
If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Mar 25 18:16:34 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 25 Mar 2021 19:16:34 +0100 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on masakari In-Reply-To: References: Message-ID: Hi Albert, I can assure you this is unrelated to Masakari. As you have observed, it's the RabbitMQ and Keystone (perhaps due to MariaDB?) that failed. Something is abusing the CPU there. What is that process? -yoctozepto On Thu, Mar 25, 2021 at 7:09 PM Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > Everything seems to work fine until I try to deploy the first replacement controller into the cluster. I upgrade RMQ, ES and Kibana, then follow the “remove existing controller” process to remove control0, create a Centos 8 VM, bootstrap the cluster, pull containers to the new control0, and everything is still working. Then I type the last command “kolla-ansible -i multinode deploy --limit control” > > > > The RMQ install works and I see all 3 nodes up in the RMQ admin, but it takes a long time to complete “TASK [service-ks-register : masakari | Creating users] “ and then hangs on “TASK [service-ks-register : masakari | Creating roles]”. At this time the new control0 becomes unreachable and drops out of the RMQ cluster. I can still ping it but console hangs along with new and existing ssh sessions. It appears that the CPU may be maxed out and not allowing interrupts. Eventually I see error “fatal: [control0]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}” > > > > RMQ seems fine on the 2 old controllers; they just don’t see the new control0 active: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After this the HAProxy IP is pingable but openstack commands are failing: > > > > (openstack) [root at chrnc-void-testupgrade-build openstack]# osl > > Failed to discover available identity versions when contacting http://172.16.0.100:35357/v3. Attempting to parse version from URL. > > Gateway Timeout (HTTP 504) > > > > After about an hour my open ssh session on the new control0 responded and confirmed that the CPU is maxed out: > > > > [root at chrnc-void-testupgrade-control-0-replace /]# uptime > > 17:41:55 up 1:55, 0 users, load average: 157.87, 299.75, 388.69 > > > > I built new heat stacks and tried it a few times, and it consistently fails on masakari. 
Do I need to change something in my masakari config before upgrading Train from Centos 7 to Centos 8? > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From fungi at yuggoth.org Thu Mar 25 18:39:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 25 Mar 2021 18:39:35 +0000 Subject: [tripleo][ci][infra] tripleo gate is hung In-Reply-To: References: Message-ID: <20210325183935.bnyesfxbfahlfo7g@yuggoth.org> On 2021-03-25 11:28:56 -0600 (-0600), Wesley Hayutin wrote: > Greetings, > > https://zuul.openstack.org/status > top of the gate patch is hung 774500,1 > > Infra is looking into it. See #openstack-infra for details > Thanks fungi!! Things are moving again. We had some node requests in a weird state where the launcher did not confirm completion or rejection of the request, all for the same provider within the same 15-minute window. Restarting the launcher unlocked those requests and they were picked up by other launchers, so already tested changes in the queue should not end up with their builds restarted unless the stuck change fails one of its remaining jobs for some unrelated reason. Sorry about that! I'm still working to try and identify what caused that, but things should be back to normal. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From Vladislav.Belogrudov at dell.com Thu Mar 25 18:45:03 2021 From: Vladislav.Belogrudov at dell.com (Belogrudov, Vladislav) Date: Thu, 25 Mar 2021 18:45:03 +0000 Subject: [cinder] need core reviewers for small driver fix Message-ID: Dear core reviewers, I would like to ask for help in reviewing a small bug fix in Cinder driver for Dell EMC PowerStore. It's just 2 chars, the driver wrongly used "eq" operator in PostgREST request instead of "cs" for finding a value in an array. The fix is under review [1]. Official API guide for PowerStore [2] says that ip_pool_address endpoint returns address instances with array of purposes, where the driver searches for iSCSI purpose. If there are more purposes configured for an iSCSI target address, the driver fails to operate. It would be great if the fix could go to Wallaby. Best regards, Vladislav Belogrudov [1] https://review.opendev.org/c/openstack/cinder/+/782087 [2] https://downloads.dell.com/manuals/common/pwrstr-apig_en-us.pdf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Thu Mar 25 18:50:58 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 25 Mar 2021 12:50:58 -0600 Subject: [tripleo][ci][infra] tripleo gate is hung In-Reply-To: <20210325183935.bnyesfxbfahlfo7g@yuggoth.org> References: <20210325183935.bnyesfxbfahlfo7g@yuggoth.org> Message-ID: On Thu, Mar 25, 2021 at 12:41 PM Jeremy Stanley wrote: > On 2021-03-25 11:28:56 -0600 (-0600), Wesley Hayutin wrote: > > Greetings, > > > > https://zuul.openstack.org/status > > top of the gate patch is hung 774500,1 > > > > Infra is looking into it. See #openstack-infra for details > > Thanks fungi!! > > Things are moving again. We had some node requests in a weird state > where the launcher did not confirm completion or rejection of the > request, all for the same provider within the same 15-minute window. > Restarting the launcher unlocked those requests and they were picked > up by other launchers, so already tested changes in the queue should > not end up with their builds restarted unless the stuck change fails > one of its remaining jobs for some unrelated reason. > > Sorry about that! I'm still working to try and identify what caused > that, but things should be back to normal. > -- > Jeremy Stanley > Thanks Jeremy!! Happy hunting for the root cause :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.carden at gmail.com Thu Mar 25 20:10:27 2021 From: mike.carden at gmail.com (Mike Carden) Date: Fri, 26 Mar 2021 07:10:27 +1100 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: Message-ID: You could look at the work of a former Nova PTL that tries to solve this problem: https://shakenfist.com/ -- MC On Thu, Mar 25, 2021 at 11:23 PM open infra wrote: > Hi, > > I was looking for a way to provision and connect to VMs within a few > seconds and ended up in the OpenStack Summit demo on 'OpenStack Virtual > Machine Quickly Available in One Second'. > > I highly appreciate if someone can provide a tutorial or related > documentation on how to achieve this? > > Regards, > Danishka > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Mar 25 20:22:02 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 25 Mar 2021 21:22:02 +0100 Subject: [puppet] Artificially inflated dependencies in metadata.json of all modules In-Reply-To: References: <24c1dcbe-67ec-c80b-dc0c-5f6fea8a6a1b@debian.org> Message-ID: <7d07721a-722d-5c53-4aee-396a35a68a88@debian.org> Hi Alex, Thanks for your time replying to my original post. On 3/25/21 5:39 PM, Alex Schultz wrote: > It feels like the ask is for more manual version management on the > Puppet OpenStack team (because we have to manually manage metadata.json > before releasing), rather than just automating version updates for your > packaging. Not at all. I'm asking for dependency to vaguely reflect reality, like we've been doing this for years in the Python world of OpenStack. > This existing release model has been in place for at least 5 > years now if not longer Well... hum... how can I put it nicely... :) Well, it's been wrong for 5 years then! :) And if I'm being vocal now, it's because it's been annoying me for that long. > so reworking the tooling/process to understand your requirement That's not for *my* requirement, but for dependencies to really mean what they should: express incompatibility with earlier versions when this happens. > seems a bit much given the lack of contributors. 
The above sentence is IMO the only valid one of your argumentation: I can understand "not enough time", no problem! :) > If you feel that you could get away with requiring >= 18 rather than a > minor version, perhaps you could add that logic in your packaging > tooling instead of asking us to stop releasing versions. I could get away with no version relationship at all, but that's really not the right thing to do. I'd like the dependencies to vaguely express some kind of reality, which isn't possible the current way. There's no way to get away from that problem with tooling: the tooling will not understand that an API has changed in a module or a puppet provider. It's only the authors of the patches that will know. Perhaps we should also try to have some kind of CI testing to validate lower bounds to solve that problem? Cheers, Thomas Goirand (zigo) From zigo at debian.org Thu Mar 25 20:42:07 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 25 Mar 2021 21:42:07 +0100 Subject: [puppet] Artificially inflated dependencies in metadata.json of all modules In-Reply-To: <7d07721a-722d-5c53-4aee-396a35a68a88@debian.org> References: <24c1dcbe-67ec-c80b-dc0c-5f6fea8a6a1b@debian.org> <7d07721a-722d-5c53-4aee-396a35a68a88@debian.org> Message-ID: <329974b0-ef8a-c7d8-687e-592a4fe97596@debian.org> On 3/25/21 9:22 PM, Thomas Goirand wrote: > Hi Alex, > > Thanks for your time replying to my original post. > > On 3/25/21 5:39 PM, Alex Schultz wrote: >> It feels like the ask is for more manual version management on the >> Puppet OpenStack team (because we have to manually manage metadata.json >> before releasing), rather than just automating version updates for your >> packaging. > > Not at all. I'm asking for dependency to vaguely reflect reality, like > we've been doing this for years in the Python world of OpenStack. > >> This existing release model has been in place for at least 5 >> years now if not longer > > Well... hum... how can I put it nicely... :) Well, it's been wrong for 5 > years then! :) Let me give an example. Today, puppet-ironic got released in version 18.3.0. The only thing that changed in it since 18.2.0 is a bunch of metadata bumping to 18.2.0... Why haven't we just kept version 18.2.0? It's the exact same content... Cheers, Thomas Goirand (zigo) From sean.mcginnis at gmx.com Thu Mar 25 20:45:40 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 25 Mar 2021 15:45:40 -0500 Subject: [infra][stable] Request for Designate stable core status In-Reply-To: References: Message-ID: <20210325204540.GA3490724@sm-workstation> On Thu, Mar 25, 2021 at 10:39:16AM -0700, Michael Johnson wrote: > I would like to request stable core status for the Designate repositories. > > As PTL I am currently unable to approve even the release bot patches > for some Designate stable branches[1]. > > I have been a stable reviewer for Octavia for years and if there are > concerns about my stable policy track record I hope that my work on > Octavia would highlight my respect for the stable release policy. > > Thanks, > Michael Hey Micheal, I have added you to designate-stable-maint. Thanks for paying attention to stable branches. 
Sean From aschultz at redhat.com Thu Mar 25 20:49:03 2021 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 25 Mar 2021 14:49:03 -0600 Subject: [puppet] Artificially inflated dependencies in metadata.json of all modules In-Reply-To: <7d07721a-722d-5c53-4aee-396a35a68a88@debian.org> References: <24c1dcbe-67ec-c80b-dc0c-5f6fea8a6a1b@debian.org> <7d07721a-722d-5c53-4aee-396a35a68a88@debian.org> Message-ID: On Thu, Mar 25, 2021 at 2:26 PM Thomas Goirand wrote: > Hi Alex, > > Thanks for your time replying to my original post. > > On 3/25/21 5:39 PM, Alex Schultz wrote: > > It feels like the ask is for more manual version management on the > > Puppet OpenStack team (because we have to manually manage metadata.json > > before releasing), rather than just automating version updates for your > > packaging. > > Not at all. I'm asking for dependency to vaguely reflect reality, like > we've been doing this for years in the Python world of OpenStack. > > > This existing release model has been in place for at least 5 > > years now if not longer > > Well... hum... how can I put it nicely... :) Well, it's been wrong for 5 > years then! :) > > And if I'm being vocal now, it's because it's been annoying me for that > long. > > Versions in puppet have always been problematic because of the incompatibilities between how python can handle milestones vs puppet in the openstack ecosystem. Puppet being purely semver means we can't do any of the pre-release (there is no 13.3.0.0a1) things that the other openstack projects can do. So in order to do this, we're releasing minor versions as points during the main development to allow for folks to match up to the upstream milestones. Is it ideal? no. Does it really matter? no. Puppet modules inherently aren't package friendly. metadata.json is for the forge where the module dependencies will automagically be sorted out and you want the modules with the correct versions. > > so reworking the tooling/process to understand your requirement > > That's not for *my* requirement, but for dependencies to really mean > what they should: express incompatibility with earlier versions when > this happens. > > It is your requirement because you're putting these constraints in place in your packaging while tracking the current development. As I mentioned this is only really an issue until GA. Once GA hits, we don't unnecessarily rev these versions which means there isn't the churn you don't particularly care for. > > seems a bit much given the lack of contributors. > > The above sentence is IMO the only valid one of your argumentation: I > can understand "not enough time", no problem! :) > Given the number of modules, trying to maintain the versions with the super strict semver reasons causes more issues than following a looser strategy to handle versions knowing that there are likely going to be breaking changes between major releases. In puppet we try to maintain N and N-1 backwards compatibilities but given the number of modules it makes it really really hard to track and the overall benefit to do so is minimal. If we follow the patterns usually a major version bump hits on m1 and X.3 is the GA or RC version. > > > If you feel that you could get away with requiring >= 18 rather than a > > minor version, perhaps you could add that logic in your packaging > > tooling instead of asking us to stop releasing versions. > > I could get away with no version relationship at all, but that's really > not the right thing to do. 
I'd like the dependencies to vaguely express > some kind of reality, which isn't possible the current way. There's no > way to get away from that problem with tooling: the tooling will not > understand that an API has changed in a module or a puppet provider. > It's only the authors of the patches that will know. > > There is, don't add a strict requirement beyond the major version or wait until GA before implementing minimum versions. Perhaps we should also try to have some kind of CI testing to validate > lower bounds to solve that problem? > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Thu Mar 25 20:56:56 2021 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 25 Mar 2021 14:56:56 -0600 Subject: [puppet] Artificially inflated dependencies in metadata.json of all modules In-Reply-To: <329974b0-ef8a-c7d8-687e-592a4fe97596@debian.org> References: <24c1dcbe-67ec-c80b-dc0c-5f6fea8a6a1b@debian.org> <7d07721a-722d-5c53-4aee-396a35a68a88@debian.org> <329974b0-ef8a-c7d8-687e-592a4fe97596@debian.org> Message-ID: On Thu, Mar 25, 2021 at 2:48 PM Thomas Goirand wrote: > On 3/25/21 9:22 PM, Thomas Goirand wrote: > > Hi Alex, > > > > Thanks for your time replying to my original post. > > > > On 3/25/21 5:39 PM, Alex Schultz wrote: > >> It feels like the ask is for more manual version management on the > >> Puppet OpenStack team (because we have to manually manage metadata.json > >> before releasing), rather than just automating version updates for your > >> packaging. > > > > Not at all. I'm asking for dependency to vaguely reflect reality, like > > we've been doing this for years in the Python world of OpenStack. > > > >> This existing release model has been in place for at least 5 > >> years now if not longer > > > > Well... hum... how can I put it nicely... :) Well, it's been wrong for 5 > > years then! :) > > Let me give an example. Today, puppet-ironic got released in version > 18.3.0. The only thing that changed in it since 18.2.0 is a bunch of > metadata bumping to 18.2.0... > > Why haven't we just kept version 18.2.0? It's the exact same content... > > Release due to milestone 3. Like I said, we could switch to independent or just stop doing milestone releases, but then that causes other problems and overhead. Given the lower amount of changes in the more recent releases, it might make sense to switch but I think that's a conversation that isn't necessarily puppet specific but could be expanded to openstack releases in general. From a RDO standpoint, we build the packages in dlrn which include dates/hashes and so the versions only matter for upgrades (we don't enforce the metadata.json requirements). Dropping milestones wouldn't affect us too badly, but we'd still want an initial metadata.json rev at the start of a cycle. We could hold off on releasing until much later and you wouldn't get the churn. You'd also not be able to match the puppet modules to any milestone release during the current development cycle. > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From C-Albert.Braden at charter.com Thu Mar 25 21:01:09 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 25 Mar 2021 21:01:09 +0000 Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on masakari In-Reply-To: References: Message-ID: After about 2 hours the CPU settles down and then control0 joins the RMQ cluster and the admin display looks normal. The mysql container on control0 is stopped and the elasticsearch container is restarting every 60 seconds. acf52c003292 kolla/centos-source-mariadb:train-centos8 "dumb-init -- kolla_…" 5 hours ago Exited (128) 3 hours ago mariadb 9bc064cf9b2b kolla/centos-source-elasticsearch6:train-centos8 "dumb-init --single-…" 5 hours ago Restarting (1) 20 seconds ago elasticsearch The mariadb container refuses to start: [root at chrnc-void-testupgrade-control-0-replace keystone]# docker start mariadb Error response from daemon: OCI runtime create failed: container with id exists: acf52c003292e4841af15bb3c2894b983e37de5a65fc726ae2db2049f0e6774c: unknown I see a lot of this in mariadb.log on control0: 2021-03-25 17:29:16 13436 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-2.dev.chtrse.com' (using password: NO) 2021-03-25 17:29:16 13437 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-1.dev.chtrse.com' (using password: NO) 2021-03-25 17:29:17 13438 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-0-replace.dev.chtrse.com' (using password: NO) 2021-03-25 17:29:18 13441 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-2.dev.chtrse.com' (using password: NO) 2021-03-25 17:29:18 13442 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-1.dev.chtrse.com' (using password: NO) Here's the entire mariadb.log starting when I created the cluster and ending when the container died: https://paste.ubuntu.com/p/FCn9pB6zV4/ The CPU is no longer consumed, but the times might be a clue: 60 root 20 0 0 0 0 S 0.0 0.0 104:07.36 kswapd0 1 root 20 0 253576 9756 4580 S 0.0 0.1 42:22.65 systemd 28670 42472 20 0 180416 14140 5812 S 0.0 0.2 20:00.75 memcached_expor 28515 42472 20 0 332512 29968 3408 S 0.0 0.4 12:27.02 mysqld_exporter 28436 42472 20 0 251896 21584 6956 S 0.0 0.3 12:21.69 node_exporter 14857 root 20 0 980460 45684 6592 S 0.0 0.6 9:55.42 containerd 34608 42425 20 0 732436 106800 9012 S 0.0 1.4 9:03.50 httpd 34609 42425 20 0 732436 106812 8736 S 0.0 1.4 9:01.90 httpd 29123 42472 20 0 49360 9840 1288 S 0.3 0.1 8:00.38 elasticsearch_e 15034 root 20 0 2793200 68112 0 S 0.0 0.9 6:36.63 dockerd 23161 root 20 0 113120 6056 0 S 0.0 0.1 6:25.05 containerd-shim 28592 42472 20 0 112820 16988 6348 S 0.0 0.2 6:00.40 haproxy_exporte 57248 42472 20 0 253568 74152 0 S 0.0 1.0 5:47.56 openstack-expor 28950 42472 20 0 120256 24664 9456 S 0.0 0.3 5:22.03 alertmanager 31847 root 20 0 235156 2760 2196 S 0.0 0.0 5:00.97 bash I'll build another cluster and watch top during the upgrade to see what is consuming the CPU. -----Original Message----- From: Radosław Piliszek Sent: Thursday, March 25, 2021 2:17 PM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on masakari CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi Albert, I can assure you this is unrelated to Masakari. 
As you have observed, it's the RabbitMQ and Keystone (perhaps due to MariaDB?) that failed. Something is abusing the CPU there. What is that process? -yoctozepto On Thu, Mar 25, 2021 at 7:09 PM Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > Everything seems to work fine until I try to deploy the first replacement controller into the cluster. I upgrade RMQ, ES and Kibana, then follow the “remove existing controller” process to remove control0, create a Centos 8 VM, bootstrap the cluster, pull containers to the new control0, and everything is still working. Then I type the last command “kolla-ansible -i multinode deploy --limit control” > > > > The RMQ install works and I see all 3 nodes up in the RMQ admin, but it takes a long time to complete “TASK [service-ks-register : masakari | Creating users] “ and then hangs on “TASK [service-ks-register : masakari | Creating roles]”. At this time the new control0 becomes unreachable and drops out of the RMQ cluster. I can still ping it but console hangs along with new and existing ssh sessions. It appears that the CPU may be maxed out and not allowing interrupts. Eventually I see error “fatal: [control0]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}” > > > > RMQ seems fine on the 2 old controllers; they just don’t see the new control0 active: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After this the HAProxy IP is pingable but openstack commands are failing: > > > > (openstack) [root at chrnc-void-testupgrade-build openstack]# osl > > Failed to discover available identity versions when contacting http://172.16.0.100:35357/v3. Attempting to parse version from URL. > > Gateway Timeout (HTTP 504) > > > > After about an hour my open ssh session on the new control0 responded and confirmed that the CPU is maxed out: > > > > [root at chrnc-void-testupgrade-control-0-replace /]# uptime > > 17:41:55 up 1:55, 0 users, load average: 157.87, 299.75, 388.69 > > > > I built new heat stacks and tried it a few times, and it consistently fails on masakari. Do I need to change something in my masakari config before upgrading Train from Centos 7 to Centos 8? > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. 
If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From rosmaita.fossdev at gmail.com Thu Mar 25 21:40:54 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 25 Mar 2021 17:40:54 -0400 Subject: [cinder] critical os-brick fix needs immediate review Message-ID: <5a73dc84-89ae-6d2f-2ff8-b59e7f8ba1b7@gmail.com> Gorka has a patch up to address a possible data loss situation: https://review.opendev.org/c/openstack/os-brick/+/782992 I spoke with the release team today, and if we can get the patch reviewed and merged to master and then backported to stable/wallaby by MONDAY (29 March), we can apply for a Requirements Freeze Exception so that os-brick with this fix will be included in the OpenStack coordinated wallaby release. (It's a data loss issue, of course we want the fix in the wallaby release.) So please review at your earliest convenience. thanks, brian From rosmaita.fossdev at gmail.com Thu Mar 25 22:03:54 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 25 Mar 2021 18:03:54 -0400 Subject: [cinder] need core reviewers for small driver fix In-Reply-To: References: Message-ID: <7924e6f0-28bd-d1e1-b27b-22f9e7488bfd@gmail.com> On 3/25/21 2:45 PM, Belogrudov, Vladislav wrote: > Dear core reviewers, > > I would like to ask for help in reviewing a small bug fix in Cinder > driver for Dell EMC PowerStore. It’s just 2 chars, the driver wrongly > used “eq” operator in PostgREST request instead of “cs” for finding a > value in an array. The fix is under review  [1]. Official API guide for > PowerStore [2] says that ip_pool_address endpoint returns address > instances with array of purposes, where the driver searches for iSCSI > purpose. If there are more purposes configured for an iSCSI target > address, the driver fails to operate. It would be great if the fix could > go to Wallaby. RC-1 and the stable/wallaby branch are being cut as you read this, but this sounds like a release-critical bug in that it can render the driver inoperable given a not uncommon backend configuration, so there is time to get it into the next release candidate and hence wallaby. I suggest adding a unit test to guard against a regression caused by someone coming along and "fixing" your change. 
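For context, a rough sketch of the filter difference in PostgREST-style syntax (the purpose value below is only an illustrative example, and the driver's actual request building may differ):

   # "eq" compares the whole column value, so it misses addresses whose
   # purposes field is an array that merely contains the iSCSI purpose:
   GET /ip_pool_address?purposes=eq.Storage_Iscsi_Target
   # "cs" ("contains") matches when the array contains the value, even if
   # other purposes are configured on the same address:
   GET /ip_pool_address?purposes=cs.{Storage_Iscsi_Target}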
Please add your review to the etherpad where we're tracking these: https://etherpad.opendev.org/p/cinder-wallaby-release-critical-bug-nominations > > Best regards, > > Vladislav Belogrudov > > [1] https://review.opendev.org/c/openstack/cinder/+/782087 > > > [2] https://downloads.dell.com/manuals/common/pwrstr-apig_en-us.pdf > > From missile0407 at gmail.com Thu Mar 25 23:50:42 2021 From: missile0407 at gmail.com (Eddie Yen) Date: Fri, 26 Mar 2021 07:50:42 +0800 Subject: [kolla][glance] Few questions about images. Message-ID: Hi everyone, I want to ask about an image permission issue and a Windows image issue we have hit, since we can't find any answers on the internet. 1. We're still using Rocky with Ceph as storage. Sometimes we need to re-pack an image on OpenStack. We used to save it as a snapshot (re-packed in a Nova ephemeral VM) or upload it as an image (re-packed from a volume), then set the snapshot/image's visibility to public. Since Rocky we can't do this anymore: when we try to set it public, Horizon always shows a "not enough permission" error. The workaround we're using for now is to create a Nova snapshot after re-packing, download the snapshot, then upload it again as public, but that wastes a lot of time if the images are huge. We want to know how to lift this restriction so that we can at least change a snapshot to public directly. 2. OpenStack uses virtio as the network device by default, so we always install the virtio driver when packing Windows images. Because of the network performance issue with GSO/TSO enabled, we also need to disable them in the device properties. But since (we believe) the Windows 10 2004 build, the device properties always reset these settings after Sysprep. We found a workaround [1] for this, but it does not always work. Is there a better way to solve this? Many thanks, Eddie. [1] PersistAllDeviceInstalls | Microsoft Docs -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Mar 25 23:57:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 25 Mar 2021 23:57:17 +0000 Subject: [all][elections][tc] TC Vacancy Special Election Nominations Kickoff Message-ID: <20210325235717.yq34gdoqt7qy4qfk at yuggoth.org> Nominations for one vacant OpenStack TC (Technical Committee) position are now open and will remain open until Apr 08, 2021 23:45 UTC. All nominations must be submitted as a text file to the openstack/election repository as explained at https://governance.openstack.org/election/#how-to-submit-a-candidacy Please make sure to follow the candidacy file naming convention: candidates/xena// (for example, "candidates/xena/TC/stacker at example.org"). The name of the file should match an email address for your current OpenStack Foundation Individual Membership. Take this opportunity to ensure that your OSF member profile contains current information: https://www.openstack.org/profile/ Any OpenStack Foundation Individual Member can propose their candidacy for the vacant seat on the Technical Committee. This TC vacancy special election will be held from Apr 8, 2021 23:45 UTC through to Apr 15, 2021 23:45 UTC. The electorate for the TC election are the OpenStack Foundation Individual Members who have a code contribution to one of the official teams over the Victoria to Wallaby timeframe, Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC, as well as any Extra ATCs who are acknowledged by the TC.
Note that the contribution qualifying period for this special election is being kept the same as what would have been used for the original TC election. The four already elected TC members for this term are listed as candidates in the special election, but will not appear on any resulting poll as they have already been officially elected. Only new candidates in addition to the four elected TC members for this term will appear on a subsequent poll for the TC vacancy special election. Please find below the timeline: nomination starts @ Mar 25, 2021 23:45 UTC nomination ends @ Apr 08, 2021 23:45 UTC elections start @ Apr 08, 2021 23:45 UTC elections end @ Apr 15, 2021 23:45 UTC Shortly after election officials approve candidates, they will be listed on the https://governance.openstack.org/election/ page. The electorate is requested to confirm their preferred email addresses in Gerrit prior to 2021-04-01 00:00:00+00:00, so that the emailed ballots are sent to the correct email address. This email address should match one which was provided in your foundation member profile as well. Gerrit account information and OSF member profiles can be updated at https://review.openstack.org/#/settings/contact and https://www.openstack.org/profile/ accordingly. If you have any questions please be sure to either ask them on the mailing list or to the elections officials: https://governance.openstack.org/election/#election-officials -- Jeremy Stanley on behalf of the OpenStack Technical Elections Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From xin-ran.wang at intel.com Fri Mar 26 03:00:04 2021 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Fri, 26 Mar 2021 03:00:04 +0000 Subject: [cyborg][ptg] Xena PTG Message-ID: Hi all, we signed up for the following slots [1] at Cactus room: * Tue Apr. 20th 6:00- 8:00 UTC (2 slots) * Wed Apr. 21th 6:00- 8:00 UTC (2 slots) * Thu Apr. 22th 6:00- 8:00 UTC (2 slots) * Fri Apr. 23th 6:00- 8:00 UTC (2 slots) If you want add some topic to discuss, please feel free to add it to the etherpad [2]. [1] https://ethercalc.net/oz7q0gds9zfi [2] https://etherpad.opendev.org/p/cyborg-xena-ptg Thanks, Xin-Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Fri Mar 26 05:55:21 2021 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 26 Mar 2021 06:55:21 +0100 Subject: [glance] [ops] Glance deployed on 2 nodes with rbd. Do I need a shared filesystem for the reserved stores ? Message-ID: We are using OpenStack Train but we plan to migrate soon to something newer The glance-api service is deployed on 2 nodes. We used to have 2 backends for glance (file and rbd) but now we want to use only ceph. So I guess the configuration for the stores can be simply something like this [*] Reading: https://docs.openstack.org/glance/train/admin/multistores.html is not too clear to me if I have to configure somewhere filesystem_store_datadir. In particular do I have to configure the [os_glance_tasks_store] and [os_glance_staging_store] reserved stores ? If so, do I still need shared (between the 2 glance-api nodes )filesystems for them ? 
Thanks, Massimo [*] [DEFAULT] enabled_backends = rbd:rbd [glance_store] default_backend = rbd [rbd] store_description = Ceph in replica 3 rbd_store_pool = glance-cloudtest rbd_store_user = glance-cloudtest rbd_store_ceph_conf = /etc/ceph/ceph.conf rbd_store_chunk_size = 8 -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Fri Mar 26 06:05:51 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 26 Mar 2021 11:35:51 +0530 Subject: [glance] [ops] Glance deployed on 2 nodes with rbd. Do I need a shared filesystem for the reserved stores ? In-Reply-To: References: Message-ID: Hi Massimo, If you are planning to use glance-direct import method then you need to configure os_glance_staging_store using a shared filesystem. If you plan to use only web-download and copy-image import methods then you don't need a shared filesystem for os_glance_staging_store. If glance-direct needs to be used then the below path needs to be on a shared file system. [os_glance_staging_store] filesystem_store_datadir = /path/to/staging/store If only web-download and copy-image import methods will be used then still you need to set the above section in glance-api.conf but no need for a shared filesystem in this case. Thanks & Best Regards, Abhishek Kekane On Fri, Mar 26, 2021 at 11:29 AM Massimo Sgaravatto < massimo.sgaravatto at gmail.com> wrote: > We are using OpenStack Train but we plan to migrate soon to something newer > > The glance-api service is deployed on 2 nodes. > > We used to have 2 backends for glance (file and rbd) but now we want to > use only ceph. So I guess the configuration for the stores can be simply > something like this [*] > > Reading: > https://docs.openstack.org/glance/train/admin/multistores.html > > is not too clear to me if I have to configure somewhere > filesystem_store_datadir. > In particular do I have to configure the [os_glance_tasks_store] and > [os_glance_staging_store] reserved stores ? > If so, do I still need shared (between the 2 glance-api nodes )filesystems > for them ? > > Thanks, Massimo > > > [*] > > [DEFAULT] > enabled_backends = rbd:rbd > > [glance_store] > default_backend = rbd > > > [rbd] > store_description = Ceph in replica 3 > rbd_store_pool = glance-cloudtest > rbd_store_user = glance-cloudtest > rbd_store_ceph_conf = /etc/ceph/ceph.conf > rbd_store_chunk_size = 8 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Fri Mar 26 06:07:30 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 26 Mar 2021 11:37:30 +0530 Subject: [glance] [ops] Glance deployed on 2 nodes with rbd. Do I need a shared filesystem for the reserved stores ? In-Reply-To: References: Message-ID: Also I forget to mention that Wallaby onwards there is no need to configure a shared filesystem for `os_glance_staging_store` for any of the import methods. Thanks & Best Regards, Abhishek Kekane On Fri, Mar 26, 2021 at 11:35 AM Abhishek Kekane wrote: > Hi Massimo, > > If you are planning to use glance-direct import method then you need to > configure os_glance_staging_store using a shared filesystem. If you plan to > use only web-download and copy-image import methods then you don't need a > shared filesystem for os_glance_staging_store. > > If glance-direct needs to be used then the below path needs to be on a > shared file system. 
> [os_glance_staging_store] > filesystem_store_datadir = /path/to/staging/store > > If only web-download and copy-image import methods will be used then still > you need to set the above section in glance-api.conf but no need for a > shared filesystem in this case. > > > Thanks & Best Regards, > > Abhishek Kekane > > > On Fri, Mar 26, 2021 at 11:29 AM Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> wrote: > >> We are using OpenStack Train but we plan to migrate soon to something >> newer >> >> The glance-api service is deployed on 2 nodes. >> >> We used to have 2 backends for glance (file and rbd) but now we want to >> use only ceph. So I guess the configuration for the stores can be simply >> something like this [*] >> >> Reading: >> https://docs.openstack.org/glance/train/admin/multistores.html >> >> is not too clear to me if I have to configure somewhere >> filesystem_store_datadir. >> In particular do I have to configure the [os_glance_tasks_store] and >> [os_glance_staging_store] reserved stores ? >> If so, do I still need shared (between the 2 glance-api nodes )filesystems >> for them ? >> >> Thanks, Massimo >> >> >> [*] >> >> [DEFAULT] >> enabled_backends = rbd:rbd >> >> [glance_store] >> default_backend = rbd >> >> >> [rbd] >> store_description = Ceph in replica 3 >> rbd_store_pool = glance-cloudtest >> rbd_store_user = glance-cloudtest >> rbd_store_ceph_conf = /etc/ceph/ceph.conf >> rbd_store_chunk_size = 8 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Fri Mar 26 07:43:45 2021 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 26 Mar 2021 08:43:45 +0100 Subject: [glance] [ops] Glance deployed on 2 nodes with rbd. Do I need a shared filesystem for the reserved stores ? In-Reply-To: References: Message-ID: Thanks a lot !! What about [os_glance_tasks_store] ? Is it needed ? Best Regards, Massimo On Fri, Mar 26, 2021 at 7:08 AM Abhishek Kekane wrote: > Also I forget to mention that Wallaby onwards there is no need to > configure a shared filesystem for `os_glance_staging_store` for any of the > import methods. > > Thanks & Best Regards, > > Abhishek Kekane > > > On Fri, Mar 26, 2021 at 11:35 AM Abhishek Kekane > wrote: > >> Hi Massimo, >> >> If you are planning to use glance-direct import method then you need to >> configure os_glance_staging_store using a shared filesystem. If you plan to >> use only web-download and copy-image import methods then you don't need a >> shared filesystem for os_glance_staging_store. >> >> If glance-direct needs to be used then the below path needs to be on a >> shared file system. >> [os_glance_staging_store] >> filesystem_store_datadir = /path/to/staging/store >> >> If only web-download and copy-image import methods will be used then >> still you need to set the above section in glance-api.conf but no need for >> a shared filesystem in this case. >> >> >> Thanks & Best Regards, >> >> Abhishek Kekane >> >> >> On Fri, Mar 26, 2021 at 11:29 AM Massimo Sgaravatto < >> massimo.sgaravatto at gmail.com> wrote: >> >>> We are using OpenStack Train but we plan to migrate soon to something >>> newer >>> >>> The glance-api service is deployed on 2 nodes. >>> >>> We used to have 2 backends for glance (file and rbd) but now we want to >>> use only ceph. 
So I guess the configuration for the stores can be simply >>> something like this [*] >>> >>> Reading: >>> https://docs.openstack.org/glance/train/admin/multistores.html >>> >>> is not too clear to me if I have to configure somewhere >>> filesystem_store_datadir. >>> In particular do I have to configure the [os_glance_tasks_store] and >>> [os_glance_staging_store] reserved stores ? >>> If so, do I still need shared (between the 2 glance-api nodes >>> )filesystems >>> for them ? >>> >>> Thanks, Massimo >>> >>> >>> [*] >>> >>> [DEFAULT] >>> enabled_backends = rbd:rbd >>> >>> [glance_store] >>> default_backend = rbd >>> >>> >>> [rbd] >>> store_description = Ceph in replica 3 >>> rbd_store_pool = glance-cloudtest >>> rbd_store_user = glance-cloudtest >>> rbd_store_ceph_conf = /etc/ceph/ceph.conf >>> rbd_store_chunk_size = 8 >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Mar 26 07:53:56 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 26 Mar 2021 08:53:56 +0100 Subject: [nova][placement] Wallaby release In-Reply-To: References: <8KLAPQ.YM5K5P44TRVX2@est.tech> Message-ID: Hi, We merged both the placement[1] and the nova[2] RC1 release patches that means Wallaby is now branched out to stable/wallaby. However this does not mean that master is fully open to Xena yet. Please only merge patches to master that are either needs backport and a new RC of Wallaby, or setting up the master to Xena or really low risk like doc and tooling patches. Master expected to be fully open for Xena after the last RC deadly which is 2 weeks from now. The specs repositories are open for Xena patches. Please triage incoming bugs, prepare for the coming PTG[3] and look at open specs for Xena. Thank you all who contributed to the Wallaby release! We finished 14 out of the 20 approved blueprints (70%) in this cycle. I think this is a excellent number (last cycle it was 9 bp and around ~60% completion). Cheers, gibi [1] https://review.opendev.org/c/openstack/releases/+/781733 [2] https://review.opendev.org/c/openstack/releases/+/781726 [3] https://etherpad.opendev.org/p/nova-xena-ptg On Fri, Mar 12, 2021 at 12:52, Herve Beraud wrote: > Ack (from the release team). > > Le ven. 12 mars 2021 à 11:29, Balazs Gibizer > a écrit : >> Hi, >> >> So we hit Feature Freeze yesterday. I've update launchpad blueprint >> statuses to reflect reality. >> >> There are couple of series that was approved before the freeze but >> haven't landed yet. We are pushing these through the gate: >> >> * pci-socket-affinity >> https://review.opendev.org/c/openstack/nova/+/772779 >> * port-scoped-sriov-numa-affinity >> https://review.opendev.org/c/openstack/nova/+/773792 >> * >> https://review.opendev.org/q/topic:bp/compact-db-migrations-wallaby+status:open >> * >> https://review.opendev.org/q/topic:bp/allow-secure-boot-for-qemu-kvm-guests+status:open >> >> We are finishing up the work on >> https://review.opendev.org/q/topic:vhost-vdpa+status:open where FFE >> is >> being requested. >> >> Cheers, >> gibi >> >> >> On Mon, Mar 1, 2021 at 14:31, Balazs Gibizer >> >> wrote: >> > Hi, >> > >> > We are getting close to the Wallaby release. So I create a >> tracking >> > etherpad[1] with the schedule and TODOs. >> > >> > One thing that I want to highlight is that we will hit Feature >> Freeze >> > on 11th of March. As the timeframe between FF and RC1 is short I'm >> > not plannig with FFEs. 
Patches that are approved before 11 March >> EOB >> > can be rechecked or rebased if needed and then re-approved. If you >> > have a patch that is really close but not approved before the >> > deadline, and you think there are two cores that willing to >> review it >> > before RC1, then please send a mail to the ML with [nova][FFE] >> > subject prefix not later than 16th of March EOB. >> > >> > Cheers, >> > gibi >> > >> > [1] https://etherpad.opendev.org/p/nova-wallaby-rc-potential >> > >> > >> > >> >> >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From mark at stackhpc.com Fri Mar 26 08:43:53 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 26 Mar 2021 08:43:53 +0000 Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on masakari In-Reply-To: References: Message-ID: On Thu, 25 Mar 2021 at 21:01, Braden, Albert wrote: > > After about 2 hours the CPU settles down and then control0 joins the RMQ cluster and the admin display looks normal. The mysql container on control0 is stopped and the elasticsearch container is restarting every 60 seconds. 
> > acf52c003292 kolla/centos-source-mariadb:train-centos8 "dumb-init -- kolla_…" 5 hours ago Exited (128) 3 hours ago mariadb > 9bc064cf9b2b kolla/centos-source-elasticsearch6:train-centos8 "dumb-init --single-…" 5 hours ago Restarting (1) 20 seconds ago elasticsearch > > The mariadb container refuses to start: > > [root at chrnc-void-testupgrade-control-0-replace keystone]# docker start mariadb > Error response from daemon: OCI runtime create failed: container with id exists: acf52c003292e4841af15bb3c2894b983e37de5a65fc726ae2db2049f0e6774c: unknown > > I see a lot of this in mariadb.log on control0: > > 2021-03-25 17:29:16 13436 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-2.dev.chtrse.com' (using password: NO) > 2021-03-25 17:29:16 13437 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-1.dev.chtrse.com' (using password: NO) > 2021-03-25 17:29:17 13438 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-0-replace.dev.chtrse.com' (using password: NO) > 2021-03-25 17:29:18 13441 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-2.dev.chtrse.com' (using password: NO) > 2021-03-25 17:29:18 13442 [Warning] Access denied for user 'haproxy'@'chrnc-void-testupgrade-control-1.dev.chtrse.com' (using password: NO) > > Here's the entire mariadb.log starting when I created the cluster and ending when the container died: > > https://paste.ubuntu.com/p/FCn9pB6zV4/ > > The CPU is no longer consumed, but the times might be a clue: > > 60 root 20 0 0 0 0 S 0.0 0.0 104:07.36 kswapd0 > 1 root 20 0 253576 9756 4580 S 0.0 0.1 42:22.65 systemd > 28670 42472 20 0 180416 14140 5812 S 0.0 0.2 20:00.75 memcached_expor > 28515 42472 20 0 332512 29968 3408 S 0.0 0.4 12:27.02 mysqld_exporter > 28436 42472 20 0 251896 21584 6956 S 0.0 0.3 12:21.69 node_exporter > 14857 root 20 0 980460 45684 6592 S 0.0 0.6 9:55.42 containerd > 34608 42425 20 0 732436 106800 9012 S 0.0 1.4 9:03.50 httpd > 34609 42425 20 0 732436 106812 8736 S 0.0 1.4 9:01.90 httpd > 29123 42472 20 0 49360 9840 1288 S 0.3 0.1 8:00.38 elasticsearch_e > 15034 root 20 0 2793200 68112 0 S 0.0 0.9 6:36.63 dockerd > 23161 root 20 0 113120 6056 0 S 0.0 0.1 6:25.05 containerd-shim > 28592 42472 20 0 112820 16988 6348 S 0.0 0.2 6:00.40 haproxy_exporte > 57248 42472 20 0 253568 74152 0 S 0.0 1.0 5:47.56 openstack-expor > 28950 42472 20 0 120256 24664 9456 S 0.0 0.3 5:22.03 alertmanager > 31847 root 20 0 235156 2760 2196 S 0.0 0.0 5:00.97 bash > > I'll build another cluster and watch top during the upgrade to see what is consuming the CPU. Hi Albert, I would suggest using the --tags argument to go through the kolla-ansible deploy on the new controller service by service. You can check site.yml for the order of the plays. Mark > > -----Original Message----- > From: Radosław Piliszek > Sent: Thursday, March 25, 2021 2:17 PM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on masakari > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > Hi Albert, > > I can assure you this is unrelated to Masakari. > As you have observed, it's the RabbitMQ and Keystone (perhaps due to > MariaDB?) that failed. > Something is abusing the CPU there. What is that process? 
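
To pin down which process is actually eating the CPU during that deploy, a few generic Linux commands are usually enough (nothing Kolla-specific here; the thresholds are only illustrative):

    # biggest CPU and memory consumers, one-shot
    ps aux --sort=-%cpu | head -n 15
    ps aux --sort=-%mem | head -n 15

    # overall memory pressure; lots of kswapd0 CPU time usually means the host is reclaiming/swapping
    free -h
    vmstat 1 5

If kswapd0 turns out to be the main consumer, the controller is short on memory rather than genuinely CPU-bound, and giving the node more RAM (a bigger flavor in this test setup) is the likely fix.
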
> > -yoctozepto > > On Thu, Mar 25, 2021 at 7:09 PM Braden, Albert > wrote: > > > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > > > > > Everything seems to work fine until I try to deploy the first replacement controller into the cluster. I upgrade RMQ, ES and Kibana, then follow the “remove existing controller” process to remove control0, create a Centos 8 VM, bootstrap the cluster, pull containers to the new control0, and everything is still working. Then I type the last command “kolla-ansible -i multinode deploy --limit control” > > > > > > > > The RMQ install works and I see all 3 nodes up in the RMQ admin, but it takes a long time to complete “TASK [service-ks-register : masakari | Creating users] “ and then hangs on “TASK [service-ks-register : masakari | Creating roles]”. At this time the new control0 becomes unreachable and drops out of the RMQ cluster. I can still ping it but console hangs along with new and existing ssh sessions. It appears that the CPU may be maxed out and not allowing interrupts. Eventually I see error “fatal: [control0]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}” > > > > > > > > RMQ seems fine on the 2 old controllers; they just don’t see the new control0 active: > > > > > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... > > > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > > > 'rabbit at chrnc-void-testupgrade-control-1', > > > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-1', > > > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > > > {partitions,[]}, > > > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-1',[]}, > > > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > > > > > After this the HAProxy IP is pingable but openstack commands are failing: > > > > > > > > (openstack) [root at chrnc-void-testupgrade-build openstack]# osl > > > > Failed to discover available identity versions when contacting http://172.16.0.100:35357/v3. Attempting to parse version from URL. > > > > Gateway Timeout (HTTP 504) > > > > > > > > After about an hour my open ssh session on the new control0 responded and confirmed that the CPU is maxed out: > > > > > > > > [root at chrnc-void-testupgrade-control-0-replace /]# uptime > > > > 17:41:55 up 1:55, 0 users, load average: 157.87, 299.75, 388.69 > > > > > > > > I built new heat stacks and tried it a few times, and it consistently fails on masakari. Do I need to change something in my masakari config before upgrading Train from Centos 7 to Centos 8? > > > > > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > > > > > The contents of this e-mail message and > > any attachments are intended solely for the > > addressee(s) and may contain confidential > > and/or legally privileged information. 
If you > > are not the intended recipient of this message > > or if this message has been addressed to you > > in error, please immediately alert the sender > > by reply e-mail and then delete this message > > and any attachments. If you are not the > > intended recipient, you are notified that > > any use, dissemination, distribution, copying, > > or storage of this message or any attachment > > is strictly prohibited. > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From hberaud at redhat.com Fri Mar 26 09:16:50 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 26 Mar 2021 10:16:50 +0100 Subject: [release] Release countdown for week R-2 Mar 29 - Apr 02 Message-ID: Development Focus ----------------- At this point we should have release candidates (RC1 or recent intermediary release) for all the Wallaby deliverables. Teams should be working on any release-critical bugs that would require another RC or intermediary release before the final release. Actions ------- Early in the week, the release team will be proposing stable/wallaby branch creation for all deliverables that have not branched yet, using the latest available Wallaby release as the branch point. If your team is ready to go for creating that branch, please let us know by leaving a +1 on these patches. If you would like to wait for another release before branching, you can -1 the patch and update it later in the week with the new release you would like to use. By the end of the week the release team will merge those patches though, unless an exception is granted. Once stable/wallaby branches are created, if a release-critical bug is detected, you will need to fix the issue in the master branch first, then backport the fix to the stable/wallaby branch before releasing out of the stable/wallaby branch. After all of the cycle-with-rc projects have branched we will branch devstack, grenade, and the requirements repos. This will effectively open them up for Xena development, though the focus should still be on finishing up Wallaby until the final release. For projects with translations, watch for any translation patches coming through and merge them quickly. A new release should be produced so that translations are included in the final Wallaby release. Finally, now is a good time to finalize release notes. In particular, consider adding any relevant "prelude" content. Release notes are targetted for the downstream consumers of your project, so it would be great to include any useful information for those that are going to pick up and use or deploy the Wallaby version of your project. 
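
For deliverables using reno (as most are), the prelude is just another release note; a minimal sketch, with the slug and wording purely illustrative:

    $ reno new wallaby-prelude
    # then edit the generated releasenotes/notes/wallaby-prelude-<random>.yaml:
    prelude: >
        The Wallaby release of this project focuses on ... and is a good
        place to summarize anything a deployer should know before upgrading.

reno renders the prelude at the top of the Wallaby page when the release notes are published, so it is worth a quick review before the final release.
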
Upcoming Deadlines & Dates -------------------------- Final RC deadline: 08 April, 2021 (R-1 week) Final Wallaby release: 14 April, 2021 Xena virtual PTG: 19 - 23 April, 2021 Thank you for your attention -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Fri Mar 26 09:19:23 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 26 Mar 2021 10:19:23 +0100 Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on masakari In-Reply-To: References: Message-ID: On Thu, 25 Mar 2021 at 22:02, Braden, Albert wrote: > The CPU is no longer consumed, but the times might be a clue: > > 60 root 20 0 0 0 0 S 0.0 0.0 104:07.36 kswapd0 High kswapd0 usage: do you have enough memory on these controllers? From hberaud at redhat.com Fri Mar 26 12:43:52 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 26 Mar 2021 13:43:52 +0100 Subject: [release] Meeting Time Poll Message-ID: Hello We have a few regular attendees of the Release Management meeting who have conflicts with the current meeting time. As a result, we would like to find a new time to hold the meeting. I've created a Doodle poll[1] for everyone to give their input on times. It's mostly limited to times that reasonably overlap the working day in the US and Europe since that's where most of our attendees are located. If you attend the Release Management meeting, please fill out the poll so we can hopefully find a time that works better for everyone. For the sake of organization and to allow everyone to schedule his agenda accordingly, the poll will be closed on April 5th. On that date, I will announce the time of this meeting and the date on which it will take effect. Thanks! 
[1] https://doodle.com/poll/ip6tg4fvznz7p3qx -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Fri Mar 26 13:37:51 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Fri, 26 Mar 2021 13:37:51 +0000 Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on masakari In-Reply-To: References: Message-ID: <86ff301c4b1a469fb06a46436287db08@ncwmexgp009.CORP.CHARTERCOM.com> That was it. Switching to a bigger flavor fixed it. -----Original Message----- From: Pierre Riteau Sent: Friday, March 26, 2021 5:19 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: Re: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on masakari CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Thu, 25 Mar 2021 at 22:02, Braden, Albert wrote: > The CPU is no longer consumed, but the times might be a clue: > > 60 root 20 0 0 0 0 S 0.0 0.0 104:07.36 kswapd0 High kswapd0 usage: do you have enough memory on these controllers? E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From mnaser at vexxhost.com Fri Mar 26 14:31:57 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 26 Mar 2021 10:31:57 -0400 Subject: [tc] New Chair Message-ID: Hi everyone, I'd like to introduce/congratulate/welcome Ghanshyam as the new chair of our technical committee. I'm happy to see them step up to this role and I think they're an excellent candidate from their hard work within the community. Thanks for having me as chair for the past few years, I'm still around and continue to be on the technical committee and thanks for reading my weekly updates :) Regards, Mohammed -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Fri Mar 26 14:34:03 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 26 Mar 2021 16:34:03 +0200 Subject: [TripleO] next irc meeting Tuesday Mar 30 @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 30 March at 1400 UTC in #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to highlight at https://etherpad.opendev.org/p/tripleo-meeting-items This can be recently completed things, ongoing review requests, blocking issues, or anything else tripleo you want to share. Our last meeting was on Mar 16 - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-03-16-14.00.html Hope you can make it on Tuesday, regards, marios From gmann at ghanshyammann.com Fri Mar 26 16:49:16 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 26 Mar 2021 11:49:16 -0500 Subject: [PTLs][All] vPTG April 2021 Team Signup In-Reply-To: References: Message-ID: <1786f70a460.f5ab84601145474.6816087908324558641@ghanshyammann.com> It seems like https://ethercalc.net/oz7q0gds9zfi is down since morning (CT morning the time I checked). Did we export the last state or any idea how we can bring it back? Fungi mentioned on IRC that this is not ethercalc service opendev runs so nothing opendev team can do about this. -gmann ---- On Mon, 22 Mar 2021 13:22:27 -0500 Kendall Nelson wrote ---- > Hey Everyone! > Friendly reminder that the deadline to sign your team up for the upcoming PTG is this Thursday, March 25 at 7:00 UTC. To signup your team, you must complete BOTH the survey[1] AND reserve time in the ethercalc[2]. > Once your team is signed up, please register! And remind your team to register! Registration is free, but since it is how we contact you with passwords, event details, etc. it is important! > Continue to check back for updates at openstack.org/ptg. > -the Kendalls (diablo_rojo & wendallkaters) > > [1] Team Survey: https://openinfrafoundation.formstack.com/forms/april2021_vptg_survey[2] Ethercalc Signup: https://ethercalc.net/oz7q0gds9zfi[3] PTG Registration: https://april2021-ptg.eventbrite.com > On Mon, Mar 8, 2021 at 10:08 AM Kendall Nelson wrote: > Greetings! > As you hopefully already know, our next PTG will be virtual again, and held from Monday, April 19 to Friday, April 23. We will have the same schedule set up available as last time with three windows of time spread across the day to cover all timezones with breaks in between. > To signup your team, you must complete BOTH the survey[1] AND reserve time in the ethercalc[2] by March 25 at 7:00 UTC. > We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions in with 4 rules/guidelines. > 1. Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first.2. No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. > Again, you need to fill out BOTH the ethercalc AND the survey to complete your team's sign up. > If you have any issues with signing up your team, due to conflict or otherwise, please let me know! 
While we are trying to empower you to make your own decisions as to when you meet and for how long (after all, you know your needs and teams timezones better than we do), we are here to help! > Once your team is signed up, please register! And remind your team to register! Registration is free, but since it will be how we contact you with passwords, event details, etc. it is still important! > Continue to check back for updates at openstack.org/ptg. > -the Kendalls (diablo_rojo & wendallkaters) > > [1] Team Survey: https://openinfrafoundation.formstack.com/forms/april2021_vptg_survey[2] Ethercalc Signup: https://ethercalc.net/oz7q0gds9zfi[3] PTG Registration: https://april2021-ptg.eventbrite.com > From soumplis at admin.grnet.gr Fri Mar 26 17:42:15 2021 From: soumplis at admin.grnet.gr (Alexandros Soumplis) Date: Fri, 26 Mar 2021 19:42:15 +0200 Subject: [kolla-ansible] Inventory group vars Message-ID: Hi all, I am trying to define a couple of variables in the inventory file for a specific group and I do something like the following in the inventory: (...) [compute] comp[1:5] [compute:vars] neutron_external_interface=bond0.100, bond0.101 neutron_bridge_name=br-ex,br-ex2 (...) This is a valid ansible setup, however with kolla-ansible these variables are totally ignored. Any suggestion ? a. From DHilsbos at performair.com Fri Mar 26 17:42:51 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 26 Mar 2021 17:42:51 +0000 Subject: [ops][victoria] Windows Server 2012 R2 - Can't install - unbootable Message-ID: <0670B960225633449A24709C291A52524FBACB34@COM01.performair.local> All; We have a recently built Victoria cluster with storage provided by a Ceph Nautilus cluster. I'm working through the following tutorial: https://medium.com/@yooniks9/openstack-install-windows-server-2016-2019-with-image-iso-d8c17c8cfc36 When I get to step 4 under "Windows Server Installation," it says "Windows can't be installed on this drive." Details are: " Windows cannot be installed to this disk. This computer's hardware may not support booting to this disk. Ensure the disk's controller is enabled in the computer's BIOS menu. Any recommendations on what I'm missing here would be appreciated. Thank you, Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From fungi at yuggoth.org Fri Mar 26 17:56:37 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 26 Mar 2021 17:56:37 +0000 Subject: [PTLs][All] vPTG April 2021 Team Signup In-Reply-To: <1786f70a460.f5ab84601145474.6816087908324558641@ghanshyammann.com> References: <1786f70a460.f5ab84601145474.6816087908324558641@ghanshyammann.com> Message-ID: <20210326175636.3x5urqzjshgkv3ox@yuggoth.org> On 2021-03-26 11:49:16 -0500 (-0500), Ghanshyam Mann wrote: [...] > Did we export the last state or any idea how we can bring it back? I talked to Kendall briefly about it and she indicated she'd done an export on Monday and will do another as soon as the service (hopefully!) comes back up. > Fungi mentioned on IRC that this is not ethercalc service opendev > runs so nothing opendev team can do about this. [...] It may make sense to copy any periodic exports to an equivalent spreadsheet on ethercalc.openstack.org in case we need to announce a switch. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jungleboyj at gmail.com Fri Mar 26 18:09:45 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 26 Mar 2021 13:09:45 -0500 Subject: [tc] New Chair In-Reply-To: References: Message-ID: <001ab73a-ae0b-bfb5-fb6b-42b78ea9cd76@gmail.com> Mohammed, Thank you for chairing the TC!  Well done. Jay On 3/26/2021 9:31 AM, Mohammed Naser wrote: > Hi everyone, > > I'd like to introduce/congratulate/welcome Ghanshyam as the new chair > of our technical committee.  I'm happy to see them step up to this > role and I think they're an excellent candidate from their hard work > within the community. > > Thanks for having me as chair for the past few years, I'm still around > and continue to be on the technical committee and thanks for reading > my weekly updates :) > > Regards, > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Mar 26 18:30:28 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 26 Mar 2021 11:30:28 -0700 Subject: [PTLs][All] vPTG April 2021 Team Signup In-Reply-To: <20210326175636.3x5urqzjshgkv3ox@yuggoth.org> References: <1786f70a460.f5ab84601145474.6816087908324558641@ghanshyammann.com> <20210326175636.3x5urqzjshgkv3ox@yuggoth.org> Message-ID: Yeah I had planned to do an openstack one, but had failed to do so thus far. If the .net ethercalc doesn't come back up by the end of the day, I will make an openstack.org one and copy by Monday backup into it (which sadly will be missing anything added after that date, but better than nothing!). I am really sorry that I messed that up. I have no idea why my tab complete switched etherlcalc instances AND I didn't notice. -Kendall (diablo_rojo) On Fri, Mar 26, 2021 at 10:58 AM Jeremy Stanley wrote: > On 2021-03-26 11:49:16 -0500 (-0500), Ghanshyam Mann wrote: > [...] > > Did we export the last state or any idea how we can bring it back? > > I talked to Kendall briefly about it and she indicated she'd done an > export on Monday and will do another as soon as the service > (hopefully!) comes back up. > > > Fungi mentioned on IRC that this is not ethercalc service opendev > > runs so nothing opendev team can do about this. > [...] > > It may make sense to copy any periodic exports to an equivalent > spreadsheet on ethercalc.openstack.org in case we need to announce a > switch. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Fri Mar 26 18:59:58 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 26 Mar 2021 11:59:58 -0700 Subject: [manila] RC1 is available, No IRC meeting on 1st April 2021 Message-ID: Hello Zorillas, RC1 builds have now shipped for both manila and manila-ui, thank you all for your hard work on getting these out on time. It's now time to test these builds and learn of any release blocking bugs and fix them. Some down time is also in order between now and the PTG! In order to let folks to plan for that, I'd like to skip the IRC meeting on 1st April. In the meantime, please feel free to post your issues to #openstack-manila and to this email list. 
Thanks, Goutham From rosmaita.fossdev at gmail.com Fri Mar 26 19:13:39 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 26 Mar 2021 15:13:39 -0400 Subject: [cinder] critical os-brick fix needs immediate review In-Reply-To: <5a73dc84-89ae-6d2f-2ff8-b59e7f8ba1b7@gmail.com> References: <5a73dc84-89ae-6d2f-2ff8-b59e7f8ba1b7@gmail.com> Message-ID: <295a15cc-5e12-0432-8c36-23edbed768bb@gmail.com> On 3/25/21 5:40 PM, Brian Rosmaita wrote: > Gorka has a patch up to address a possible data loss situation: >   https://review.opendev.org/c/openstack/os-brick/+/782992 > > I spoke with the release team today, and if we can get the patch > reviewed and merged to master and then backported to stable/wallaby by > MONDAY (29 March), we can apply for a Requirements Freeze Exception so > that os-brick with this fix will be included in the OpenStack > coordinated wallaby release.  (It's a data loss issue, of course we want > the fix in the wallaby release.) The patch has merged to master. The backport to stable/wallaby is available to review: https://review.opendev.org/c/openstack/os-brick/+/783207 There's also a release note update associated with the fix: https://review.opendev.org/c/openstack/os-brick/+/783391 > > So please review at your earliest convenience. > > thanks, > brian > From rosmaita.fossdev at gmail.com Fri Mar 26 19:18:29 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 26 Mar 2021 15:18:29 -0400 Subject: [cinder] review priorities for week of 29 march Message-ID: We're keeping track of release-critical patches here: https://etherpad.opendev.org/p/cinder-wallaby-release-critical-bug-nominations The stable/wallaby branch was cut yesterday when RC-1 was released. So these must be reviewed and merged to master and then proposed as backports to stable/wallaby and reviewed and approved again. The week of 29 March is a short week for people in various parts of the world, and Monday the week after (which is final RC week) is also a holiday for some people. So even though it seems like there's a lot of time to prepare RC-2, there really isn't. Please schedule some time to do reviews early in the week. cheers, brian From raubvogel at gmail.com Fri Mar 26 19:35:13 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Fri, 26 Mar 2021 15:35:13 -0400 Subject: Cannot specify host in availability zone In-Reply-To: References: Message-ID: On Thu, Feb 4, 2021 at 1:50 PM Ruslanas Gžibovskis wrote: > > Hi Mauricio, > > Faced similar issue. Maybe my issue and idea will help you. > > First try same steps from horizon. > > Second. In cli, i used same name as hypervizor list output. > In most cases for me it is: $stackName-$roleName-$index.localdomain if forget to change localdomain value ;)) > I hate to say I do not understand how to expand it back. I take roleName is off "openstack role list", but "openstack stack list" gives me nothing. > My faced issues were related, tah one compute can be in one zone, and multiple aggregation groups in same zone. > > Note: > Interesting that you use nova blah-blah, try openstack aggregate create zonename --zone zonename > And same with openstack aggregate help see help how to add it with openstack. > > Hope this helps at least a bit. 
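
Pulling Ruslanas' hints together, a minimal sequence would look like the following (aggregate, zone and host names are only examples; the host string must match "openstack hypervisor list" exactly, domain part included):

    openstack aggregate create --zone az1 agg-az1
    openstack aggregate add host agg-az1 compute-0.localdomain
    openstack aggregate show agg-az1

    # schedule into the zone, or force a specific host (typically admin-only):
    openstack server create --availability-zone az1 ... test-vm
    openstack server create --availability-zone az1:compute-0.localdomain ... test-vm

(The image/flavor/network options are omitted above for brevity.) If the zone:host form is rejected, the usual culprit is a host string that does not match the hypervisor hostname.
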
From gmann at ghanshyammann.com Fri Mar 26 20:20:28 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 26 Mar 2021 15:20:28 -0500 Subject: [PTLs][All] vPTG April 2021 Team Signup In-Reply-To: References: <1786f70a460.f5ab84601145474.6816087908324558641@ghanshyammann.com> <20210326175636.3x5urqzjshgkv3ox@yuggoth.org> Message-ID: <17870320215.f548d2e31152134.7463033388976102263@ghanshyammann.com> ---- On Fri, 26 Mar 2021 13:30:28 -0500 Kendall Nelson wrote ---- > Yeah I had planned to do an openstack one, but had failed to do so thus far. If the .net ethercalc doesn't come back up by the end of the day, I will make an openstack.org one and copy by Monday backup into it (which sadly will be missing anything added after that date, but better than nothing!). > I am really sorry that I messed that up. I have no idea why my tab complete switched etherlcalc instances AND I didn't notice. No worry Kendall, you have been busy with a lot of things. I pinged them on Twitter. Though they are not so active there let's see if we get any updates. -gmann > -Kendall (diablo_rojo) > On Fri, Mar 26, 2021 at 10:58 AM Jeremy Stanley wrote: > On 2021-03-26 11:49:16 -0500 (-0500), Ghanshyam Mann wrote: > [...] > > Did we export the last state or any idea how we can bring it back? > > I talked to Kendall briefly about it and she indicated she'd done an > export on Monday and will do another as soon as the service > (hopefully!) comes back up. > > > Fungi mentioned on IRC that this is not ethercalc service opendev > > runs so nothing opendev team can do about this. > [...] > > It may make sense to copy any periodic exports to an equivalent > spreadsheet on ethercalc.openstack.org in case we need to announce a > switch. > -- > Jeremy Stanley > From openstack at nemebean.com Fri Mar 26 21:05:55 2021 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 26 Mar 2021 16:05:55 -0500 Subject: [Keystone][Oslo] Policy problems In-Reply-To: <20210304150937.Horde.HUlWGgiGudMxr39pvoxGJzT@webmail.unl.edu.ar> References: <20210304150937.Horde.HUlWGgiGudMxr39pvoxGJzT@webmail.unl.edu.ar> Message-ID: On 3/4/21 9:09 AM, meberhardt at unl.edu.ar wrote: > Hi, > > my installation complains about deprecated policies and throw errors > when I try to run spacific commands in cli (list projects or list users, > for example). > > "You are not authorized to perform the requested action: > identity:list_users." > > I tried to fix this by upgrading the keystone policies using > oslopolicy-policy-generator and oslopolicy-policy-upgrade. Just found > two places where I have those keystone policy files in my sistem: > /etc/openstack_dashboard/keystone_policy.json and cd > /lib/python3.6/site-packages/openstack_auth/tests/conf/keystone_policy.json Neither of those files are used by Keystone itself. You'll want to create an updated policy.yaml (we strongly recommend YAML over JSON these days) in /etc/keystone to get rid of the deprecation warnings. > > I regenerated & upgraded it but Keystone still complains about the old > polices. ¿Where are it placed? ¿How I shoud fix it? > > OS: CentOS 8 > Openstack version: Ussuri > Manual installation > > Thanks, > > Matias > > /* --------------------------------------------------------------- */ > /*                      Matías A. 
Eberhardt                        */ > /*                                                                 */ > /*                     Centro de Telemática                        */ > /*                      Secretaría General                         */ > /*                UNIVERSIDAD NACIONAL DEL LITORAL                 */ > /*      Pje. Martínez 2652 - S3002AAB Santa Fe - Argentina         */ > /*           tel +54(342)455-4245 - FAX +54(342)457-1240           */ > /* --------------------------------------------------------------- */ > > From gmann at ghanshyammann.com Fri Mar 26 21:38:44 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 26 Mar 2021 16:38:44 -0500 Subject: [all][tc] Xena PTG Planning In-Reply-To: <17866f48e53.1285108df1037866.5958750053657313110@ghanshyammann.com> References: <17866f48e53.1285108df1037866.5958750053657313110@ghanshyammann.com> Message-ID: <1787079a72e.c94204041153293.5963890629884729625@ghanshyammann.com> ---- On Wed, 24 Mar 2021 20:16:46 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > As you already know that the Xena cycle virtual PTG will be held between 19th - 23rd April[1]. > > To plan the Technical Committee PTG planning, please do the following: > > 1. Fill the doodle poll as per your availability. Please fill it soon as we need to book the slot by > 25th March(which is tomorrow). > > - https://doodle.com/poll/2zy8hex4r6wvidqk?utm_source=poll&utm_medium=link > > 2. Add the topics you would like to discuss to the below etherpad. > > - https://etherpad.opendev.org/p/tc-xena-ptg As per the doodle poll[1] and discussion in IRC channel, Technical Committee will meet on Thursday, 22nd: 13 UTC to 15 UTC (2 hrs). Friday, 23rd: 13UTC - 17UTC (4 hrs). I will book the slots once https://ethercalc.net/oz7q0gds9zfi is back. [1] https://doodle.com/poll/5ixvdcbm488kc2ic -gmann > > NOTE: this is not limited to TC members only; I would like all community members to > fill the doodle poll and, add the topics you would like or want TC members > to discuss in PTG. > > -gmann > > From gmann at ghanshyammann.com Fri Mar 26 21:43:14 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 26 Mar 2021 16:43:14 -0500 Subject: [all][tc] Xena PTG Planning In-Reply-To: <1787079a72e.c94204041153293.5963890629884729625@ghanshyammann.com> References: <17866f48e53.1285108df1037866.5958750053657313110@ghanshyammann.com> <1787079a72e.c94204041153293.5963890629884729625@ghanshyammann.com> Message-ID: <178707dc71d.11d0dfc6d1153359.1295822777876459923@ghanshyammann.com> ---- On Fri, 26 Mar 2021 16:38:44 -0500 Ghanshyam Mann wrote ---- > > ---- On Wed, 24 Mar 2021 20:16:46 -0500 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > As you already know that the Xena cycle virtual PTG will be held between 19th - 23rd April[1]. > > > > To plan the Technical Committee PTG planning, please do the following: > > > > 1. Fill the doodle poll as per your availability. Please fill it soon as we need to book the slot by > > 25th March(which is tomorrow). > > > > - https://doodle.com/poll/2zy8hex4r6wvidqk?utm_source=poll&utm_medium=link > > > > 2. Add the topics you would like to discuss to the below etherpad. > > > > - https://etherpad.opendev.org/p/tc-xena-ptg > > > As per the doodle poll[1] and discussion in IRC channel, Technical Committee will meet on > Thursday, 22nd: 13 UTC to 15 UTC (2 hrs). Friday, 23rd: 13UTC - 17UTC (4 hrs). > > I will book the slots once https://ethercalc.net/oz7q0gds9zfi is back. 
> > [1] https://doodle.com/poll/5ixvdcbm488kc2ic Correct link for TC slots doodle poll: https://doodle.com/poll/2zy8hex4r6wvidqk?utm_source=poll&utm_medium=link > > -gmann > > > > > NOTE: this is not limited to TC members only; I would like all community members to > > fill the doodle poll and, add the topics you would like or want TC members > > to discuss in PTG. > > > > -gmann > > > > > > From openstack at nemebean.com Fri Mar 26 21:52:52 2021 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 26 Mar 2021 16:52:52 -0500 Subject: [oslo][security-sig] Please revisit your open vulnerability report In-Reply-To: <20210218191305.5psn6p3kp6tlexoq@yuggoth.org> References: <20210218144904.xeek6zwlyntm24u5@yuggoth.org> <3c103743-898f-79e1-04cc-2f97a52fece3@nemebean.com> <20210218170318.kysdpzibsqnferj5@yuggoth.org> <203cbbfd-9ca8-3f0c-83e4-6d57588103cf@nemebean.com> <20210218191305.5psn6p3kp6tlexoq@yuggoth.org> Message-ID: <53ba75c8-dd82-c470-e564-d4dedfb5090a@nemebean.com> Finally got back to this. More below. On 2/18/21 1:13 PM, Jeremy Stanley wrote: > On 2021-02-18 12:39:52 -0600 (-0600), Ben Nemec wrote: > [...] >> Okay, I did that. I think we may need to audit all of the Oslo projects >> because the spot check I did on oslo.policy also did not have the needed >> sharing, and did have someone who doesn't even work on OpenStack anymore >> with access to private security bugs(!). I don't appear to have permission >> to change that either. :-/ > > Aha, thanks, that explains why the VMT members wouldn't have been > notified (or even able to see the bug at all). > > If you put together a list of which ones need fixing, I think I have > a backdoor via being a member of the group which is the owner of the > groups which are listed as maintainer or owner of many of those > projects, so should be able to temporarily add myself to a group > which has access to adjust the sharing on them. Also at the moment, > the only Oslo deliverables which are listed as having explicit VMT > oversight are castellan and oslo.config. If there are others you > want our proactive help with, please add this tag to them: > > https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html I'll bring up VMT again with the Oslo team. I know it came up a few years ago, but I can't remember why it never happened. Probably I just never followed up. I have added the openstack-vuln-mgmt team to most of the Oslo projects. I apparently don't have permission to change settings in oslo.policy, oslo.windows, and taskflow, so I will need help with that. After going through all of the projects, my guess is that the individual people who have access to the private security bugs are the ones who created the project in the first place. I guess that's fine, but there's an argument to be made that some of those should be cleaned up too. I also noticed that oslo-coresec is not listed in most of the projects. Is there any sort of global setting that should give coresec memebers access to private security bugs, or do I need to add that to each project? From missile0407 at gmail.com Fri Mar 26 23:11:02 2021 From: missile0407 at gmail.com (Eddie Yen) Date: Sat, 27 Mar 2021 07:11:02 +0800 Subject: [kolla-ansible] Inventory group vars In-Reply-To: References: Message-ID: Hi Alexandros, In the [compute] area, it's correct. 
But for define what physical interface should be use for each nodes, you should set like below: [compute] comp[1:5] neutron_external_interface=bond0.100,bond.101 And I'm not sure if "neutron_bridge_name" can be set behind hostname. For me I usually put into globals.yml. Alexandros Soumplis 於 2021年3月27日 週六 上午1:49寫道: > Hi all, > > I am trying to define a couple of variables in the inventory file for a > specific group and I do something like the following in the inventory: > > (...) > [compute] > comp[1:5] > > [compute:vars] > neutron_external_interface=bond0.100, bond0.101 > neutron_bridge_name=br-ex,br-ex2 > > (...) > > This is a valid ansible setup, however with kolla-ansible these > variables are totally ignored. Any suggestion ? > > > a. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricolin at ricolky.com Sat Mar 27 03:57:11 2021 From: ricolin at ricolky.com (Rico Lin) Date: Sat, 27 Mar 2021 11:57:11 +0800 Subject: [tc] New Chair In-Reply-To: <001ab73a-ae0b-bfb5-fb6b-42b78ea9cd76@gmail.com> References: <001ab73a-ae0b-bfb5-fb6b-42b78ea9cd76@gmail.com> Message-ID: Mohammed, Thank you for chairing the TC. I think you doing a great job! Jay Bryant 於 2021年3月27日 週六,上午2:13寫道: > Mohammed, > > Thank you for chairing the TC! Well done. > > Jay > On 3/26/2021 9:31 AM, Mohammed Naser wrote: > > Hi everyone, > > I'd like to introduce/congratulate/welcome Ghanshyam as the new chair of > our technical committee. I'm happy to see them step up to this role and I > think they're an excellent candidate from their hard work within the > community. > > Thanks for having me as chair for the past few years, I'm still around and > continue to be on the technical committee and thanks for reading my weekly > updates :) > > Regards, > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. > > -- *Rico Lin* OIF Board director, OpenSack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack *Email: ricolin at ricolky.com * *Phone: +886-963-612-021* -------------- next part -------------- An HTML attachment was scrubbed... URL: From soumplis at admin.grnet.gr Sat Mar 27 06:48:02 2021 From: soumplis at admin.grnet.gr (Alexandros Soumplis) Date: Sat, 27 Mar 2021 08:48:02 +0200 Subject: [kolla-ansible] Inventory group vars In-Reply-To: References: Message-ID: Hi Eddie, If I define them in the hostname it works perfectly fine. I cannot use the globals.yml because there are different bridges per group (ex. control plane servers do not the provide network bridges). The problem with the host approach is that it makes the inventory file very difficult to parse with external scripts. If the typical ansible :vars section worked as expected it would make any parsing much easier with crudini :) a. On 27/3/21 1:11 π.μ., Eddie Yen wrote: > Hi Alexandros, > > In the [compute] area, it's correct. > But for define what physical interface should be use for each > nodes, you should set like below: > > [compute] > comp[1:5] neutron_external_interface=bond0.100,bond.101 > > And I'm not sure if "neutron_bridge_name" can be set behind > hostname. For me I usually put into globals.yml. > > Alexandros Soumplis > 於 2021年3月27日 週六 上午1:49寫道: > > Hi all, > > I am trying to define a couple of variables in the inventory file > for a > specific group and I do something like the following in the inventory: > > (...) > [compute] > comp[1:5] > > [compute:vars] > neutron_external_interface=bond0.100, bond0.101 > neutron_bridge_name=br-ex,br-ex2 > > (...) 
> > This is a valid ansible setup, however with kolla-ansible these > variables are totally ignored. Any suggestion ? > > > a. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3620 bytes Desc: S/MIME Cryptographic Signature URL: From radoslaw.piliszek at gmail.com Sat Mar 27 09:01:40 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 27 Mar 2021 10:01:40 +0100 Subject: [kolla-ansible] Inventory group vars In-Reply-To: References: Message-ID: On Fri, Mar 26, 2021 at 6:43 PM Alexandros Soumplis wrote: > > Hi all, Hi Alexandros, > I am trying to define a couple of variables in the inventory file for a > specific group and I do something like the following in the inventory: > > (...) > [compute] > comp[1:5] > > [compute:vars] > neutron_external_interface=bond0.100, bond0.101 > neutron_bridge_name=br-ex,br-ex2 > > (...) > > This is a valid ansible setup, however with kolla-ansible these > variables are totally ignored. Any suggestion ? That's interesting. Kolla Ansible does not do any magic at this level so it's basically up to Ansible. Perhaps there is a slight quirk somewhere. Maybe you have compute:vars more than once? And override yourself? Group vars work for me. It's good practice to use them. You might also be interested in [1] where it shows how to organise the vars in yaml files per group. [1] https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#organizing-host-and-group-variables -yoctozepto From missile0407 at gmail.com Sat Mar 27 13:22:53 2021 From: missile0407 at gmail.com (Eddie Yen) Date: Sat, 27 Mar 2021 21:22:53 +0800 Subject: [kolla-ansible] Inventory group vars In-Reply-To: References: Message-ID: I see. Perhaps it will be better if you can provide an error msg shared on pastebin? Alexandros Soumplis 於 2021年3月27日 週六 下午2:56寫道: > Hi Eddie, > > If I define them in the hostname it works perfectly fine. I cannot use the > globals.yml because there are different bridges per group (ex. control > plane servers do not the provide network bridges). > > The problem with the host approach is that it makes the inventory file > very difficult to parse with external scripts. If the typical ansible :vars > section worked as expected it would make any parsing much easier with > crudini :) > > > a. > > > > On 27/3/21 1:11 π.μ., Eddie Yen wrote: > > Hi Alexandros, > > In the [compute] area, it's correct. > But for define what physical interface should be use for each > nodes, you should set like below: > > [compute] > comp[1:5] neutron_external_interface=bond0.100,bond.101 > > And I'm not sure if "neutron_bridge_name" can be set behind > hostname. For me I usually put into globals.yml. > > Alexandros Soumplis 於 2021年3月27日 週六 上午1:49寫道: > >> Hi all, >> >> I am trying to define a couple of variables in the inventory file for a >> specific group and I do something like the following in the inventory: >> >> (...) >> [compute] >> comp[1:5] >> >> [compute:vars] >> neutron_external_interface=bond0.100, bond0.101 >> neutron_bridge_name=br-ex,br-ex2 >> >> (...) >> >> This is a valid ansible setup, however with kolla-ansible these >> variables are totally ignored. Any suggestion ? >> >> >> a. >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Sat Mar 27 17:37:28 2021 From: mark at stackhpc.com (Mark Goddard) Date: Sat, 27 Mar 2021 17:37:28 +0000 Subject: [kolla-ansible] Inventory group vars In-Reply-To: References: Message-ID: On Sat, 27 Mar 2021 at 06:50, Alexandros Soumplis wrote: > > Hi Eddie, > > If I define them in the hostname it works perfectly fine. I cannot use the globals.yml because there are different bridges per group (ex. control plane servers do not the provide network bridges). > > The problem with the host approach is that it makes the inventory file very difficult to parse with external scripts. If the typical ansible :vars section worked as expected it would make any parsing much easier with crudini :) Hi Alexandros, There's some kolla-specific info here: https://docs.openstack.org/kolla-ansible/latest/user/multinode.html#host-and-group-variables Essentially, the problem is that we define various default values in a group_vars/all.yml file next to our playbooks, and this has a higher precedence than an inventory file: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#understanding-variable-precedence. Try using a group_vars/compute file, next to your inventory file. Mark > > > a. > > > > On 27/3/21 1:11 π.μ., Eddie Yen wrote: > > Hi Alexandros, > > In the [compute] area, it's correct. > But for define what physical interface should be use for each > nodes, you should set like below: > > [compute] > comp[1:5] neutron_external_interface=bond0.100,bond.101 > > And I'm not sure if "neutron_bridge_name" can be set behind > hostname. For me I usually put into globals.yml. > > Alexandros Soumplis 於 2021年3月27日 週六 上午1:49寫道: >> >> Hi all, >> >> I am trying to define a couple of variables in the inventory file for a >> specific group and I do something like the following in the inventory: >> >> (...) >> [compute] >> comp[1:5] >> >> [compute:vars] >> neutron_external_interface=bond0.100, bond0.101 >> neutron_bridge_name=br-ex,br-ex2 >> >> (...) >> >> This is a valid ansible setup, however with kolla-ansible these >> variables are totally ignored. Any suggestion ? >> >> >> a. >> >> >> From soumplis at admin.grnet.gr Sat Mar 27 18:41:04 2021 From: soumplis at admin.grnet.gr (Alexandros Soumplis) Date: Sat, 27 Mar 2021 20:41:04 +0200 Subject: [kolla-ansible] Inventory group vars In-Reply-To: References: Message-ID: <8d1e902a-5b31-4096-7ad2-c54b935e15c6@admin.grnet.gr> I really don't know how I have missed that part in the documentation, I've read through it numerous times :) In any case, it works as expected with the group_vars/compute file. Thank you all, for you help! a. On 27/3/21 7:37 μ.μ., Mark Goddard wrote: > On Sat, 27 Mar 2021 at 06:50, Alexandros Soumplis > wrote: >> Hi Eddie, >> >> If I define them in the hostname it works perfectly fine. I cannot use the globals.yml because there are different bridges per group (ex. control plane servers do not the provide network bridges). >> >> The problem with the host approach is that it makes the inventory file very difficult to parse with external scripts. 
If the typical ansible :vars section worked as expected it would make any parsing much easier with crudini :) > Hi Alexandros, > > There's some kolla-specific info here: > https://docs.openstack.org/kolla-ansible/latest/user/multinode.html#host-and-group-variables > > Essentially, the problem is that we define various default values in a > group_vars/all.yml file next to our playbooks, and this has a higher > precedence than an inventory file: > https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#understanding-variable-precedence. > > Try using a group_vars/compute file, next to your inventory file. > > Mark > >> >> a. >> >> >> >> On 27/3/21 1:11 π.μ., Eddie Yen wrote: >> >> Hi Alexandros, >> >> In the [compute] area, it's correct. >> But for define what physical interface should be use for each >> nodes, you should set like below: >> >> [compute] >> comp[1:5] neutron_external_interface=bond0.100,bond.101 >> >> And I'm not sure if "neutron_bridge_name" can be set behind >> hostname. For me I usually put into globals.yml. >> >> Alexandros Soumplis 於 2021年3月27日 週六 上午1:49寫道: >>> Hi all, >>> >>> I am trying to define a couple of variables in the inventory file for a >>> specific group and I do something like the following in the inventory: >>> >>> (...) >>> [compute] >>> comp[1:5] >>> >>> [compute:vars] >>> neutron_external_interface=bond0.100, bond0.101 >>> neutron_bridge_name=br-ex,br-ex2 >>> >>> (...) >>> >>> This is a valid ansible setup, however with kolla-ansible these >>> variables are totally ignored. Any suggestion ? >>> >>> >>> a. >>> >>> >>> -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3620 bytes Desc: S/MIME Cryptographic Signature URL: From ricolin at ricolky.com Mon Mar 29 04:01:31 2021 From: ricolin at ricolky.com (Rico Lin) Date: Mon, 29 Mar 2021 12:01:31 +0800 Subject: [Containers SIG][tc]Is any one interested on Containers SIG? Or should we retire Containers SIG? In-Reply-To: <20210325135103.y5wtadzdl4q3pmu3@yuggoth.org> References: <20210325135103.y5wtadzdl4q3pmu3@yuggoth.org> Message-ID: To whom might care, We officially retired Containers SIG now [1]. If you hope to restart this SIG or related effort, it's possible to resume this SIG, have a new pop-up team (as Jeremy mentioned), or other possible formats you think it might be. Feel free to contact me or any TC members if you need help on these matters. [1] https://review.opendev.org/c/openstack/governance-sigs/+/783001/3 On Thu, Mar 25, 2021 at 9:56 PM Jeremy Stanley wrote: > > On 2021-03-25 21:28:29 +0800 (+0800), Rico Lin wrote: > [...] > > IMO, The goal from Containers SIG definitely provide great value > > to Openstack community because we didn't have much work on > > checking whether we reach great container support or not. Hance > > don't really wish it got archived and retired. Still if no one > > interested enough and have time to join and chair the SIG, we have > > no option but to move it to retired state ("archived"). > [...] > > If there's a set goal for the group, and you can define what you > think the end state would be like for "great container support" in > OpenStack, then this may be better suited as a pop-up team (that > model didn't actually exist when this SIG was originally created). > Still, if there's nobody interested in working toward that goal, I > agree there's not much point in keeping the group defined either as > a SIG or pop-up. 
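
Back on the kolla-ansible inventory thread above: the group_vars approach is just a plain Ansible group variables file next to the inventory, e.g. group_vars/compute with content along these lines (interface and bridge names taken from the original question, so treat them as illustrative):

    ---
    neutron_external_interface: "bond0.100,bond0.101"
    neutron_bridge_name: "br-ex,br-ex2"

Inventory-adjacent group_vars for a specific group take precedence over the group_vars/all.yml shipped next to the Kolla playbooks, which is why this works where the [compute:vars] section did not. It is probably also safer to write the comma-separated values without spaces, as in Eddie's example.
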
> > https://governance.openstack.org/tc/reference/popup-teams.html > > -- > Jeremy Stanley *Rico Lin* OIF Board director, OpenSack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack *Email: ricolin at ricolky.com * *Phone: +886-963-612-021* -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Mar 29 05:05:04 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Sun, 28 Mar 2021 22:05:04 -0700 Subject: [tc] New Chair In-Reply-To: References: Message-ID: Congratulations Ghanshyam! I look forward to working with you in this new capacity! Thank you so much for all the hard work and tireless cat herding you've done, Mohammed :) -Kendall (diablo_rojo) On Fri, Mar 26, 2021 at 7:33 AM Mohammed Naser wrote: > Hi everyone, > > I'd like to introduce/congratulate/welcome Ghanshyam as the new chair of > our technical committee. I'm happy to see them step up to this role and I > think they're an excellent candidate from their hard work within the > community. > > Thanks for having me as chair for the past few years, I'm still around and > continue to be on the technical committee and thanks for reading my weekly > updates :) > > Regards, > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Mar 29 05:12:20 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Sun, 28 Mar 2021 22:12:20 -0700 Subject: [all][tc] Xena PTG Planning In-Reply-To: <178707dc71d.11d0dfc6d1153359.1295822777876459923@ghanshyammann.com> References: <17866f48e53.1285108df1037866.5958750053657313110@ghanshyammann.com> <1787079a72e.c94204041153293.5963890629884729625@ghanshyammann.com> <178707dc71d.11d0dfc6d1153359.1295822777876459923@ghanshyammann.com> Message-ID: Super late answering, but I submitted now. Also, looks like the ethercalc is back up. -Kendall (diablo_rojo) On Fri, Mar 26, 2021 at 2:44 PM Ghanshyam Mann wrote: > ---- On Fri, 26 Mar 2021 16:38:44 -0500 Ghanshyam Mann < > gmann at ghanshyammann.com> wrote ---- > > > > ---- On Wed, 24 Mar 2021 20:16:46 -0500 Ghanshyam Mann < > gmann at ghanshyammann.com> wrote ---- > > > Hello Everyone, > > > > > > As you already know that the Xena cycle virtual PTG will be held > between 19th - 23rd April[1]. > > > > > > To plan the Technical Committee PTG planning, please do the > following: > > > > > > 1. Fill the doodle poll as per your availability. Please fill it > soon as we need to book the slot by > > > 25th March(which is tomorrow). > > > > > > - > https://doodle.com/poll/2zy8hex4r6wvidqk?utm_source=poll&utm_medium=link > > > > > > 2. Add the topics you would like to discuss to the below etherpad. > > > > > > - https://etherpad.opendev.org/p/tc-xena-ptg > > > > > > As per the doodle poll[1] and discussion in IRC channel, Technical > Committee will meet on > > Thursday, 22nd: 13 UTC to 15 UTC (2 hrs). Friday, 23rd: 13UTC - 17UTC > (4 hrs). > > > > I will book the slots once https://ethercalc.net/oz7q0gds9zfi is back. > > > > [1] https://doodle.com/poll/5ixvdcbm488kc2ic > > Correct link for TC slots doodle poll: > https://doodle.com/poll/2zy8hex4r6wvidqk?utm_source=poll&utm_medium=link > > > > > -gmann > > > > > > > > NOTE: this is not limited to TC members only; I would like all > community members to > > > fill the doodle poll and, add the topics you would like or want TC > members > > > to discuss in PTG. 
> > > > > > -gmann > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Mon Mar 29 06:54:28 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Mon, 29 Mar 2021 08:54:28 +0200 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: <2ca7b032-dc3a-2648-a08c-1df8cc7c0542@inovex.de> References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de> <5C651C9C-0D00-4CB8-9992-4AC23D92FE38@gmail.com> <2a893395-8af6-5fdf-cf5f-303b8bb1394b@inovex.de> <095A8B69-9EDC-4AF7-90AC-D16B0C484361@gmail.com> <2ca7b032-dc3a-2648-a08c-1df8cc7c0542@inovex.de> Message-ID: Hello Artem, all, On 10/03/2021 15:21, Christian Rohmann wrote: > Hey Artem, > > On 09/03/2021 14:23, Artem Goncharov wrote: >> This is just a tiny subset of OpenStack resources with no possible >> flexibility vs native implementation in SDK/CLI > > I totally agree. I was just saying that by not having a cleanup > approach provided by OpenStack there will just be more and more tools > popping up, > each solving the same issue ... and never being tested and updated > along with OpenStack releases and new project / resources being added. > > So thanks again for your work on > https://review.opendev.org/c/openstack/python-openstackclient/+/734485 ! With your change now merged I believe there should be a list of missing features / projects not yet integrated. Maybe even a blueprint (for others) to pick up an and to continue the path to provide a full solution for project cleanup. Something apparently not on this list yet is object storage, so Swift / RADOS GW (S3), .. Swift documents something called the account reaper (https://docs.openstack.org/swift/victoria/overview_reaper.html), but I don't know how that fits into the picture. As for S3 powered by i.e. RADOS GW I don't know if there are any standard-tools yet. Regards and thanks again for cleaning up the trash ;-) Christian From missile0407 at gmail.com Mon Mar 29 07:09:11 2021 From: missile0407 at gmail.com (Eddie Yen) Date: Mon, 29 Mar 2021 15:09:11 +0800 Subject: [ops][victoria] Windows Server 2012 R2 - Can't install - unbootable In-Reply-To: <0670B960225633449A24709C291A52524FBACB34@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBACB34@COM01.performair.local> Message-ID: Hi, If you checked that bootable volume has selected when creating volume, then you can try press 'Next' button if it's not grey. But IOE, if you're going to make this image as public after build, note that it may become a problem because you can't modify visibility for image uploaded from volume. Also, we don't know if Ceph storage method changed in Victoria or not. But in Rocky, upload volume to image will just create a RBD snapshot from volume in Ceph layer. That means you can't delete that volume unless delete the child image first in the future. Because the above reason, we still suggest using Linux KVM to build Windows image. 於 2021年3月27日 週六 上午1:49寫道: > All; > > We have a recently built Victoria cluster with storage provided by a Ceph > Nautilus cluster. 
> > I'm working through the following tutorial: > > https://medium.com/@yooniks9/openstack-install-windows-server-2016-2019-with-image-iso-d8c17c8cfc36 > > When I get to step 4 under "Windows Server Installation," it says "Windows > can't be installed on this drive." Details are: " Windows cannot be > installed to this disk. This computer's hardware may not support booting > to this disk. Ensure the disk's controller is enabled in the computer's > BIOS menu. > > Any recommendations on what I'm missing here would be appreciated. > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Mon Mar 29 07:32:02 2021 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 29 Mar 2021 09:32:02 +0200 Subject: [neutron] Bug deputy report for week of March 22 Message-ID: Hi, I was Neutron bug deputy for the previous week, please read a short summary of the reported bugs. Lajos - Critical - "test_add_remove_fixed_ip" faling in "grenade-dvr-multinode" CI job - https://bugs.launchpad.net/neutron/+bug/1920778 - grenade-dvr-multinode job is made non-voting - proposals to fix it: https://review.opendev.org/c/openstack/neutron/+/782677 & https://review.opendev.org/c/openstack/neutron/+/782690 - os.kill(SIGTERM) does not finish and timeouts - https://bugs.launchpad.net/neutron/+bug/1921154 - If I recall correctly the os.kill thing was reverted and decided to come back later to it: https://review.opendev.org/c/openstack/neutron/+/782972 - High Bugs - When OVS restart, some flows will be missing in the br-int - https://bugs.launchpad.net/neutron/+bug/1920700 - Merged / released - [OVN] Stale Logical Router Port entry fail to be removed by maintenance task - https://bugs.launchpad.net/neutron/+bug/1920968 - Merged / released - Rally test NeutronNetworks.create_and_update_subnets fails - https://bugs.launchpad.net/neutron/+bug/1920923 - In progress ( https://review.opendev.org/c/openstack/neutron/+/782587 ) - ovn: DVR on VLAN networks does not work - https://bugs.launchpad.net/neutron/+bug/1920976 - Merged / released - [QoS min bw] repeated ERROR log: Unable to save resource provider ... 
because: re-parenting a provider is not currently allowed - https://bugs.launchpad.net/neutron/+bug/1921150 - In progress: https://review.opendev.org/c/openstack/neutron/+/782553 - Medium Bugs - "rpc_response_max_timeout" configuration variable not present in metadata-agent - https://bugs.launchpad.net/neutron/+bug/1920842 - Merge / releases - neutron dvr should lower proxy_delay when using proxy_arp - https://bugs.launchpad.net/neutron/+bug/1920975 - In progress ( https://review.opendev.org/c/openstack/neutron/+/782570 ) - ml2/ovn should not be calling neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api - https://bugs.launchpad.net/neutron/+bug/1921491 - Unassigned - VPNaaS strongSwan driver does not reload secrets - https://bugs.launchpad.net/neutron/+bug/1921514 - In progress in neutron-vpnaas: https://review.opendev.org/c/openstack/neutron-vpnaas/+/783331 - Low - RFE/Whislist - [RFE] Allow explicit management of default routes - https://bugs.launchpad.net/neutron/+bug/1921126 - approved - [RFE] Enhancement to Neutron BGPaaS to directly support Neutron Routers & bgp-peering from such routers over internal & external Neutron Networks - https://bugs.launchpad.net/neutron/+bug/1921461 - Approved - - Invalid - neutron-server ovsdbapp timeout exceptions after intermittent connectivity issues - https://bugs.launchpad.net/neutron/+bug/1921085 - Designate PTR record creation results in in-addr.arpa. zone owned by invalid project ID - https://bugs.launchpad.net/neutron/+bug/1921414 - 'Table 'ovn_revision_numbers' is already defined for this MetaData instance - https://bugs.launchpad.net/neutron/+bug/1921577 -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Mar 29 08:31:45 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 29 Mar 2021 10:31:45 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release Message-ID: <59893229.6jtkhXVcMD@p1> Hi, We have opened release patch for networking-midonet [1] but our concern about that project is that its gate is completly broken since some time thus we don't really know if the project is still working and valid to be released. In Wallaby cycle Neutron for example finished transition to the engine facade, and patch to adjust that in networking-midonet is still opened [2] (and red as there were some unrelated issues with most of the jobs there). In the past we had discussion about networking-midonet project and it's status as the official Neutron stadium project. Then some new folks stepped in to maintain it but now it seems a bit like (again) it lacks of maintainers. I know that it is very late in the cycle now so my question to the TC and release teams is: should we release stable/wallaby with its current state, even if it's broken or should we maybe don't release it at all until its gate will be up and running? [1] https://review.opendev.org/c/openstack/releases/+/781713 [2] https://review.opendev.org/c/openstack/networking-midonet/+/770797 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From bshewale at redhat.com Mon Mar 29 08:35:17 2021 From: bshewale at redhat.com (Bhagyashri Shewale) Date: Mon, 29 Mar 2021 14:05:17 +0530 Subject: [tripleo] TripleO CI Summary: Unified Sprint 41 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 41** (Feb 25 thru Mar 17 2021). The following is a summary of completed work during this sprint cycle: - Successfully deployed promoter server for all c8 and c7 releases using new code (next gen) promoter code - Continue to adopt new changes and improvements in the promoter : - https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/32007 - https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/32069 - https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/32058 - Reduce our upstream resource usage with zuul jobs including optimizations across the zuul layouts in all the tripleo-* repos - https://review.opendev.org/q/topic:tripleo-ci-reduce - Cirros image setup more resilience to infra setup to avoid failures: - Tempest conf image retry: https://review.opendev.org/c/osf/python-tempestconf/+/775195 - Usage of local cirros image: https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/778157 - Basically, the tag was wrong, and the copy and tagging was one single operation, now the copy command is copying every hour all the hashes available and tag is running separately now collecting the current-tripleo tag directly from the delorean api. - copy-quay script: https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/32478 - Migrations from CentOS-8 to Centos-8-stream: These are the reviews: https://hackmd.io/9Xve-rYpRaKbk5NMe7kukw - Modification of the private_apis of scenario manager: - Following apis have been made public by this commit as the tempest.scenario.manager interface is meant to be consumed by tempest plugins. - https://review.opendev.org/c/openstack/tempest/+/777089 - https://review.opendev.org/c/openstack/tempest/+/778697 - https://review.opendev.org/c/openstack/tempest/+/778698 - https://review.opendev.org/c/openstack/tempest/+/77735 - Design Upstream and downstream dependency pipeline dashboard: - Added unit tests for telegraf zuul v3 queue status script : https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/31948 - RPM dependency pipeline dashboard : https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/32477/ - Created the design document for tripleo-repos - https://hackmd.io/v2jCX9RwSeuP8EEFDHRa8g?view - https://review.opendev.org/c/openstack/tripleo-specs/+/772442 The planned work for the next sprint and leftover previous sprint work as following: - Deploy the promoter server for upstream promotions on vexxhost - Component / integration rdo pipelines for wallaby/ wallaby branching - Content provider jobs across all branches - Removing Rocky jobs (EOL) - elastic-recheck containerization - https://hackmd.io/dmxF-brbS-yg7tkFB_kxXQ - Openstack health for tripleo - https://hackmd.io/HQ5hyGAOSuG44Le2x6YzUw - Tripleo-repos spec and implementation - https://hackmd.io/v2jCX9RwSeuP8EEFDHRa8g?view - https://review.opendev.org/c/openstack/tripleo-specs/+/772442 - Tempest skiplist: Started to work on tempest-skiplist allowed list of tests (no patches yet) - Patrole stable release- Started to work on it The Ruck and Rover for this sprint are Chandan Kumar (chkumar) and Sorin (zbr). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in hackmd. 
Thanks, Bhagyashri Shewale -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Mar 29 09:52:59 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 29 Mar 2021 11:52:59 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <59893229.6jtkhXVcMD@p1> References: <59893229.6jtkhXVcMD@p1> Message-ID: Hello, The main question is, does the previous Victoria version [1] will be compatible with the latest neutron changes and with the latest engine facade introduced during Wallaby? Releasing an unfixed engine facade code is useless, so we shouldn't release a new version of networking-midonet, because the project code won't be compatible with the rest of our projects (AFAIK neutron), unless, the previous version will not compatible either, and, unless, not releasing a Wallaby version leave the project branch uncut and so leave the corresponding series unmaintainable, and so unfixable a posteriori. If we do not release a new version then we will use a previous version of networking-midonet. This version will be the last Victoria version [1]. I suppose that this version (the victoria version) isn't compatible with the new facade engine either, isn't it? So release or not release a new version won't solve the facade engine problem, isn't? You said that neutron evolved and networking-midonet didn't, hence even if we release networking-midonet in the current state it will fail too, isn't it? However, releasing a new version and branching on it can give you the needed maintenance window to allow you to fix the issue later, when your gates will be fixed and then patches backported. git tags are cheap. We should notice that since Victoria some patches have been merged in Wallaby so even if they aren't ground breaking changes they are changes that it is worth to release. >From a release point of view I think it's worth it to release a new version and to cut Wallaby. We are close to the it's deadline. That will land the available delta between Victoria and Wallaby. That will allow to fix the engine facade by opening a maintenance window. If the project is still lacking maintainers in a few weeks / months, this will allow a more smooth deprecation of this one. Thoughts? [1] https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski a écrit : > Hi, > > We have opened release patch for networking-midonet [1] but our concern > about > that project is that its gate is completly broken since some time thus we > don't really know if the project is still working and valid to be released. > In Wallaby cycle Neutron for example finished transition to the engine > facade, > and patch to adjust that in networking-midonet is still opened [2] (and > red as > there were some unrelated issues with most of the jobs there). > > In the past we had discussion about networking-midonet project and it's > status > as the official Neutron stadium project. Then some new folks stepped in to > maintain it but now it seems a bit like (again) it lacks of maintainers. > I know that it is very late in the cycle now so my question to the TC and > release teams is: should we release stable/wallaby with its current state, > even if it's broken or should we maybe don't release it at all until its > gate > will be up and running? 
> > [1] https://review.opendev.org/c/openstack/releases/+/781713 > [2] https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Mar 29 10:02:31 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 29 Mar 2021 12:02:31 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: References: <59893229.6jtkhXVcMD@p1> Message-ID: Only one comment at the moment. MidoNet itself looks dead as well: https://github.com/midonet/midonet It might make sense to announce that and finally move away. That said, I'm also interested in answers to Hervé's questions below. -yoctozepto On Mon, Mar 29, 2021 at 11:55 AM Herve Beraud wrote: > > Hello, > > The main question is, does the previous Victoria version [1] will be compatible with the latest neutron changes and with the latest engine facade introduced during Wallaby? > > Releasing an unfixed engine facade code is useless, so we shouldn't release a new version of networking-midonet, because the project code won't be compatible with the rest of our projects (AFAIK neutron), unless, the previous version will not compatible either, and, unless, not releasing a Wallaby version leave the project branch uncut and so leave the corresponding series unmaintainable, and so unfixable a posteriori. > > If we do not release a new version then we will use a previous version of networking-midonet. This version will be the last Victoria version [1]. > > I suppose that this version (the victoria version) isn't compatible with the new facade engine either, isn't it? > > So release or not release a new version won't solve the facade engine problem, isn't? > > You said that neutron evolved and networking-midonet didn't, hence even if we release networking-midonet in the current state it will fail too, isn't it? > > However, releasing a new version and branching on it can give you the needed maintenance window to allow you to fix the issue later, when your gates will be fixed and then patches backported. git tags are cheap. > > We should notice that since Victoria some patches have been merged in Wallaby so even if they aren't ground breaking changes they are changes that it is worth to release. > > From a release point of view I think it's worth it to release a new version and to cut Wallaby. We are close to the it's deadline. That will land the available delta between Victoria and Wallaby. 
That will allow to fix the engine facade by opening a maintenance window. If the project is still lacking maintainers in a few weeks / months, this will allow a more smooth deprecation of this one. > > Thoughts? > > [1] https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski a écrit : >> >> Hi, >> >> We have opened release patch for networking-midonet [1] but our concern about >> that project is that its gate is completly broken since some time thus we >> don't really know if the project is still working and valid to be released. >> In Wallaby cycle Neutron for example finished transition to the engine facade, >> and patch to adjust that in networking-midonet is still opened [2] (and red as >> there were some unrelated issues with most of the jobs there). >> >> In the past we had discussion about networking-midonet project and it's status >> as the official Neutron stadium project. Then some new folks stepped in to >> maintain it but now it seems a bit like (again) it lacks of maintainers. >> I know that it is very late in the cycle now so my question to the TC and >> release teams is: should we release stable/wallaby with its current state, >> even if it's broken or should we maybe don't release it at all until its gate >> will be up and running? >> >> [1] https://review.opendev.org/c/openstack/releases/+/781713 >> [2] https://review.opendev.org/c/openstack/networking-midonet/+/770797 >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From skaplons at redhat.com Mon Mar 29 11:06:31 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 29 Mar 2021 13:06:31 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: References: <59893229.6jtkhXVcMD@p1> Message-ID: <20210329110631.5gnmro77saxnf64p@p1.localdomain> Hi, On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > Hello, > > The main question is, does the previous Victoria version [1] will be > compatible with the latest neutron changes and with the latest engine > facade introduced during Wallaby? It won't be compatible. Networking-midonet from Victoria will not work properly with Neutron Wallaby. 
> > Releasing an unfixed engine facade code is useless, so we shouldn't release > a new version of networking-midonet, because the project code won't be > compatible with the rest of our projects (AFAIK neutron), unless, the > previous version will not compatible either, and, unless, not releasing a > Wallaby version leave the project branch uncut and so leave the > corresponding series unmaintainable, and so unfixable a posteriori. > > If we do not release a new version then we will use a previous version of > networking-midonet. This version will be the last Victoria version [1]. > > I suppose that this version (the victoria version) isn't compatible with > the new facade engine either, isn't it? Correct. It's not compatible. > > So release or not release a new version won't solve the facade engine > problem, isn't? Yes. > > You said that neutron evolved and networking-midonet didn't, hence even if > we release networking-midonet in the current state it will fail too, isn't > it? Also yes :) > > However, releasing a new version and branching on it can give you the > needed maintenance window to allow you to fix the issue later, when your > gates will be fixed and then patches backported. git tags are cheap. > > We should notice that since Victoria some patches have been merged in > Wallaby so even if they aren't ground breaking changes they are changes > that it is worth to release. > > From a release point of view I think it's worth it to release a new version > and to cut Wallaby. We are close to the it's deadline. That will land the > available delta between Victoria and Wallaby. That will allow to fix the > engine facade by opening a maintenance window. If the project is still > lacking maintainers in a few weeks / months, this will allow a more smooth > deprecation of this one. > > Thoughts? Based on Your feedback I agree that we should release now what we have. Even if it's broken we can then fix it and backport fixes to stable/wallaby branch. @Akihiro: are You ok with that too? > > [1] > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski a > écrit : > > > Hi, > > > > We have opened release patch for networking-midonet [1] but our concern > > about > > that project is that its gate is completly broken since some time thus we > > don't really know if the project is still working and valid to be released. > > In Wallaby cycle Neutron for example finished transition to the engine > > facade, > > and patch to adjust that in networking-midonet is still opened [2] (and > > red as > > there were some unrelated issues with most of the jobs there). > > > > In the past we had discussion about networking-midonet project and it's > > status > > as the official Neutron stadium project. Then some new folks stepped in to > > maintain it but now it seems a bit like (again) it lacks of maintainers. > > I know that it is very late in the cycle now so my question to the TC and > > release teams is: should we release stable/wallaby with its current state, > > even if it's broken or should we maybe don't release it at all until its > > gate > > will be up and running? 
> > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > [2] https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From amotoki at gmail.com Mon Mar 29 11:11:09 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 29 Mar 2021 20:11:09 +0900 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: References: <59893229.6jtkhXVcMD@p1> Message-ID: Hi, On Mon, Mar 29, 2021 at 6:53 PM Herve Beraud wrote: > > Hello, > > The main question is, does the previous Victoria version [1] will be compatible with the latest neutron changes and with the latest engine facade introduced during Wallaby? > > Releasing an unfixed engine facade code is useless, so we shouldn't release a new version of networking-midonet, because the project code won't be compatible with the rest of our projects (AFAIK neutron), unless, the previous version will not compatible either, and, unless, not releasing a Wallaby version leave the project branch uncut and so leave the corresponding series unmaintainable, and so unfixable a posteriori. > > If we do not release a new version then we will use a previous version of networking-midonet. This version will be the last Victoria version [1]. > > I suppose that this version (the victoria version) isn't compatible with the new facade engine either, isn't it? > > So release or not release a new version won't solve the facade engine problem, isn't? > > You said that neutron evolved and networking-midonet didn't, hence even if we release networking-midonet in the current state it will fail too, isn't it? Only folks involved in networking-midonet can answer these questions correctly, and other neutron folks do not run networking-midonet (and midonet). On the other hand, we know victoria release of some neutron related projects does not work with wallaby neutron and neutron-lib (at least for neutron-dynamic-routing and networking-bagpipe), so it is not surprising victoria networking-midonet does not work with wallaby neutron. > However, releasing a new version and branching on it can give you the needed maintenance window to allow you to fix the issue later, when your gates will be fixed and then patches backported. git tags are cheap. 
It is true to some extent, but I am not sure the merit here is more than releasing the broken code (which we are not sure is not expected to fix soon) as the initial release of Wallaby. > We should notice that since Victoria some patches have been merged in Wallaby so even if they aren't ground breaking changes they are changes that it is worth to release. No meaningful change happened in the main code after Victoria release. We have only two commits since Victoria. The one is related to the release note build which added stable/victoria. The other is a fix in the devstack plugin. Thus, I do not see a value at least from this point of view. > From a release point of view I think it's worth it to release a new version and to cut Wallaby. We are close to the it's deadline. That will land the available delta between Victoria and Wallaby. That will allow to fix the engine facade by opening a maintenance window. If the project is still lacking maintainers in a few weeks / months, this will allow a more smooth deprecation of this one. > > Thoughts? > > [1] https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski a écrit : >> >> Hi, >> >> We have opened release patch for networking-midonet [1] but our concern about >> that project is that its gate is completly broken since some time thus we >> don't really know if the project is still working and valid to be released. >> In Wallaby cycle Neutron for example finished transition to the engine facade, >> and patch to adjust that in networking-midonet is still opened [2] (and red as >> there were some unrelated issues with most of the jobs there). >> >> In the past we had discussion about networking-midonet project and it's status >> as the official Neutron stadium project. Then some new folks stepped in to >> maintain it but now it seems a bit like (again) it lacks of maintainers. >> I know that it is very late in the cycle now so my question to the TC and >> release teams is: should we release stable/wallaby with its current state, >> even if it's broken or should we maybe don't release it at all until its gate >> will be up and running? 
>> >> [1] https://review.opendev.org/c/openstack/releases/+/781713 >> [2] https://review.opendev.org/c/openstack/networking-midonet/+/770797 >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From amotoki at gmail.com Mon Mar 29 11:14:06 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 29 Mar 2021 20:14:06 +0900 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <20210329110631.5gnmro77saxnf64p@p1.localdomain> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> Message-ID: On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski wrote: > > Hi, > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > Hello, > > > > The main question is, does the previous Victoria version [1] will be > > compatible with the latest neutron changes and with the latest engine > > facade introduced during Wallaby? > > It won't be compatible. Networking-midonet from Victoria will not work properly > with Neutron Wallaby. > > > > > Releasing an unfixed engine facade code is useless, so we shouldn't release > > a new version of networking-midonet, because the project code won't be > > compatible with the rest of our projects (AFAIK neutron), unless, the > > previous version will not compatible either, and, unless, not releasing a > > Wallaby version leave the project branch uncut and so leave the > > corresponding series unmaintainable, and so unfixable a posteriori. > > > > If we do not release a new version then we will use a previous version of > > networking-midonet. This version will be the last Victoria version [1]. > > > > I suppose that this version (the victoria version) isn't compatible with > > the new facade engine either, isn't it? > > Correct. It's not compatible. > > > > > So release or not release a new version won't solve the facade engine > > problem, isn't? > > Yes. > > > > > You said that neutron evolved and networking-midonet didn't, hence even if > > we release networking-midonet in the current state it will fail too, isn't > > it? > > Also yes :) > > > > > However, releasing a new version and branching on it can give you the > > needed maintenance window to allow you to fix the issue later, when your > > gates will be fixed and then patches backported. git tags are cheap. > > > > We should notice that since Victoria some patches have been merged in > > Wallaby so even if they aren't ground breaking changes they are changes > > that it is worth to release. 
> > > > From a release point of view I think it's worth it to release a new version > > and to cut Wallaby. We are close to the it's deadline. That will land the > > available delta between Victoria and Wallaby. That will allow to fix the > > engine facade by opening a maintenance window. If the project is still > > lacking maintainers in a few weeks / months, this will allow a more smooth > > deprecation of this one. > > > > Thoughts? > > Based on Your feedback I agree that we should release now what we have. Even if > it's broken we can then fix it and backport fixes to stable/wallaby branch. > > @Akihiro: are You ok with that too? I was writing another reply and did not notice this mail. While I still have a doubt on releasing the broken code (which we are not sure can be fixed soon or not), I am okay with either decision. > > > > > [1] > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski a > > écrit : > > > > > Hi, > > > > > > We have opened release patch for networking-midonet [1] but our concern > > > about > > > that project is that its gate is completly broken since some time thus we > > > don't really know if the project is still working and valid to be released. > > > In Wallaby cycle Neutron for example finished transition to the engine > > > facade, > > > and patch to adjust that in networking-midonet is still opened [2] (and > > > red as > > > there were some unrelated issues with most of the jobs there). > > > > > > In the past we had discussion about networking-midonet project and it's > > > status > > > as the official Neutron stadium project. Then some new folks stepped in to > > > maintain it but now it seems a bit like (again) it lacks of maintainers. > > > I know that it is very late in the cycle now so my question to the TC and > > > release teams is: should we release stable/wallaby with its current state, > > > even if it's broken or should we maybe don't release it at all until its > > > gate > > > will be up and running? 
> > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > [2] https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From neil at tigera.io Mon Mar 29 11:31:38 2021 From: neil at tigera.io (Neil Jerram) Date: Mon, 29 Mar 2021 11:31:38 +0000 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> Message-ID: Out of interest - for networking-calico - what changes are needed to adapt to the new engine facade? On Mon, Mar 29, 2021 at 12:14 PM Akihiro Motoki wrote: > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski > wrote: > > > > Hi, > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > Hello, > > > > > > The main question is, does the previous Victoria version [1] will be > > > compatible with the latest neutron changes and with the latest engine > > > facade introduced during Wallaby? > > > > It won't be compatible. Networking-midonet from Victoria will not work > properly > > with Neutron Wallaby. > > > > > > > > Releasing an unfixed engine facade code is useless, so we shouldn't > release > > > a new version of networking-midonet, because the project code won't be > > > compatible with the rest of our projects (AFAIK neutron), unless, the > > > previous version will not compatible either, and, unless, not > releasing a > > > Wallaby version leave the project branch uncut and so leave the > > > corresponding series unmaintainable, and so unfixable a posteriori. > > > > > > If we do not release a new version then we will use a previous version > of > > > networking-midonet. This version will be the last Victoria version [1]. > > > > > > I suppose that this version (the victoria version) isn't compatible > with > > > the new facade engine either, isn't it? > > > > Correct. It's not compatible. > > > > > > > > So release or not release a new version won't solve the facade engine > > > problem, isn't? > > > > Yes. > > > > > > > > You said that neutron evolved and networking-midonet didn't, hence > even if > > > we release networking-midonet in the current state it will fail too, > isn't > > > it? 
> > > > Also yes :) > > > > > > > > However, releasing a new version and branching on it can give you the > > > needed maintenance window to allow you to fix the issue later, when > your > > > gates will be fixed and then patches backported. git tags are cheap. > > > > > > We should notice that since Victoria some patches have been merged in > > > Wallaby so even if they aren't ground breaking changes they are changes > > > that it is worth to release. > > > > > > From a release point of view I think it's worth it to release a new > version > > > and to cut Wallaby. We are close to the it's deadline. That will land > the > > > available delta between Victoria and Wallaby. That will allow to fix > the > > > engine facade by opening a maintenance window. If the project is still > > > lacking maintainers in a few weeks / months, this will allow a more > smooth > > > deprecation of this one. > > > > > > Thoughts? > > > > Based on Your feedback I agree that we should release now what we have. > Even if > > it's broken we can then fix it and backport fixes to stable/wallaby > branch. > > > > @Akihiro: are You ok with that too? > > I was writing another reply and did not notice this mail. > While I still have a doubt on releasing the broken code (which we are > not sure can be fixed soon or not), > I am okay with either decision. > > > > > > > > > [1] > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski a > > > écrit : > > > > > > > Hi, > > > > > > > > We have opened release patch for networking-midonet [1] but our > concern > > > > about > > > > that project is that its gate is completly broken since some time > thus we > > > > don't really know if the project is still working and valid to be > released. > > > > In Wallaby cycle Neutron for example finished transition to the > engine > > > > facade, > > > > and patch to adjust that in networking-midonet is still opened [2] > (and > > > > red as > > > > there were some unrelated issues with most of the jobs there). > > > > > > > > In the past we had discussion about networking-midonet project and > it's > > > > status > > > > as the official Neutron stadium project. Then some new folks stepped > in to > > > > maintain it but now it seems a bit like (again) it lacks of > maintainers. > > > > I know that it is very late in the cycle now so my question to the > TC and > > > > release teams is: should we release stable/wallaby with its current > state, > > > > even if it's broken or should we maybe don't release it at all until > its > > > > gate > > > > will be up and running? 
> > > > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > > [2] > https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > > > > > > > -- > > > Hervé Beraud > > > Senior Software Engineer at Red Hat > > > irc: hberaud > > > https://github.com/4383/ > > > https://twitter.com/4383hberaud > > > -----BEGIN PGP SIGNATURE----- > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > v6rDpkeNksZ9fFSyoY2o > > > =ECSj > > > -----END PGP SIGNATURE----- > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Mon Mar 29 11:43:44 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Mon, 29 Mar 2021 11:43:44 +0000 Subject: [puppet] Artificially inflated dependencies in metadata.json of all modules In-Reply-To: References: <24c1dcbe-67ec-c80b-dc0c-5f6fea8a6a1b@debian.org> <7d07721a-722d-5c53-4aee-396a35a68a88@debian.org> <329974b0-ef8a-c7d8-687e-592a4fe97596@debian.org>, Message-ID: <58cb320163494184b0fa47b51ab39ad3@binero.com> My two cents. In a ideal world we should just skip milestones and release when we either 1) need to or 2) have a new major release for a new OpenStack coordinated release. That said, there is the painpoint of having to update metadata.json and it's been on my todo list to template all metadata.json and have the OpenStack release tooling handle it instead, ever since I fixed the automatic upload to Puppet Forge [1]. Another thing I have wanted to do is pretty much have a CI job running integration testing by simply installing modules from Puppet Forge (with r10k for example) so that we can actually test that our constraints in metadata.json actually results in a working deployment as well. All this is due to lack of contributors (and time on my part). I would support changes to improve releasing, if somebody wants to take that on. However I would really like it to be finished and not halfway through which would make it even worse than today. Best regards [1] https://review.opendev.org/c/openstack/project-config/+/627573 ________________________________ From: Alex Schultz Sent: Thursday, March 25, 2021 9:56:56 PM To: Thomas Goirand Cc: OpenStack Discuss Subject: Re: [puppet] Artificially inflated dependencies in metadata.json of all modules On Thu, Mar 25, 2021 at 2:48 PM Thomas Goirand > wrote: On 3/25/21 9:22 PM, Thomas Goirand wrote: > Hi Alex, > > Thanks for your time replying to my original post. 
> > On 3/25/21 5:39 PM, Alex Schultz wrote: >> It feels like the ask is for more manual version management on the >> Puppet OpenStack team (because we have to manually manage metadata.json >> before releasing), rather than just automating version updates for your >> packaging. > > Not at all. I'm asking for dependency to vaguely reflect reality, like > we've been doing this for years in the Python world of OpenStack. > >> This existing release model has been in place for at least 5 >> years now if not longer > > Well... hum... how can I put it nicely... :) Well, it's been wrong for 5 > years then! :) Let me give an example. Today, puppet-ironic got released in version 18.3.0. The only thing that changed in it since 18.2.0 is a bunch of metadata bumping to 18.2.0... Why haven't we just kept version 18.2.0? It's the exact same content... Release due to milestone 3. Like I said, we could switch to independent or just stop doing milestone releases, but then that causes other problems and overhead. Given the lower amount of changes in the more recent releases, it might make sense to switch but I think that's a conversation that isn't necessarily puppet specific but could be expanded to openstack releases in general. From a RDO standpoint, we build the packages in dlrn which include dates/hashes and so the versions only matter for upgrades (we don't enforce the metadata.json requirements). Dropping milestones wouldn't affect us too badly, but we'd still want an initial metadata.json rev at the start of a cycle. We could hold off on releasing until much later and you wouldn't get the churn. You'd also not be able to match the puppet modules to any milestone release during the current development cycle. Cheers, Thomas Goirand (zigo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Mar 29 12:50:48 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 29 Mar 2021 08:50:48 -0400 Subject: [cinder][nova][requirements] RFE requested for os-brick Message-ID: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> Hello Requirements Team, The Cinder team recently became aware of a potential data-loss bug [0] that has been fixed in os-brick master [1] and backported to os-brick stable/wallaby [2]. We've proposed a release of os-brick 4.4.0 from stable/wallaby [3] and are petitioning for an RFE to include 4.4.0 in the wallaby release. We have three jobs running tempest with os-brick source in master that have passed with [1]: os-brick-src-devstack-plugin-ceph [4], os-brick-src-tempest-lvm-lio-barbican [5],and os-brick-src-tempest-nfs [6]. 
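Side note for anyone reviewing the RFE: the commit delta since the 4.3.0 tag can be double-checked locally with plain git. A rough sketch only, assuming the usual opendev repository and the default remote names left by a fresh clone:

git clone https://opendev.org/openstack/os-brick && cd os-brick
# commits on master that are not reachable from the 4.3.0 tag
git log --oneline 4.3.0..origin/master
# commits on stable/wallaby that are not reachable from the 4.3.0 tag
git log --oneline 4.3.0..origin/stable/wallaby

The two logs correspond to the master and stable/wallaby deltas listed in this request.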
The difference between os-brick master (at the time the tests were run) and stable/wallaby since the 4.3.0 tag is as follows: master: d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg change (Gorka Eguileor) 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to modifying offset" (Zuul) 28545c7 4 months ago RBD: catch read exceptions prior to modifying offset (Jon Bernard) 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" (Zuul) 7cfdb76 6 weeks ago Dropping explicit unicode literal (tushargite96) 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack Release Bot) ab57392 3 weeks ago Update master for stable/wallaby (OpenStack Release Bot) 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver connection information compatibility fix" (Zuul) stable/wallaby: f86944b 3 days ago Add release note prelude for os-brick 4.4.0 (Brian Rosmaita) c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg change (Gorka Eguileor) 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for stable/wallaby (OpenStack Release Bot) f3f93dc 3 weeks ago Update .gitreview for stable/wallaby (OpenStack Release Bot) 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver connection information compatibility fix" (Zuul) This gives us very high confidence that the results of the tests run against master also apply to stable/wallaby at f86944b. Thank you for considering this request. (I've included Nova here because the bug occurs when the configuration option that enables multipath connections on a compute is changed while volumes are attached, so if this RFE is approved, nova might want to raise the minimum version of os-brick in wallaby to 4.4.0.) [0] https://launchpad.net/bugs/1921381 [1] https://review.opendev.org/c/openstack/os-brick/+/782992 [2] https://review.opendev.org/c/openstack/os-brick/+/783207 [3] https://review.opendev.org/c/openstack/releases/+/783641 [4] https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb [5] https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 [6] https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa From oliver.wenz at dhbw-mannheim.de Mon Mar 29 13:01:56 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Mon, 29 Mar 2021 15:01:56 +0200 (CEST) Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <759905868.77860.1617022916703@ox.dhbw-mannheim.de> > Oh, I have a guess what this might actually be. During snapshot upload process > user token that is used for the upload might get expired. If that's the case, > following changes in user_variables might help to resolve the issue: > > glance_glance_api_conf_overrides: > keystone_authtoken: > service_token_roles_required: True > service_token_roles: service I found out that after inserting the above the problems now occurs every time I try to take a snapshot of an instance. Below are my logs: For swift: Mar 29 12:50:37 infra1-swift-proxy-container-27169fa7 proxy-server[2049]: Could not autocreate account '/AUTH_024cc551782f41e395d3c9f13582ef7d' (txn: tx340e7873c3 ce42ab9808e-006061cd0f) Mar 29 12:50:37 infra1-swift-proxy-container-27169fa7 proxy-server[2049]: 192.168.110.106 192.168.110.211 29/Mar/2021/12/50/37 PUT /v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images HTTP/1.0 50 3 - python-swiftclient-3.10.1 gAAAAABgYczvODdX... 
- 118 - tx340e7873c3ce42ab9808e-006061cd0f - 14.3464 - - 1617022223.371025085 1617022237.717381477 - Mar 29 12:50:40 infra1-swift-proxy-container-27169fa7 proxy-server[2053]: ERROR with Account server 10.0.3.212:6002/os-objects re: Trying to HEAD /v1/AUTH_024cc551782f41e395d 3c9f13582ef7d: Host unreachable (txn: tx44e98ea667f04751bf22e-006061cd1f) Mar 29 12:50:40 infra1-swift-proxy-container-27169fa7 proxy-server[2053]: Account HEAD returning 503 for [] (txn: tx44e98ea667f04751bf22e-006061cd1f) Mar 29 12:50:40 infra1-swift-proxy-container-27169fa7 proxy-server[2053]: - - 29/Mar/2021/12/50/40 HEAD /v1/AUTH_024cc551782f41e395d3c9f13582ef7d%3Fformat%3Djson HTTP/1.0 503 - Swift - - - - tx44e98 ea667f04751bf22e-006061cd1f - 1.0482 RL - 1617022239.738376379 1617022240.786619186 - and for glance: Mar 29 12:50:56 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:50:56.155 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP STATUS: 503 Service Unavailable Mar 29 12:50:56 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:50:56.156 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP HEADERS: {'content-type': 'text/html; charset=UTF-8', 'content-length': '118', 'x-trans-id': 'tx44e98ea667f04751bf22e-006061cd1f', 'x-openstack-request-id': 'tx44e98ea667f04751bf22e-006061cd1f', 'date': 'Mon, 29 Mar 2021 12:50:56 GMT', 'connection': 'close'} Mar 29 12:50:56 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:50:56.157 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP BODY: b'

Service Unavailable
The server is currently unavailable. Please try again at a later time.
' Mar 29 12:51:15 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:51:15.544 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] REQ: curl -i http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images -X PUT -H "X-Auth-Token: gAAAAABgYczvODdX..." -H "Content-Length: 0" Mar 29 12:51:15 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:51:15.545 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP STATUS: 503 Service Unavailable Mar 29 12:51:15 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:51:15.545 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP HEADERS: {'content-type': 'text/html; charset=UTF-8', 'content-length': '118', 'x-trans-id': 'tx01d5fb0cb5cb4f6dbff12-006061cd34', 'x-openstack-request-id': 'tx01d5fb0cb5cb4f6dbff12-006061cd34', 'date': 'Mon, 29 Mar 2021 12:51:15 GMT', 'connection': 'close'} Mar 29 12:51:15 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:51:15.545 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP BODY: b'

Service Unavailable
The server is currently unavailable. Please try again at a later time.
' Mar 29 12:51:42 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:51:42.007 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] REQ: curl -i http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images -X PUT -H "X-Auth-Token: gAAAAABgYczvODdX..." -H "Content-Length: 0" Mar 29 12:51:42 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:51:42.008 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP STATUS: 503 Service Unavailable Mar 29 12:51:42 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:51:42.009 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP HEADERS: {'content-type': 'text/html; charset=UTF-8', 'content-length': '118', 'x-trans-id': 'tx9cab391639a344f4aa548-006061cd4b', 'x-openstack-request-id': 'tx9cab391639a344f4aa548-006061cd4b', 'date': 'Mon, 29 Mar 2021 12:51:42 GMT', 'connection': 'close'} Mar 29 12:51:42 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:51:42.009 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP BODY: b'

Service Unavailable
The server is currently unavailable. Please try again at a later time.
' Mar 29 12:52:16 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:52:16.472 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] REQ: curl -i http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images -X PUT -H "X-Auth-Token: gAAAAABgYczvODdX..." -H "Content-Length: 0" Mar 29 12:52:16 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:52:16.473 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP STATUS: 503 Service Unavailable Mar 29 12:52:16 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:52:16.473 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP HEADERS: {'content-type': 'text/html; charset=UTF-8', 'content-length': '118', 'x-trans-id': 'tx926ae448b316485d93a68-006061cd6e', 'x-openstack-request-id': 'tx926ae448b316485d93a68-006061cd6e', 'date': 'Mon, 29 Mar 2021 12:52:16 GMT', 'connection': 'close'} Mar 29 12:52:16 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:52:16.474 2403 INFO swiftclient [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] RESP BODY: b'

Service Unavailable
The server is currently unavailable. Please try again at a later time.
' Mar 29 12:52:16 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:52:16.476 2403 ERROR glance.api.v2.image_data [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Failed to upload image data due to internal error: glance_store.exceptions.BackendException: Failed to add container to Swift. Mar 29 12:52:16 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Caught error: Failed to add container to Swift. Got error from Swift: Container PUT failed: http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images 503 Service Unavailable [first 60 chars of response] b'

Service Unavailable
The server is currently'.: glance_store.exceptions.BackendException: Failed to add container to Swift. Got error from Swift: Container PUT failed: http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images 503 Service Unavailable [first 60 chars of response] b'

Service Unavailable
The server is currently'. 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi Traceback (most recent call last): 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", line 1154, in _create_container_if_missing 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi connection.head_container(container) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", line 1870, in head_container 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi return self._retry(None, head_container, container, headers=headers) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", line 1801, in _retry 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi rv = func(self.url, self.token, *args, 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", line 1097, in head_container 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi raise ClientException.from_response( 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi swiftclient.exceptions.ClientException: Container HEAD failed: http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images 404 Not Found 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi During handling of the above exception, another exception occurred: 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi Traceback (most recent call last): 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", line 1162, in _create_container_if_missing 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi connection.put_container(container) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", line 1890, in put_container 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi return self._retry(None, put_container, container, headers=headers, 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", line 1801, in _retry 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi rv = func(self.url, self.token, *args, 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/swiftclient/client.py", line 1144, in put_container 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi raise ClientException.from_response(resp, 'Container PUT failed', body) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi swiftclient.exceptions.ClientException: Container PUT failed: http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images 503 Service Unavailable [first 60 chars of response] b'

Service Unavailable
The server is currently' 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi During handling of the above exception, another exception occurred: 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi Traceback (most recent call last): 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/common/wsgi.py", line 1347, in __call__ 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi action_result = self.dispatch(self.controller, action, 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/common/wsgi.py", line 1391, in dispatch 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi return method(*args, **kwargs) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/common/utils.py", line 416, in wrapped 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi return func(self, req, *args, **kwargs) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/api/v2/image_data.py", line 298, in upload 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi self._restore(image_repo, image) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi self.force_reraise() 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi six.reraise(self.type_, self.value, self.tb) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi raise value 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/api/v2/image_data.py", line 163, in upload 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi image.set_data(data, size, backend=backend) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/domain/proxy.py", line 208, in set_data 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi self.base.set_data(data, size, backend=backend, set_active=set_active) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/notifier.py", line 501, in set_data 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi _send_notification(notify_error, 'image.upload', msg) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi self.force_reraise() 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi six.reraise(self.type_, self.value, self.tb) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File 
"/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/six.py", line 703, in reraise 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi raise value 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/notifier.py", line 447, in set_data 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi self.repo.set_data(data, size, backend=backend, 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/api/policy.py", line 198, in set_data 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi return self.image.set_data(*args, **kwargs) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/quota/__init__.py", line 318, in set_data 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi self.image.set_data(data, size=size, backend=backend, 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/location.py", line 567, in set_data 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi self._upload_to_store(data, verifier, backend, size) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance/location.py", line 458, in _upload_to_store 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi multihash, loc_meta) = self.store_api.add_with_multihash( 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/multi_backend.py", line 398, in add_with_multihash 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi return store_add_to_backend_with_multihash( 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/multi_backend.py", line 480, in store_add_to_backend_with_multihash 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi (location, size, checksum, multihash, metadata) = store.add( 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/driver.py", line 279, in add_adapter 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi metadata_dict) = store_add_fun(*args, **kwargs) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/capabilities.py", line 176, in op_checker 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi return store_op_fun(store, *args, **kwargs) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", line 942, in add 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi self._create_container_if_missing(location.container, 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi File "/openstack/venvs/glance-22.1.0/lib/python3.8/site-packages/glance_store/_drivers/swift/store.py", line 1167, in _create_container_if_missing 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi raise glance_store.BackendException(msg) 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi glance_store.exceptions.BackendException: Failed to add container to Swift. 
2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi Got error from Swift: Container PUT failed: http://192.168.110.211:8080/v1/AUTH_024cc551782f41e395d3c9f13582ef7d/glance_images 503 Service Unavailable [first 60 chars of response] b'

Service Unavailable
The server is currently'. 2021-03-29 12:52:16.552 2403 ERROR glance.common.wsgi Mar 29 12:52:16 infra1-glance-container-99614ac2 uwsgi[2403]: Mon Mar 29 12:52:16 2021 - uwsgi_response_writev_headers_and_body_do(): Connection reset by peer [core/writer.c line 306] during PUT /v2/images/15af8603-2428-4a9d-9202-eb7250cfea46/file (192.168.110.213) Mar 29 12:52:16 infra1-glance-container-99614ac2 glance-wsgi-api[2403]: 2021-03-29 12:52:16.565 2403 CRITICAL glance [req-d19ce0e3-2577-482e-ae09-9ea3c63aabe2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Unhandled error: OSError: write error 2021-03-29 12:52:16.565 2403 ERROR glance OSError: write error 2021-03-29 12:52:16.565 2403 ERROR glance Kind regards, Oliver From rosmaita.fossdev at gmail.com Mon Mar 29 13:20:47 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 29 Mar 2021 09:20:47 -0400 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> Message-ID: On 3/29/21 8:50 AM, Brian Rosmaita wrote: > Hello Requirements Team, > > The Cinder team recently became aware of a potential data-loss bug [0] > that has been fixed in os-brick master [1] and backported to os-brick > stable/wallaby [2].  We've proposed a release of os-brick 4.4.0 from > stable/wallaby [3] and are petitioning for an RFE to include 4.4.0 in > the wallaby release. I had a quick discussion with Hervé and he prefers 4.3.1 as the version number for the new release. Except for the version number, everything else in this email still applies. > > We have three jobs running tempest with os-brick source in master that > have passed with [1]: os-brick-src-devstack-plugin-ceph [4], > os-brick-src-tempest-lvm-lio-barbican [5],and os-brick-src-tempest-nfs > [6].  The difference between os-brick master (at the time the tests were > run) and stable/wallaby since the 4.3.0 tag is as follows: > > master: > d4205bd   3 days ago  iSCSI: Fix flushing after multipath cfg change > (Gorka Eguileor) > 0e63fe8  2 weeks ago  Merge "RBD: catch read exceptions prior to > modifying offset" (Zuul) > 28545c7 4 months ago  RBD: catch read exceptions prior to modifying > offset (Jon Bernard) > 99b2c60  2 weeks ago  Merge "Dropping explicit unicode literal" (Zuul) > 7cfdb76  6 weeks ago  Dropping explicit unicode literal (tushargite96) > 9afa1a0  3 weeks ago  Add Python3 xena unit tests (OpenStack Release Bot) > ab57392  3 weeks ago  Update master for stable/wallaby (OpenStack > Release Bot) > 91a1cca  3 weeks ago  (tag: 4.3.0) Merge "NVMeOF connector driver > connection information compatibility fix" (Zuul) > > stable/wallaby: > f86944b   3 days ago  Add release note prelude for os-brick 4.4.0 (Brian > Rosmaita) > c70d70b   3 days ago  iSCSI: Fix flushing after multipath cfg change > (Gorka Eguileor) > 6649b8d  3 weeks ago  Update TOX_CONSTRAINTS_FILE for stable/wallaby > (OpenStack Release Bot) > f3f93dc  3 weeks ago  Update .gitreview for stable/wallaby (OpenStack > Release Bot) > 91a1cca  3 weeks ago  (tag: 4.3.0) Merge "NVMeOF connector driver > connection information compatibility fix" (Zuul) > > This gives us very high confidence that the results of the tests run > against master also apply to stable/wallaby at f86944b. > > Thank you for considering this request. 
> > (I've included Nova here because the bug occurs when the configuration > option that enables multipath connections on a compute is changed while > volumes are attached, so if this RFE is approved, nova might want to > raise the minimum version of os-brick in wallaby to 4.4.0.) > > > [0] https://launchpad.net/bugs/1921381 > [1] https://review.opendev.org/c/openstack/os-brick/+/782992 > [2] https://review.opendev.org/c/openstack/os-brick/+/783207 > [3] https://review.opendev.org/c/openstack/releases/+/783641 > [4] > https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb > [5] > https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 > [6] > https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa From hberaud at redhat.com Mon Mar 29 13:47:27 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 29 Mar 2021 15:47:27 +0200 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> Message-ID: >From a release point of view this RFE LGTM. Le lun. 29 mars 2021 à 15:23, Brian Rosmaita a écrit : > On 3/29/21 8:50 AM, Brian Rosmaita wrote: > > Hello Requirements Team, > > > > The Cinder team recently became aware of a potential data-loss bug [0] > > that has been fixed in os-brick master [1] and backported to os-brick > > stable/wallaby [2]. We've proposed a release of os-brick 4.4.0 from > > stable/wallaby [3] and are petitioning for an RFE to include 4.4.0 in > > the wallaby release. > > I had a quick discussion with Hervé and he prefers 4.3.1 as the version > number for the new release. Except for the version number, everything > else in this email still applies. > > > > > We have three jobs running tempest with os-brick source in master that > > have passed with [1]: os-brick-src-devstack-plugin-ceph [4], > > os-brick-src-tempest-lvm-lio-barbican [5],and os-brick-src-tempest-nfs > > [6]. The difference between os-brick master (at the time the tests were > > run) and stable/wallaby since the 4.3.0 tag is as follows: > > > > master: > > d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg change > > (Gorka Eguileor) > > 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to > > modifying offset" (Zuul) > > 28545c7 4 months ago RBD: catch read exceptions prior to modifying > > offset (Jon Bernard) > > 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" (Zuul) > > 7cfdb76 6 weeks ago Dropping explicit unicode literal (tushargite96) > > 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack Release Bot) > > ab57392 3 weeks ago Update master for stable/wallaby (OpenStack > > Release Bot) > > 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > > connection information compatibility fix" (Zuul) > > > > stable/wallaby: > > f86944b 3 days ago Add release note prelude for os-brick 4.4.0 (Brian > > Rosmaita) > > c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg change > > (Gorka Eguileor) > > 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for stable/wallaby > > (OpenStack Release Bot) > > f3f93dc 3 weeks ago Update .gitreview for stable/wallaby (OpenStack > > Release Bot) > > 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > > connection information compatibility fix" (Zuul) > > > > This gives us very high confidence that the results of the tests run > > against master also apply to stable/wallaby at f86944b. > > > > Thank you for considering this request. 
> > > > (I've included Nova here because the bug occurs when the configuration > > option that enables multipath connections on a compute is changed while > > volumes are attached, so if this RFE is approved, nova might want to > > raise the minimum version of os-brick in wallaby to 4.4.0.) > > > > > > [0] https://launchpad.net/bugs/1921381 > > [1] https://review.opendev.org/c/openstack/os-brick/+/782992 > > [2] https://review.opendev.org/c/openstack/os-brick/+/783207 > > [3] https://review.opendev.org/c/openstack/releases/+/783641 > > [4] > > > https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb > > [5] > > > https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 > > [6] > > > https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Mar 29 13:53:58 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 29 Mar 2021 08:53:58 -0500 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> Message-ID: <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> ---- On Mon, 29 Mar 2021 06:14:06 -0500 Akihiro Motoki wrote ---- > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski wrote: > > > > Hi, > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > Hello, > > > > > > The main question is, does the previous Victoria version [1] will be > > > compatible with the latest neutron changes and with the latest engine > > > facade introduced during Wallaby? > > > > It won't be compatible. Networking-midonet from Victoria will not work properly > > with Neutron Wallaby. > > > > > > > > Releasing an unfixed engine facade code is useless, so we shouldn't release > > > a new version of networking-midonet, because the project code won't be > > > compatible with the rest of our projects (AFAIK neutron), unless, the > > > previous version will not compatible either, and, unless, not releasing a > > > Wallaby version leave the project branch uncut and so leave the > > > corresponding series unmaintainable, and so unfixable a posteriori. > > > > > > If we do not release a new version then we will use a previous version of > > > networking-midonet. This version will be the last Victoria version [1]. > > > > > > I suppose that this version (the victoria version) isn't compatible with > > > the new facade engine either, isn't it? > > > > Correct. It's not compatible. 
> > > > > > > > So release or not release a new version won't solve the facade engine > > > problem, isn't? > > > > Yes. > > > > > > > > You said that neutron evolved and networking-midonet didn't, hence even if > > > we release networking-midonet in the current state it will fail too, isn't > > > it? > > > > Also yes :) > > > > > > > > However, releasing a new version and branching on it can give you the > > > needed maintenance window to allow you to fix the issue later, when your > > > gates will be fixed and then patches backported. git tags are cheap. > > > > > > We should notice that since Victoria some patches have been merged in > > > Wallaby so even if they aren't ground breaking changes they are changes > > > that it is worth to release. > > > > > > From a release point of view I think it's worth it to release a new version > > > and to cut Wallaby. We are close to the it's deadline. That will land the > > > available delta between Victoria and Wallaby. That will allow to fix the > > > engine facade by opening a maintenance window. If the project is still > > > lacking maintainers in a few weeks / months, this will allow a more smooth > > > deprecation of this one. > > > > > > Thoughts? > > > > Based on Your feedback I agree that we should release now what we have. Even if > > it's broken we can then fix it and backport fixes to stable/wallaby branch. > > > > @Akihiro: are You ok with that too? > > I was writing another reply and did not notice this mail. > While I still have a doubt on releasing the broken code (which we are > not sure can be fixed soon or not), > I am okay with either decision. Yeah, releasing broken code and especially where we do not if there will be maintainer to fix it or not seems risky for me too. One option is to deprecate it for wallaby which means follow the deprecation steps mentioned in project-team-guide[1]. If maintainers show up then it can be un-deprecated. With that, we will not have any compatible wallaby version which I think is a better choice than releasing the broken code. Releasing the broken code now with the hope of someone will come up and fix it with backport makes me a little uncomfortable and if it did not get fix then we will live with broken release forever. [1]https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository -gmann > > > > > > > > > [1] > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski a > > > écrit : > > > > > > > Hi, > > > > > > > > We have opened release patch for networking-midonet [1] but our concern > > > > about > > > > that project is that its gate is completly broken since some time thus we > > > > don't really know if the project is still working and valid to be released. > > > > In Wallaby cycle Neutron for example finished transition to the engine > > > > facade, > > > > and patch to adjust that in networking-midonet is still opened [2] (and > > > > red as > > > > there were some unrelated issues with most of the jobs there). > > > > > > > > In the past we had discussion about networking-midonet project and it's > > > > status > > > > as the official Neutron stadium project. Then some new folks stepped in to > > > > maintain it but now it seems a bit like (again) it lacks of maintainers. 
> > > > I know that it is very late in the cycle now so my question to the TC and > > > > release teams is: should we release stable/wallaby with its current state, > > > > even if it's broken or should we maybe don't release it at all until its > > > > gate > > > > will be up and running? > > > > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > > [2] https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > > > > > > > -- > > > Hervé Beraud > > > Senior Software Engineer at Red Hat > > > irc: hberaud > > > https://github.com/4383/ > > > https://twitter.com/4383hberaud > > > -----BEGIN PGP SIGNATURE----- > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > v6rDpkeNksZ9fFSyoY2o > > > =ECSj > > > -----END PGP SIGNATURE----- > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > From mthode at mthode.org Mon Mar 29 13:58:20 2021 From: mthode at mthode.org (Matthew Thode) Date: Mon, 29 Mar 2021 08:58:20 -0500 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> Message-ID: <20210329135820.mlo7hdzp7nm4cege@mthode.org> On 21-03-29 15:47:27, Herve Beraud wrote: > From a release point of view this RFE LGTM. > > Le lun. 29 mars 2021 à 15:23, Brian Rosmaita a > écrit : > > > On 3/29/21 8:50 AM, Brian Rosmaita wrote: > > > Hello Requirements Team, > > > > > > The Cinder team recently became aware of a potential data-loss bug [0] > > > that has been fixed in os-brick master [1] and backported to os-brick > > > stable/wallaby [2]. We've proposed a release of os-brick 4.4.0 from > > > stable/wallaby [3] and are petitioning for an RFE to include 4.4.0 in > > > the wallaby release. > > > > I had a quick discussion with Hervé and he prefers 4.3.1 as the version > > number for the new release. Except for the version number, everything > > else in this email still applies. > > > > > > > > We have three jobs running tempest with os-brick source in master that > > > have passed with [1]: os-brick-src-devstack-plugin-ceph [4], > > > os-brick-src-tempest-lvm-lio-barbican [5],and os-brick-src-tempest-nfs > > > [6]. 
The difference between os-brick master (at the time the tests were > > > run) and stable/wallaby since the 4.3.0 tag is as follows: > > > > > > master: > > > d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg change > > > (Gorka Eguileor) > > > 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to > > > modifying offset" (Zuul) > > > 28545c7 4 months ago RBD: catch read exceptions prior to modifying > > > offset (Jon Bernard) > > > 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" (Zuul) > > > 7cfdb76 6 weeks ago Dropping explicit unicode literal (tushargite96) > > > 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack Release Bot) > > > ab57392 3 weeks ago Update master for stable/wallaby (OpenStack > > > Release Bot) > > > 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > > > connection information compatibility fix" (Zuul) > > > > > > stable/wallaby: > > > f86944b 3 days ago Add release note prelude for os-brick 4.4.0 (Brian > > > Rosmaita) > > > c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg change > > > (Gorka Eguileor) > > > 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for stable/wallaby > > > (OpenStack Release Bot) > > > f3f93dc 3 weeks ago Update .gitreview for stable/wallaby (OpenStack > > > Release Bot) > > > 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > > > connection information compatibility fix" (Zuul) > > > > > > This gives us very high confidence that the results of the tests run > > > against master also apply to stable/wallaby at f86944b. > > > > > > Thank you for considering this request. > > > > > > (I've included Nova here because the bug occurs when the configuration > > > option that enables multipath connections on a compute is changed while > > > volumes are attached, so if this RFE is approved, nova might want to > > > raise the minimum version of os-brick in wallaby to 4.4.0.) 
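(For concreteness, the compute-side option in question is presumably nova's libvirt multipath setting; the snippet below is only an illustration, since the option is not named above.)

    # /etc/nova/nova.conf on a compute node (illustrative)
    [libvirt]
    # changing this while volumes are attached is the scenario described above
    volume_use_multipath = true
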
> > > > > > > > > [0] https://launchpad.net/bugs/1921381 > > > [1] https://review.opendev.org/c/openstack/os-brick/+/782992 > > > [2] https://review.opendev.org/c/openstack/os-brick/+/783207 > > > [3] https://review.opendev.org/c/openstack/releases/+/783641 > > > [4] > > > > > https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb > > > [5] > > > > > https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 > > > [6] > > > > > https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa > > > > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- Looks good from a reqs point of view as well -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From hberaud at redhat.com Mon Mar 29 14:04:16 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 29 Mar 2021 16:04:16 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> Message-ID: Le lun. 29 mars 2021 à 15:54, Ghanshyam Mann a écrit : > ---- On Mon, 29 Mar 2021 06:14:06 -0500 Akihiro Motoki > wrote ---- > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski > wrote: > > > > > > Hi, > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > > Hello, > > > > > > > > The main question is, does the previous Victoria version [1] will be > > > > compatible with the latest neutron changes and with the latest > engine > > > > facade introduced during Wallaby? > > > > > > It won't be compatible. Networking-midonet from Victoria will not > work properly > > > with Neutron Wallaby. > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so we shouldn't > release > > > > a new version of networking-midonet, because the project code won't > be > > > > compatible with the rest of our projects (AFAIK neutron), unless, > the > > > > previous version will not compatible either, and, unless, not > releasing a > > > > Wallaby version leave the project branch uncut and so leave the > > > > corresponding series unmaintainable, and so unfixable a posteriori. > > > > > > > > If we do not release a new version then we will use a previous > version of > > > > networking-midonet. This version will be the last Victoria version > [1]. 
> > > > > > > > I suppose that this version (the victoria version) isn't compatible > with > > > > the new facade engine either, isn't it? > > > > > > Correct. It's not compatible. > > > > > > > > > > > So release or not release a new version won't solve the facade > engine > > > > problem, isn't? > > > > > > Yes. > > > > > > > > > > > You said that neutron evolved and networking-midonet didn't, hence > even if > > > > we release networking-midonet in the current state it will fail > too, isn't > > > > it? > > > > > > Also yes :) > > > > > > > > > > > However, releasing a new version and branching on it can give you > the > > > > needed maintenance window to allow you to fix the issue later, when > your > > > > gates will be fixed and then patches backported. git tags are cheap. > > > > > > > > We should notice that since Victoria some patches have been merged > in > > > > Wallaby so even if they aren't ground breaking changes they are > changes > > > > that it is worth to release. > > > > > > > > From a release point of view I think it's worth it to release a new > version > > > > and to cut Wallaby. We are close to the it's deadline. That will > land the > > > > available delta between Victoria and Wallaby. That will allow to > fix the > > > > engine facade by opening a maintenance window. If the project is > still > > > > lacking maintainers in a few weeks / months, this will allow a more > smooth > > > > deprecation of this one. > > > > > > > > Thoughts? > > > > > > Based on Your feedback I agree that we should release now what we > have. Even if > > > it's broken we can then fix it and backport fixes to stable/wallaby > branch. > > > > > > @Akihiro: are You ok with that too? > > > > I was writing another reply and did not notice this mail. > > While I still have a doubt on releasing the broken code (which we are > > not sure can be fixed soon or not), > > I am okay with either decision. > > Yeah, releasing broken code and especially where we do not if there will be > maintainer to fix it or not seems risky for me too. > > One option is to deprecate it for wallaby which means follow the > deprecation steps > mentioned in project-team-guide[1]. If maintainers show up then it can be > un-deprecated. > With that, we will not have any compatible wallaby version which I think > is a better > choice than releasing the broken code. > If the team agrees with that I really prefer this approach. > Releasing the broken code now with the hope of someone will come up and > fix it with > backport makes me a little uncomfortable and if it did not get fix then we > will live > with broken release forever. > > > [1] > https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > > -gmann > > > > > > > > > > > > > > [1] > > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski > a > > > > écrit : > > > > > > > > > Hi, > > > > > > > > > > We have opened release patch for networking-midonet [1] but our > concern > > > > > about > > > > > that project is that its gate is completly broken since some time > thus we > > > > > don't really know if the project is still working and valid to be > released. 
> > > > > In Wallaby cycle Neutron for example finished transition to the > engine > > > > > facade, > > > > > and patch to adjust that in networking-midonet is still opened > [2] (and > > > > > red as > > > > > there were some unrelated issues with most of the jobs there). > > > > > > > > > > In the past we had discussion about networking-midonet project > and it's > > > > > status > > > > > as the official Neutron stadium project. Then some new folks > stepped in to > > > > > maintain it but now it seems a bit like (again) it lacks of > maintainers. > > > > > I know that it is very late in the cycle now so my question to > the TC and > > > > > release teams is: should we release stable/wallaby with its > current state, > > > > > even if it's broken or should we maybe don't release it at all > until its > > > > > gate > > > > > will be up and running? > > > > > > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > > > [2] > https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > -- > > > > > Slawek Kaplonski > > > > > Principal Software Engineer > > > > > Red Hat > > > > > > > > > > > > > > > > -- > > > > Hervé Beraud > > > > Senior Software Engineer at Red Hat > > > > irc: hberaud > > > > https://github.com/4383/ > > > > https://twitter.com/4383hberaud > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > v6rDpkeNksZ9fFSyoY2o > > > > =ECSj > > > > -----END PGP SIGNATURE----- > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at est.tech Mon Mar 29 14:05:36 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Mon, 29 Mar 2021 16:05:36 +0200 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> Message-ID: On Mon, Mar 29, 2021 at 08:50, Brian Rosmaita wrote: > Hello Requirements Team, > > The Cinder team recently became aware of a potential data-loss bug > [0] that has been fixed in os-brick master [1] and backported to > os-brick stable/wallaby [2]. We've proposed a release of os-brick > 4.4.0 from stable/wallaby [3] and are petitioning for an RFE to > include 4.4.0 in the wallaby release. > > We have three jobs running tempest with os-brick source in master > that have passed with [1]: os-brick-src-devstack-plugin-ceph [4], > os-brick-src-tempest-lvm-lio-barbican [5],and > os-brick-src-tempest-nfs [6]. The difference between os-brick master > (at the time the tests were run) and stable/wallaby since the 4.3.0 > tag is as follows: > > master: > d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg change > (Gorka Eguileor) > 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to > modifying offset" (Zuul) > 28545c7 4 months ago RBD: catch read exceptions prior to modifying > offset (Jon Bernard) > 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" (Zuul) > 7cfdb76 6 weeks ago Dropping explicit unicode literal (tushargite96) > 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack Release > Bot) > ab57392 3 weeks ago Update master for stable/wallaby (OpenStack > Release Bot) > 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > connection information compatibility fix" (Zuul) > > stable/wallaby: > f86944b 3 days ago Add release note prelude for os-brick 4.4.0 > (Brian Rosmaita) > c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg change > (Gorka Eguileor) > 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for stable/wallaby > (OpenStack Release Bot) > f3f93dc 3 weeks ago Update .gitreview for stable/wallaby (OpenStack > Release Bot) > 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > connection information compatibility fix" (Zuul) > > This gives us very high confidence that the results of the tests run > against master also apply to stable/wallaby at f86944b. > > Thank you for considering this request. > > (I've included Nova here because the bug occurs when the > configuration option that enables multipath connections on a compute > is changed while volumes are attached, so if this RFE is approved, > nova might want to raise the minimum version of os-brick in wallaby > to 4.4.0.) > Thanks for the heads up. After the new os-brick version is released I will prepare a version bump patch in nova on master and stable/wallaby. This also means that nova will release an RC2. 
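(As a sketch of what such a bump looks like, assuming the released version ends up being 4.3.1 as discussed above, it is essentially a one-line change in nova's requirements files:)

    # nova requirements.txt (illustrative)
    os-brick>=4.3.1
    # plus the matching pin in lower-constraints.txt, if that file is still present on the branch:
    os-brick==4.3.1
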
Cheers, gibi > > [0] https://launchpad.net/bugs/1921381 > [1] https://review.opendev.org/c/openstack/os-brick/+/782992 > [2] https://review.opendev.org/c/openstack/os-brick/+/783207 > [3] https://review.opendev.org/c/openstack/releases/+/783641 > [4] > https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb > [5] > https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 > [6] > https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa From hberaud at redhat.com Mon Mar 29 14:25:30 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 29 Mar 2021 16:25:30 +0200 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> Message-ID: Ok thanks. That works for us. Le lun. 29 mars 2021 à 16:11, Balazs Gibizer a écrit : > > > On Mon, Mar 29, 2021 at 08:50, Brian Rosmaita > wrote: > > Hello Requirements Team, > > > > The Cinder team recently became aware of a potential data-loss bug > > [0] that has been fixed in os-brick master [1] and backported to > > os-brick stable/wallaby [2]. We've proposed a release of os-brick > > 4.4.0 from stable/wallaby [3] and are petitioning for an RFE to > > include 4.4.0 in the wallaby release. > > > > We have three jobs running tempest with os-brick source in master > > that have passed with [1]: os-brick-src-devstack-plugin-ceph [4], > > os-brick-src-tempest-lvm-lio-barbican [5],and > > os-brick-src-tempest-nfs [6]. The difference between os-brick master > > (at the time the tests were run) and stable/wallaby since the 4.3.0 > > tag is as follows: > > > > master: > > d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg change > > (Gorka Eguileor) > > 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to > > modifying offset" (Zuul) > > 28545c7 4 months ago RBD: catch read exceptions prior to modifying > > offset (Jon Bernard) > > 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" (Zuul) > > 7cfdb76 6 weeks ago Dropping explicit unicode literal (tushargite96) > > 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack Release > > Bot) > > ab57392 3 weeks ago Update master for stable/wallaby (OpenStack > > Release Bot) > > 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > > connection information compatibility fix" (Zuul) > > > > stable/wallaby: > > f86944b 3 days ago Add release note prelude for os-brick 4.4.0 > > (Brian Rosmaita) > > c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg change > > (Gorka Eguileor) > > 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for stable/wallaby > > (OpenStack Release Bot) > > f3f93dc 3 weeks ago Update .gitreview for stable/wallaby (OpenStack > > Release Bot) > > 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > > connection information compatibility fix" (Zuul) > > > > This gives us very high confidence that the results of the tests run > > against master also apply to stable/wallaby at f86944b. > > > > Thank you for considering this request. > > > > (I've included Nova here because the bug occurs when the > > configuration option that enables multipath connections on a compute > > is changed while volumes are attached, so if this RFE is approved, > > nova might want to raise the minimum version of os-brick in wallaby > > to 4.4.0.) > > > > Thanks for the heads up. After the new os-brick version is released I > will prepare a version bump patch in nova on master and stable/wallaby. 
> This also means that nova will release an RC2. > > Cheers, > gibi > > > > > [0] https://launchpad.net/bugs/1921381 > > [1] https://review.opendev.org/c/openstack/os-brick/+/782992 > > [2] https://review.opendev.org/c/openstack/os-brick/+/783207 > > [3] https://review.opendev.org/c/openstack/releases/+/783641 > > [4] > > > https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb > > [5] > > > https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 > > [6] > > > https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Mar 29 14:26:31 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 29 Mar 2021 16:26:31 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> Message-ID: <20210329142631.ojzui3gwcysdwgmt@p1.localdomain> Hi, On Mon, Mar 29, 2021 at 08:53:58AM -0500, Ghanshyam Mann wrote: > ---- On Mon, 29 Mar 2021 06:14:06 -0500 Akihiro Motoki wrote ---- > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski wrote: > > > > > > Hi, > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > > Hello, > > > > > > > > The main question is, does the previous Victoria version [1] will be > > > > compatible with the latest neutron changes and with the latest engine > > > > facade introduced during Wallaby? > > > > > > It won't be compatible. Networking-midonet from Victoria will not work properly > > > with Neutron Wallaby. > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so we shouldn't release > > > > a new version of networking-midonet, because the project code won't be > > > > compatible with the rest of our projects (AFAIK neutron), unless, the > > > > previous version will not compatible either, and, unless, not releasing a > > > > Wallaby version leave the project branch uncut and so leave the > > > > corresponding series unmaintainable, and so unfixable a posteriori. > > > > > > > > If we do not release a new version then we will use a previous version of > > > > networking-midonet. This version will be the last Victoria version [1]. > > > > > > > > I suppose that this version (the victoria version) isn't compatible with > > > > the new facade engine either, isn't it? > > > > > > Correct. 
It's not compatible. > > > > > > > > > > > So release or not release a new version won't solve the facade engine > > > > problem, isn't? > > > > > > Yes. > > > > > > > > > > > You said that neutron evolved and networking-midonet didn't, hence even if > > > > we release networking-midonet in the current state it will fail too, isn't > > > > it? > > > > > > Also yes :) > > > > > > > > > > > However, releasing a new version and branching on it can give you the > > > > needed maintenance window to allow you to fix the issue later, when your > > > > gates will be fixed and then patches backported. git tags are cheap. > > > > > > > > We should notice that since Victoria some patches have been merged in > > > > Wallaby so even if they aren't ground breaking changes they are changes > > > > that it is worth to release. > > > > > > > > From a release point of view I think it's worth it to release a new version > > > > and to cut Wallaby. We are close to the it's deadline. That will land the > > > > available delta between Victoria and Wallaby. That will allow to fix the > > > > engine facade by opening a maintenance window. If the project is still > > > > lacking maintainers in a few weeks / months, this will allow a more smooth > > > > deprecation of this one. > > > > > > > > Thoughts? > > > > > > Based on Your feedback I agree that we should release now what we have. Even if > > > it's broken we can then fix it and backport fixes to stable/wallaby branch. > > > > > > @Akihiro: are You ok with that too? > > > > I was writing another reply and did not notice this mail. > > While I still have a doubt on releasing the broken code (which we are > > not sure can be fixed soon or not), > > I am okay with either decision. > > Yeah, releasing broken code and especially where we do not if there will be > maintainer to fix it or not seems risky for me too. > > One option is to deprecate it for wallaby which means follow the deprecation steps > mentioned in project-team-guide[1]. If maintainers show up then it can be un-deprecated. > With that, we will not have any compatible wallaby version which I think is a better > choice than releasing the broken code. We were asking about that some time ago already and then some new maintainers stepped in. But as now there is that problem again with networking-midonet I'm fine to deprecate it (or ask about it again at least). But isn't it too late in the cycle now? Last time when we were doing that it was before milestone-2 IIRC. Now we are almost at the end of the cycle. Should we do it still now? If yes, how much time do we really have to e.g. ask for some new maintainers? > > Releasing the broken code now with the hope of someone will come up and fix it with > backport makes me a little uncomfortable and if it did not get fix then we will live > with broken release forever. > > > [1]https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > > -gmann > > > > > > > > > > > > > > [1] > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski a > > > > écrit : > > > > > > > > > Hi, > > > > > > > > > > We have opened release patch for networking-midonet [1] but our concern > > > > > about > > > > > that project is that its gate is completly broken since some time thus we > > > > > don't really know if the project is still working and valid to be released. 
> > > > > In Wallaby cycle Neutron for example finished transition to the engine > > > > > facade, > > > > > and patch to adjust that in networking-midonet is still opened [2] (and > > > > > red as > > > > > there were some unrelated issues with most of the jobs there). > > > > > > > > > > In the past we had discussion about networking-midonet project and it's > > > > > status > > > > > as the official Neutron stadium project. Then some new folks stepped in to > > > > > maintain it but now it seems a bit like (again) it lacks of maintainers. > > > > > I know that it is very late in the cycle now so my question to the TC and > > > > > release teams is: should we release stable/wallaby with its current state, > > > > > even if it's broken or should we maybe don't release it at all until its > > > > > gate > > > > > will be up and running? > > > > > > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > > > [2] https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > -- > > > > > Slawek Kaplonski > > > > > Principal Software Engineer > > > > > Red Hat > > > > > > > > > > > > > > > > -- > > > > Hervé Beraud > > > > Senior Software Engineer at Red Hat > > > > irc: hberaud > > > > https://github.com/4383/ > > > > https://twitter.com/4383hberaud > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > v6rDpkeNksZ9fFSyoY2o > > > > =ECSj > > > > -----END PGP SIGNATURE----- > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From skaplons at redhat.com Mon Mar 29 14:28:33 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 29 Mar 2021 16:28:33 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> Message-ID: <20210329142833.rilgaemupytlljce@p1.localdomain> Hi, On Mon, Mar 29, 2021 at 11:31:38AM +0000, Neil Jerram wrote: > Out of interest - for networking-calico - what changes are needed to adapt > to the new engine facade? Basically in most cases it is simply do changes like e.g. are done in [1] to use engine facade api to make db transactions. 
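(To make that concrete, here is a minimal sketch of the usual shape of such a change, assuming neutron-lib's enginefacade helpers; the function and model class are made up for illustration and this is not the actual midonet patch.)

    from neutron_lib.db import api as db_api

    def create_gateway_device(context, values):
        # old pattern (pre engine facade):
        #     with context.session.begin(subtransactions=True):
        #         ...
        # new pattern (engine facade writer context):
        with db_api.CONTEXT_WRITER.using(context):
            device = GatewayDevice(**values)  # made-up model class
            context.session.add(device)
        return device

Read-only code paths use db_api.CONTEXT_READER.using(context) in the same way.
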
Then You should run Your tests, see what will be broken and fix it :) [1] https://review.opendev.org/c/openstack/networking-midonet/+/770797/3/midonet/neutron/db/gateway_device.py > > > On Mon, Mar 29, 2021 at 12:14 PM Akihiro Motoki wrote: > > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski > > wrote: > > > > > > Hi, > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > > Hello, > > > > > > > > The main question is, does the previous Victoria version [1] will be > > > > compatible with the latest neutron changes and with the latest engine > > > > facade introduced during Wallaby? > > > > > > It won't be compatible. Networking-midonet from Victoria will not work > > properly > > > with Neutron Wallaby. > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so we shouldn't > > release > > > > a new version of networking-midonet, because the project code won't be > > > > compatible with the rest of our projects (AFAIK neutron), unless, the > > > > previous version will not compatible either, and, unless, not > > releasing a > > > > Wallaby version leave the project branch uncut and so leave the > > > > corresponding series unmaintainable, and so unfixable a posteriori. > > > > > > > > If we do not release a new version then we will use a previous version > > of > > > > networking-midonet. This version will be the last Victoria version [1]. > > > > > > > > I suppose that this version (the victoria version) isn't compatible > > with > > > > the new facade engine either, isn't it? > > > > > > Correct. It's not compatible. > > > > > > > > > > > So release or not release a new version won't solve the facade engine > > > > problem, isn't? > > > > > > Yes. > > > > > > > > > > > You said that neutron evolved and networking-midonet didn't, hence > > even if > > > > we release networking-midonet in the current state it will fail too, > > isn't > > > > it? > > > > > > Also yes :) > > > > > > > > > > > However, releasing a new version and branching on it can give you the > > > > needed maintenance window to allow you to fix the issue later, when > > your > > > > gates will be fixed and then patches backported. git tags are cheap. > > > > > > > > We should notice that since Victoria some patches have been merged in > > > > Wallaby so even if they aren't ground breaking changes they are changes > > > > that it is worth to release. > > > > > > > > From a release point of view I think it's worth it to release a new > > version > > > > and to cut Wallaby. We are close to the it's deadline. That will land > > the > > > > available delta between Victoria and Wallaby. That will allow to fix > > the > > > > engine facade by opening a maintenance window. If the project is still > > > > lacking maintainers in a few weeks / months, this will allow a more > > smooth > > > > deprecation of this one. > > > > > > > > Thoughts? > > > > > > Based on Your feedback I agree that we should release now what we have. > > Even if > > > it's broken we can then fix it and backport fixes to stable/wallaby > > branch. > > > > > > @Akihiro: are You ok with that too? > > > > I was writing another reply and did not notice this mail. > > While I still have a doubt on releasing the broken code (which we are > > not sure can be fixed soon or not), > > I am okay with either decision. > > > > > > > > > > > > > [1] > > > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > Le lun. 
29 mars 2021 à 10:32, Slawek Kaplonski a > > > > écrit : > > > > > > > > > Hi, > > > > > > > > > > We have opened release patch for networking-midonet [1] but our > > concern > > > > > about > > > > > that project is that its gate is completly broken since some time > > thus we > > > > > don't really know if the project is still working and valid to be > > released. > > > > > In Wallaby cycle Neutron for example finished transition to the > > engine > > > > > facade, > > > > > and patch to adjust that in networking-midonet is still opened [2] > > (and > > > > > red as > > > > > there were some unrelated issues with most of the jobs there). > > > > > > > > > > In the past we had discussion about networking-midonet project and > > it's > > > > > status > > > > > as the official Neutron stadium project. Then some new folks stepped > > in to > > > > > maintain it but now it seems a bit like (again) it lacks of > > maintainers. > > > > > I know that it is very late in the cycle now so my question to the > > TC and > > > > > release teams is: should we release stable/wallaby with its current > > state, > > > > > even if it's broken or should we maybe don't release it at all until > > its > > > > > gate > > > > > will be up and running? > > > > > > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > > > [2] > > https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > -- > > > > > Slawek Kaplonski > > > > > Principal Software Engineer > > > > > Red Hat > > > > > > > > > > > > > > > > -- > > > > Hervé Beraud > > > > Senior Software Engineer at Red Hat > > > > irc: hberaud > > > > https://github.com/4383/ > > > > https://twitter.com/4383hberaud > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > v6rDpkeNksZ9fFSyoY2o > > > > =ECSj > > > > -----END PGP SIGNATURE----- > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From bkslash at poczta.onet.pl Mon Mar 29 14:35:52 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Mon, 29 Mar 2021 16:35:52 +0200 Subject: [kolla-ansible][horizon] policy.yaml/json files Message-ID: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> Hi, Im not quite clear about policy.yaml/json files in kolla-ansible. Let assume, that I need to allow one of project users to add other users to the project. So I create „project_admin” role and assign it to this user. Then I found /etc/kolla/keystone/policy.json.test file, which I use as template. 
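To make the questions below concrete, what ends up in /etc/kolla/config/keystone/policy.json
(or the policy.yaml equivalent) is basically a one-rule override file along these lines --
the default rule string comes from that template file, and only the trailing
"or role:project_admin" part is added:

    {
        "identity:create_credential": "(role:admin and system_scope:all) or role:project_admin"
    }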
There is rule „identity:create_credential” : „(role:admin and system_scope:all)” so I add „or role:project_admin” and put file in /etc/kolla/config/keystone/ and reconfigure kolla. And now few questions: 1. policy.json (or policy.yaml) always overwrite all default policies? I mean if I only add one rule to this file then other rules will „disappear” or will have default values? Is there any way to only overwrite some default rules and leave rest with defaults? Like with .conf files 2. what about Horizon and visibility of options? In mentioned case putting the same policy.json file in /etc/kolla/config/keystone/ and /etc/kolla/config/horizon/ should „unblock” Add User button for user with project_admin role? Or how to achieve it? 3. does Horizon need the duplicated policy.json files from other services in it’s configuration folder or is it enough to write policy.json for services I want to change? 4. when I assign admin role to a user with projectID (openstack role add —project PROJECT_ID —user SOME_USER admin) this user sees in Horizon everything systemwide, not only inside this project… Which rules should be created to allow him to see only users and resources which belongs to this project? Best regards Adam From hberaud at redhat.com Mon Mar 29 14:45:58 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 29 Mar 2021 16:45:58 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <20210329142833.rilgaemupytlljce@p1.localdomain> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <20210329142833.rilgaemupytlljce@p1.localdomain> Message-ID: Normally at this point we avoid discussing abandoning deliverables so close to the deadline and that aren't already abandoned from a governance point of view. However in this case it seems to be the more sane approach and the more consensual. If the neutron team and the TC agree with that, then we can simply remove the deliverable for Wallaby on the release repo. >From a release point of view I don't think that it is too late to abandon this deliverable as it follows the `cycle-with-rc` model and it wasn't ever released during Wallaby for now. However we are under pressure to cut all stable/wallaby branches in order to be able to branch requirements / devstack so that should be an immediate decision. Notice that the previously proposed RC1/cut approach [1] avoids the rush and let you have an opportunity to fix things. [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021378.html Le lun. 29 mars 2021 à 16:29, Slawek Kaplonski a écrit : > Hi, > > On Mon, Mar 29, 2021 at 11:31:38AM +0000, Neil Jerram wrote: > > Out of interest - for networking-calico - what changes are needed to > adapt > > to the new engine facade? > > Basically in most cases it is simply do changes like e.g. are done in [1] > to use > engine facade api to make db transactions. 
> Then You should run Your tests, see what will be broken and fix it :) > > [1] > https://review.opendev.org/c/openstack/networking-midonet/+/770797/3/midonet/neutron/db/gateway_device.py > > > > > > > On Mon, Mar 29, 2021 at 12:14 PM Akihiro Motoki > wrote: > > > > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski > > > wrote: > > > > > > > > Hi, > > > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > > > Hello, > > > > > > > > > > The main question is, does the previous Victoria version [1] will > be > > > > > compatible with the latest neutron changes and with the latest > engine > > > > > facade introduced during Wallaby? > > > > > > > > It won't be compatible. Networking-midonet from Victoria will not > work > > > properly > > > > with Neutron Wallaby. > > > > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so we shouldn't > > > release > > > > > a new version of networking-midonet, because the project code > won't be > > > > > compatible with the rest of our projects (AFAIK neutron), unless, > the > > > > > previous version will not compatible either, and, unless, not > > > releasing a > > > > > Wallaby version leave the project branch uncut and so leave the > > > > > corresponding series unmaintainable, and so unfixable a posteriori. > > > > > > > > > > If we do not release a new version then we will use a previous > version > > > of > > > > > networking-midonet. This version will be the last Victoria version > [1]. > > > > > > > > > > I suppose that this version (the victoria version) isn't compatible > > > with > > > > > the new facade engine either, isn't it? > > > > > > > > Correct. It's not compatible. > > > > > > > > > > > > > > So release or not release a new version won't solve the facade > engine > > > > > problem, isn't? > > > > > > > > Yes. > > > > > > > > > > > > > > You said that neutron evolved and networking-midonet didn't, hence > > > even if > > > > > we release networking-midonet in the current state it will fail > too, > > > isn't > > > > > it? > > > > > > > > Also yes :) > > > > > > > > > > > > > > However, releasing a new version and branching on it can give you > the > > > > > needed maintenance window to allow you to fix the issue later, when > > > your > > > > > gates will be fixed and then patches backported. git tags are > cheap. > > > > > > > > > > We should notice that since Victoria some patches have been merged > in > > > > > Wallaby so even if they aren't ground breaking changes they are > changes > > > > > that it is worth to release. > > > > > > > > > > From a release point of view I think it's worth it to release a new > > > version > > > > > and to cut Wallaby. We are close to the it's deadline. That will > land > > > the > > > > > available delta between Victoria and Wallaby. That will allow to > fix > > > the > > > > > engine facade by opening a maintenance window. If the project is > still > > > > > lacking maintainers in a few weeks / months, this will allow a more > > > smooth > > > > > deprecation of this one. > > > > > > > > > > Thoughts? > > > > > > > > Based on Your feedback I agree that we should release now what we > have. > > > Even if > > > > it's broken we can then fix it and backport fixes to stable/wallaby > > > branch. > > > > > > > > @Akihiro: are You ok with that too? > > > > > > I was writing another reply and did not notice this mail. 
> > > While I still have a doubt on releasing the broken code (which we are > > > not sure can be fixed soon or not), > > > I am okay with either decision. > > > > > > > > > > > > > > > > > [1] > > > > > > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski < > skaplons at redhat.com> a > > > > > écrit : > > > > > > > > > > > Hi, > > > > > > > > > > > > We have opened release patch for networking-midonet [1] but our > > > concern > > > > > > about > > > > > > that project is that its gate is completly broken since some time > > > thus we > > > > > > don't really know if the project is still working and valid to be > > > released. > > > > > > In Wallaby cycle Neutron for example finished transition to the > > > engine > > > > > > facade, > > > > > > and patch to adjust that in networking-midonet is still opened > [2] > > > (and > > > > > > red as > > > > > > there were some unrelated issues with most of the jobs there). > > > > > > > > > > > > In the past we had discussion about networking-midonet project > and > > > it's > > > > > > status > > > > > > as the official Neutron stadium project. Then some new folks > stepped > > > in to > > > > > > maintain it but now it seems a bit like (again) it lacks of > > > maintainers. > > > > > > I know that it is very late in the cycle now so my question to > the > > > TC and > > > > > > release teams is: should we release stable/wallaby with its > current > > > state, > > > > > > even if it's broken or should we maybe don't release it at all > until > > > its > > > > > > gate > > > > > > will be up and running? > > > > > > > > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > > > > [2] > > > https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > -- > > > > > Hervé Beraud > > > > > Senior Software Engineer at Red Hat > > > > > irc: hberaud > > > > > https://github.com/4383/ > > > > > https://twitter.com/4383hberaud > > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > > v6rDpkeNksZ9fFSyoY2o > > > > > =ECSj > > > > > -----END PGP SIGNATURE----- > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- 
wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Mar 29 14:51:59 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 29 Mar 2021 09:51:59 -0500 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <20210329142631.ojzui3gwcysdwgmt@p1.localdomain> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> <20210329142631.ojzui3gwcysdwgmt@p1.localdomain> Message-ID: <1787e785a14.caa7688d1232290.6209945891760877537@ghanshyammann.com> ---- On Mon, 29 Mar 2021 09:26:31 -0500 Slawek Kaplonski wrote ---- > Hi, > > On Mon, Mar 29, 2021 at 08:53:58AM -0500, Ghanshyam Mann wrote: > > ---- On Mon, 29 Mar 2021 06:14:06 -0500 Akihiro Motoki wrote ---- > > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski wrote: > > > > > > > > Hi, > > > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > > > Hello, > > > > > > > > > > The main question is, does the previous Victoria version [1] will be > > > > > compatible with the latest neutron changes and with the latest engine > > > > > facade introduced during Wallaby? > > > > > > > > It won't be compatible. Networking-midonet from Victoria will not work properly > > > > with Neutron Wallaby. > > > > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so we shouldn't release > > > > > a new version of networking-midonet, because the project code won't be > > > > > compatible with the rest of our projects (AFAIK neutron), unless, the > > > > > previous version will not compatible either, and, unless, not releasing a > > > > > Wallaby version leave the project branch uncut and so leave the > > > > > corresponding series unmaintainable, and so unfixable a posteriori. > > > > > > > > > > If we do not release a new version then we will use a previous version of > > > > > networking-midonet. This version will be the last Victoria version [1]. > > > > > > > > > > I suppose that this version (the victoria version) isn't compatible with > > > > > the new facade engine either, isn't it? > > > > > > > > Correct. It's not compatible. > > > > > > > > > > > > > > So release or not release a new version won't solve the facade engine > > > > > problem, isn't? > > > > > > > > Yes. > > > > > > > > > > > > > > You said that neutron evolved and networking-midonet didn't, hence even if > > > > > we release networking-midonet in the current state it will fail too, isn't > > > > > it? 
> > > > > > > > Also yes :) > > > > > > > > > > > > > > However, releasing a new version and branching on it can give you the > > > > > needed maintenance window to allow you to fix the issue later, when your > > > > > gates will be fixed and then patches backported. git tags are cheap. > > > > > > > > > > We should notice that since Victoria some patches have been merged in > > > > > Wallaby so even if they aren't ground breaking changes they are changes > > > > > that it is worth to release. > > > > > > > > > > From a release point of view I think it's worth it to release a new version > > > > > and to cut Wallaby. We are close to the it's deadline. That will land the > > > > > available delta between Victoria and Wallaby. That will allow to fix the > > > > > engine facade by opening a maintenance window. If the project is still > > > > > lacking maintainers in a few weeks / months, this will allow a more smooth > > > > > deprecation of this one. > > > > > > > > > > Thoughts? > > > > > > > > Based on Your feedback I agree that we should release now what we have. Even if > > > > it's broken we can then fix it and backport fixes to stable/wallaby branch. > > > > > > > > @Akihiro: are You ok with that too? > > > > > > I was writing another reply and did not notice this mail. > > > While I still have a doubt on releasing the broken code (which we are > > > not sure can be fixed soon or not), > > > I am okay with either decision. > > > > Yeah, releasing broken code and especially where we do not if there will be > > maintainer to fix it or not seems risky for me too. > > > > One option is to deprecate it for wallaby which means follow the deprecation steps > > mentioned in project-team-guide[1]. If maintainers show up then it can be un-deprecated. > > With that, we will not have any compatible wallaby version which I think is a better > > choice than releasing the broken code. > > We were asking about that some time ago already and then some new maintainers > stepped in. But as now there is that problem again with networking-midonet I'm > fine to deprecate it (or ask about it again at least). > But isn't it too late in the cycle now? Last time when we were doing that it was > before milestone-2 IIRC. Now we are almost at the end of the cycle. Should we do > it still now? As there is nothing released for Wallaby[1], we can still do this. As per the process, TC can merge the required patches on that repo if no core is available to +A. > If yes, how much time do we really have to e.g. ask for some new maintainers? > I will say asap :) but I think the release team can set the deadline as they have to take care of release things. [1] https://opendev.org/openstack/releases/src/commit/30492c964f5d7eb85d806086c5b1c656b5c9e9f9/deliverables/wallaby/networking-midonet.yaml -gmann > > > > Releasing the broken code now with the hope of someone will come up and fix it with > > backport makes me a little uncomfortable and if it did not get fix then we will live > > with broken release forever. > > > > > > [1]https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > > > > -gmann > > > > > > > > > > > > > > > > > > > [1] > > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > > > Le lun. 
29 mars 2021 à 10:32, Slawek Kaplonski a > > > > > écrit : > > > > > > > > > > > Hi, > > > > > > > > > > > > We have opened release patch for networking-midonet [1] but our concern > > > > > > about > > > > > > that project is that its gate is completly broken since some time thus we > > > > > > don't really know if the project is still working and valid to be released. > > > > > > In Wallaby cycle Neutron for example finished transition to the engine > > > > > > facade, > > > > > > and patch to adjust that in networking-midonet is still opened [2] (and > > > > > > red as > > > > > > there were some unrelated issues with most of the jobs there). > > > > > > > > > > > > In the past we had discussion about networking-midonet project and it's > > > > > > status > > > > > > as the official Neutron stadium project. Then some new folks stepped in to > > > > > > maintain it but now it seems a bit like (again) it lacks of maintainers. > > > > > > I know that it is very late in the cycle now so my question to the TC and > > > > > > release teams is: should we release stable/wallaby with its current state, > > > > > > even if it's broken or should we maybe don't release it at all until its > > > > > > gate > > > > > > will be up and running? > > > > > > > > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > > > > [2] https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > -- > > > > > Hervé Beraud > > > > > Senior Software Engineer at Red Hat > > > > > irc: hberaud > > > > > https://github.com/4383/ > > > > > https://twitter.com/4383hberaud > > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > > v6rDpkeNksZ9fFSyoY2o > > > > > =ECSj > > > > > -----END PGP SIGNATURE----- > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > > > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > From hberaud at redhat.com Mon Mar 29 15:07:15 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 29 Mar 2021 17:07:15 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <1787e785a14.caa7688d1232290.6209945891760877537@ghanshyammann.com> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> <20210329142631.ojzui3gwcysdwgmt@p1.localdomain> <1787e785a14.caa7688d1232290.6209945891760877537@ghanshyammann.com> Message-ID: If we decide to follow the depreciation process we can wait until tomorrow or Wednesday 
early in the morning. Le lun. 29 mars 2021 à 16:52, Ghanshyam Mann a écrit : > ---- On Mon, 29 Mar 2021 09:26:31 -0500 Slawek Kaplonski < > skaplons at redhat.com> wrote ---- > > Hi, > > > > On Mon, Mar 29, 2021 at 08:53:58AM -0500, Ghanshyam Mann wrote: > > > ---- On Mon, 29 Mar 2021 06:14:06 -0500 Akihiro Motoki < > amotoki at gmail.com> wrote ---- > > > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski < > skaplons at redhat.com> wrote: > > > > > > > > > > Hi, > > > > > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > > > > Hello, > > > > > > > > > > > > The main question is, does the previous Victoria version [1] > will be > > > > > > compatible with the latest neutron changes and with the latest > engine > > > > > > facade introduced during Wallaby? > > > > > > > > > > It won't be compatible. Networking-midonet from Victoria will > not work properly > > > > > with Neutron Wallaby. > > > > > > > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so we > shouldn't release > > > > > > a new version of networking-midonet, because the project code > won't be > > > > > > compatible with the rest of our projects (AFAIK neutron), > unless, the > > > > > > previous version will not compatible either, and, unless, not > releasing a > > > > > > Wallaby version leave the project branch uncut and so leave the > > > > > > corresponding series unmaintainable, and so unfixable a > posteriori. > > > > > > > > > > > > If we do not release a new version then we will use a previous > version of > > > > > > networking-midonet. This version will be the last Victoria > version [1]. > > > > > > > > > > > > I suppose that this version (the victoria version) isn't > compatible with > > > > > > the new facade engine either, isn't it? > > > > > > > > > > Correct. It's not compatible. > > > > > > > > > > > > > > > > > So release or not release a new version won't solve the facade > engine > > > > > > problem, isn't? > > > > > > > > > > Yes. > > > > > > > > > > > > > > > > > You said that neutron evolved and networking-midonet didn't, > hence even if > > > > > > we release networking-midonet in the current state it will > fail too, isn't > > > > > > it? > > > > > > > > > > Also yes :) > > > > > > > > > > > > > > > > > However, releasing a new version and branching on it can give > you the > > > > > > needed maintenance window to allow you to fix the issue later, > when your > > > > > > gates will be fixed and then patches backported. git tags are > cheap. > > > > > > > > > > > > We should notice that since Victoria some patches have been > merged in > > > > > > Wallaby so even if they aren't ground breaking changes they > are changes > > > > > > that it is worth to release. > > > > > > > > > > > > From a release point of view I think it's worth it to release > a new version > > > > > > and to cut Wallaby. We are close to the it's deadline. That > will land the > > > > > > available delta between Victoria and Wallaby. That will allow > to fix the > > > > > > engine facade by opening a maintenance window. If the project > is still > > > > > > lacking maintainers in a few weeks / months, this will allow a > more smooth > > > > > > deprecation of this one. > > > > > > > > > > > > Thoughts? > > > > > > > > > > Based on Your feedback I agree that we should release now what > we have. Even if > > > > > it's broken we can then fix it and backport fixes to > stable/wallaby branch. > > > > > > > > > > @Akihiro: are You ok with that too? 
> > > > > > > > I was writing another reply and did not notice this mail. > > > > While I still have a doubt on releasing the broken code (which we > are > > > > not sure can be fixed soon or not), > > > > I am okay with either decision. > > > > > > Yeah, releasing broken code and especially where we do not if there > will be > > > maintainer to fix it or not seems risky for me too. > > > > > > One option is to deprecate it for wallaby which means follow the > deprecation steps > > > mentioned in project-team-guide[1]. If maintainers show up then it > can be un-deprecated. > > > With that, we will not have any compatible wallaby version which I > think is a better > > > choice than releasing the broken code. > > > > We were asking about that some time ago already and then some new > maintainers > > stepped in. But as now there is that problem again with > networking-midonet I'm > > fine to deprecate it (or ask about it again at least). > > But isn't it too late in the cycle now? Last time when we were doing > that it was > > before milestone-2 IIRC. Now we are almost at the end of the cycle. > Should we do > > it still now? > > As there is nothing released for Wallaby[1], we can still do this. As per > the process, TC > can merge the required patches on that repo if no core is available to +A. > > > If yes, how much time do we really have to e.g. ask for some new > maintainers? > > > > I will say asap :) but I think the release team can set the deadline as > they have to take care > of release things. > > [1] > https://opendev.org/openstack/releases/src/commit/30492c964f5d7eb85d806086c5b1c656b5c9e9f9/deliverables/wallaby/networking-midonet.yaml > > > -gmann > > > > > > > Releasing the broken code now with the hope of someone will come up > and fix it with > > > backport makes me a little uncomfortable and if it did not get fix > then we will live > > > with broken release forever. > > > > > > > > > [1] > https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > [1] > > > > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski < > skaplons at redhat.com> a > > > > > > écrit : > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > We have opened release patch for networking-midonet [1] but > our concern > > > > > > > about > > > > > > > that project is that its gate is completly broken since some > time thus we > > > > > > > don't really know if the project is still working and valid > to be released. > > > > > > > In Wallaby cycle Neutron for example finished transition to > the engine > > > > > > > facade, > > > > > > > and patch to adjust that in networking-midonet is still > opened [2] (and > > > > > > > red as > > > > > > > there were some unrelated issues with most of the jobs > there). > > > > > > > > > > > > > > In the past we had discussion about networking-midonet > project and it's > > > > > > > status > > > > > > > as the official Neutron stadium project. Then some new folks > stepped in to > > > > > > > maintain it but now it seems a bit like (again) it lacks of > maintainers. 
> > > > > > > I know that it is very late in the cycle now so my question > to the TC and > > > > > > > release teams is: should we release stable/wallaby with its > current state, > > > > > > > even if it's broken or should we maybe don't release it at > all until its > > > > > > > gate > > > > > > > will be up and running? > > > > > > > > > > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > > > > > [2] > https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > > > > > -- > > > > > > > Slawek Kaplonski > > > > > > > Principal Software Engineer > > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > Hervé Beraud > > > > > > Senior Software Engineer at Red Hat > > > > > > irc: hberaud > > > > > > https://github.com/4383/ > > > > > > https://twitter.com/4383hberaud > > > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > > > v6rDpkeNksZ9fFSyoY2o > > > > > > =ECSj > > > > > > -----END PGP SIGNATURE----- > > > > > > > > > > -- > > > > > Slawek Kaplonski > > > > > Principal Software Engineer > > > > > Red Hat > > > > > > > > > > > > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Mon Mar 29 15:14:15 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 29 Mar 2021 17:14:15 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> <20210329142631.ojzui3gwcysdwgmt@p1.localdomain> <1787e785a14.caa7688d1232290.6209945891760877537@ghanshyammann.com> Message-ID: <20210329151415.a7u7alxqkypjyiau@p1.localdomain> Hi, On Mon, Mar 29, 2021 at 05:07:15PM +0200, Herve Beraud wrote: > If we decide to follow the depreciation process we can wait until tomorrow > or Wednesday early in the morning. I also pinged Sam Morrison and YAMAMOTO Takashi about that today. If I will not have any reply from them until tomorrow morning CEST time, I will propose patches to deprecate it in this cycle. > > Le lun. 29 mars 2021 à 16:52, Ghanshyam Mann a > écrit : > > > ---- On Mon, 29 Mar 2021 09:26:31 -0500 Slawek Kaplonski < > > skaplons at redhat.com> wrote ---- > > > Hi, > > > > > > On Mon, Mar 29, 2021 at 08:53:58AM -0500, Ghanshyam Mann wrote: > > > > ---- On Mon, 29 Mar 2021 06:14:06 -0500 Akihiro Motoki < > > amotoki at gmail.com> wrote ---- > > > > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski < > > skaplons at redhat.com> wrote: > > > > > > > > > > > > Hi, > > > > > > > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > > > > > Hello, > > > > > > > > > > > > > > The main question is, does the previous Victoria version [1] > > will be > > > > > > > compatible with the latest neutron changes and with the latest > > engine > > > > > > > facade introduced during Wallaby? > > > > > > > > > > > > It won't be compatible. Networking-midonet from Victoria will > > not work properly > > > > > > with Neutron Wallaby. > > > > > > > > > > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so we > > shouldn't release > > > > > > > a new version of networking-midonet, because the project code > > won't be > > > > > > > compatible with the rest of our projects (AFAIK neutron), > > unless, the > > > > > > > previous version will not compatible either, and, unless, not > > releasing a > > > > > > > Wallaby version leave the project branch uncut and so leave the > > > > > > > corresponding series unmaintainable, and so unfixable a > > posteriori. > > > > > > > > > > > > > > If we do not release a new version then we will use a previous > > version of > > > > > > > networking-midonet. This version will be the last Victoria > > version [1]. > > > > > > > > > > > > > > I suppose that this version (the victoria version) isn't > > compatible with > > > > > > > the new facade engine either, isn't it? > > > > > > > > > > > > Correct. It's not compatible. > > > > > > > > > > > > > > > > > > > > So release or not release a new version won't solve the facade > > engine > > > > > > > problem, isn't? > > > > > > > > > > > > Yes. > > > > > > > > > > > > > > > > > > > > You said that neutron evolved and networking-midonet didn't, > > hence even if > > > > > > > we release networking-midonet in the current state it will > > fail too, isn't > > > > > > > it? 
> > > > > > > > > > > > Also yes :) > > > > > > > > > > > > > > > > > > > > However, releasing a new version and branching on it can give > > you the > > > > > > > needed maintenance window to allow you to fix the issue later, > > when your > > > > > > > gates will be fixed and then patches backported. git tags are > > cheap. > > > > > > > > > > > > > > We should notice that since Victoria some patches have been > > merged in > > > > > > > Wallaby so even if they aren't ground breaking changes they > > are changes > > > > > > > that it is worth to release. > > > > > > > > > > > > > > From a release point of view I think it's worth it to release > > a new version > > > > > > > and to cut Wallaby. We are close to the it's deadline. That > > will land the > > > > > > > available delta between Victoria and Wallaby. That will allow > > to fix the > > > > > > > engine facade by opening a maintenance window. If the project > > is still > > > > > > > lacking maintainers in a few weeks / months, this will allow a > > more smooth > > > > > > > deprecation of this one. > > > > > > > > > > > > > > Thoughts? > > > > > > > > > > > > Based on Your feedback I agree that we should release now what > > we have. Even if > > > > > > it's broken we can then fix it and backport fixes to > > stable/wallaby branch. > > > > > > > > > > > > @Akihiro: are You ok with that too? > > > > > > > > > > I was writing another reply and did not notice this mail. > > > > > While I still have a doubt on releasing the broken code (which we > > are > > > > > not sure can be fixed soon or not), > > > > > I am okay with either decision. > > > > > > > > Yeah, releasing broken code and especially where we do not if there > > will be > > > > maintainer to fix it or not seems risky for me too. > > > > > > > > One option is to deprecate it for wallaby which means follow the > > deprecation steps > > > > mentioned in project-team-guide[1]. If maintainers show up then it > > can be un-deprecated. > > > > With that, we will not have any compatible wallaby version which I > > think is a better > > > > choice than releasing the broken code. > > > > > > We were asking about that some time ago already and then some new > > maintainers > > > stepped in. But as now there is that problem again with > > networking-midonet I'm > > > fine to deprecate it (or ask about it again at least). > > > But isn't it too late in the cycle now? Last time when we were doing > > that it was > > > before milestone-2 IIRC. Now we are almost at the end of the cycle. > > Should we do > > > it still now? > > > > As there is nothing released for Wallaby[1], we can still do this. As per > > the process, TC > > can merge the required patches on that repo if no core is available to +A. > > > > > If yes, how much time do we really have to e.g. ask for some new > > maintainers? > > > > > > > I will say asap :) but I think the release team can set the deadline as > > they have to take care > > of release things. > > > > [1] > > https://opendev.org/openstack/releases/src/commit/30492c964f5d7eb85d806086c5b1c656b5c9e9f9/deliverables/wallaby/networking-midonet.yaml > > > > > > -gmann > > > > > > > > > > Releasing the broken code now with the hope of someone will come up > > and fix it with > > > > backport makes me a little uncomfortable and if it did not get fix > > then we will live > > > > with broken release forever. 
> > > > > > > > > > > > [1] > > https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [1] > > > > > > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski < > > skaplons at redhat.com> a > > > > > > > écrit : > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > We have opened release patch for networking-midonet [1] but > > our concern > > > > > > > > about > > > > > > > > that project is that its gate is completly broken since some > > time thus we > > > > > > > > don't really know if the project is still working and valid > > to be released. > > > > > > > > In Wallaby cycle Neutron for example finished transition to > > the engine > > > > > > > > facade, > > > > > > > > and patch to adjust that in networking-midonet is still > > opened [2] (and > > > > > > > > red as > > > > > > > > there were some unrelated issues with most of the jobs > > there). > > > > > > > > > > > > > > > > In the past we had discussion about networking-midonet > > project and it's > > > > > > > > status > > > > > > > > as the official Neutron stadium project. Then some new folks > > stepped in to > > > > > > > > maintain it but now it seems a bit like (again) it lacks of > > maintainers. > > > > > > > > I know that it is very late in the cycle now so my question > > to the TC and > > > > > > > > release teams is: should we release stable/wallaby with its > > current state, > > > > > > > > even if it's broken or should we maybe don't release it at > > all until its > > > > > > > > gate > > > > > > > > will be up and running? 
> > > > > > > > > > > > > > > > [1] https://review.opendev.org/c/openstack/releases/+/781713 > > > > > > > > [2] > > https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > > > > > > > -- > > > > > > > > Slawek Kaplonski > > > > > > > > Principal Software Engineer > > > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > Hervé Beraud > > > > > > > Senior Software Engineer at Red Hat > > > > > > > irc: hberaud > > > > > > > https://github.com/4383/ > > > > > > > https://twitter.com/4383hberaud > > > > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > > > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > > > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > > > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > > > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > > > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > > > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > > > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > > > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > > > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > > > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > > > > v6rDpkeNksZ9fFSyoY2o > > > > > > > =ECSj > > > > > > > -----END PGP SIGNATURE----- > > > > > > > > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hberaud at redhat.com Mon Mar 29 15:23:25 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 29 Mar 2021 17:23:25 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <20210329151415.a7u7alxqkypjyiau@p1.localdomain> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> <20210329142631.ojzui3gwcysdwgmt@p1.localdomain> <1787e785a14.caa7688d1232290.6209945891760877537@ghanshyammann.com> <20210329151415.a7u7alxqkypjyiau@p1.localdomain> Message-ID: Excellent! Thank you Slavek. Le lun. 29 mars 2021 à 17:15, Slawek Kaplonski a écrit : > Hi, > > On Mon, Mar 29, 2021 at 05:07:15PM +0200, Herve Beraud wrote: > > If we decide to follow the depreciation process we can wait until > tomorrow > > or Wednesday early in the morning. > > I also pinged Sam Morrison and YAMAMOTO Takashi about that today. If I > will not > have any reply from them until tomorrow morning CEST time, I will propose > patches to deprecate it in this cycle. > > > > > Le lun. 29 mars 2021 à 16:52, Ghanshyam Mann a > > écrit : > > > > > ---- On Mon, 29 Mar 2021 09:26:31 -0500 Slawek Kaplonski < > > > skaplons at redhat.com> wrote ---- > > > > Hi, > > > > > > > > On Mon, Mar 29, 2021 at 08:53:58AM -0500, Ghanshyam Mann wrote: > > > > > ---- On Mon, 29 Mar 2021 06:14:06 -0500 Akihiro Motoki < > > > amotoki at gmail.com> wrote ---- > > > > > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski < > > > skaplons at redhat.com> wrote: > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > > > > > > Hello, > > > > > > > > > > > > > > > > The main question is, does the previous Victoria version > [1] > > > will be > > > > > > > > compatible with the latest neutron changes and with the > latest > > > engine > > > > > > > > facade introduced during Wallaby? > > > > > > > > > > > > > > It won't be compatible. Networking-midonet from Victoria will > > > not work properly > > > > > > > with Neutron Wallaby. > > > > > > > > > > > > > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so we > > > shouldn't release > > > > > > > > a new version of networking-midonet, because the project > code > > > won't be > > > > > > > > compatible with the rest of our projects (AFAIK neutron), > > > unless, the > > > > > > > > previous version will not compatible either, and, unless, > not > > > releasing a > > > > > > > > Wallaby version leave the project branch uncut and so > leave the > > > > > > > > corresponding series unmaintainable, and so unfixable a > > > posteriori. > > > > > > > > > > > > > > > > If we do not release a new version then we will use a > previous > > > version of > > > > > > > > networking-midonet. This version will be the last Victoria > > > version [1]. > > > > > > > > > > > > > > > > I suppose that this version (the victoria version) isn't > > > compatible with > > > > > > > > the new facade engine either, isn't it? > > > > > > > > > > > > > > Correct. It's not compatible. > > > > > > > > > > > > > > > > > > > > > > > So release or not release a new version won't solve the > facade > > > engine > > > > > > > > problem, isn't? > > > > > > > > > > > > > > Yes. 
> > > > > > > > > > > > > > > > > > > > > > > You said that neutron evolved and networking-midonet > didn't, > > > hence even if > > > > > > > > we release networking-midonet in the current state it will > > > fail too, isn't > > > > > > > > it? > > > > > > > > > > > > > > Also yes :) > > > > > > > > > > > > > > > > > > > > > > > However, releasing a new version and branching on it can > give > > > you the > > > > > > > > needed maintenance window to allow you to fix the issue > later, > > > when your > > > > > > > > gates will be fixed and then patches backported. git tags > are > > > cheap. > > > > > > > > > > > > > > > > We should notice that since Victoria some patches have been > > > merged in > > > > > > > > Wallaby so even if they aren't ground breaking changes they > > > are changes > > > > > > > > that it is worth to release. > > > > > > > > > > > > > > > > From a release point of view I think it's worth it to > release > > > a new version > > > > > > > > and to cut Wallaby. We are close to the it's deadline. That > > > will land the > > > > > > > > available delta between Victoria and Wallaby. That will > allow > > > to fix the > > > > > > > > engine facade by opening a maintenance window. If the > project > > > is still > > > > > > > > lacking maintainers in a few weeks / months, this will > allow a > > > more smooth > > > > > > > > deprecation of this one. > > > > > > > > > > > > > > > > Thoughts? > > > > > > > > > > > > > > Based on Your feedback I agree that we should release now > what > > > we have. Even if > > > > > > > it's broken we can then fix it and backport fixes to > > > stable/wallaby branch. > > > > > > > > > > > > > > @Akihiro: are You ok with that too? > > > > > > > > > > > > I was writing another reply and did not notice this mail. > > > > > > While I still have a doubt on releasing the broken code (which > we > > > are > > > > > > not sure can be fixed soon or not), > > > > > > I am okay with either decision. > > > > > > > > > > Yeah, releasing broken code and especially where we do not if > there > > > will be > > > > > maintainer to fix it or not seems risky for me too. > > > > > > > > > > One option is to deprecate it for wallaby which means follow the > > > deprecation steps > > > > > mentioned in project-team-guide[1]. If maintainers show up then it > > > can be un-deprecated. > > > > > With that, we will not have any compatible wallaby version which I > > > think is a better > > > > > choice than releasing the broken code. > > > > > > > > We were asking about that some time ago already and then some new > > > maintainers > > > > stepped in. But as now there is that problem again with > > > networking-midonet I'm > > > > fine to deprecate it (or ask about it again at least). > > > > But isn't it too late in the cycle now? Last time when we were doing > > > that it was > > > > before milestone-2 IIRC. Now we are almost at the end of the cycle. > > > Should we do > > > > it still now? > > > > > > As there is nothing released for Wallaby[1], we can still do this. As > per > > > the process, TC > > > can merge the required patches on that repo if no core is available to > +A. > > > > > > > If yes, how much time do we really have to e.g. ask for some new > > > maintainers? > > > > > > > > > > I will say asap :) but I think the release team can set the deadline as > > > they have to take care > > > of release things. 
> > > > > > [1] > > > > https://opendev.org/openstack/releases/src/commit/30492c964f5d7eb85d806086c5b1c656b5c9e9f9/deliverables/wallaby/networking-midonet.yaml > > > > > > > > > -gmann > > > > > > > > > > > > > Releasing the broken code now with the hope of someone will come > up > > > and fix it with > > > > > backport makes me a little uncomfortable and if it did not get fix > > > then we will live > > > > > with broken release forever. > > > > > > > > > > > > > > > [1] > > > > https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [1] > > > > > > > > > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > > > > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski < > > > skaplons at redhat.com> a > > > > > > > > écrit : > > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > > > We have opened release patch for networking-midonet [1] > but > > > our concern > > > > > > > > > about > > > > > > > > > that project is that its gate is completly broken since > some > > > time thus we > > > > > > > > > don't really know if the project is still working and > valid > > > to be released. > > > > > > > > > In Wallaby cycle Neutron for example finished transition > to > > > the engine > > > > > > > > > facade, > > > > > > > > > and patch to adjust that in networking-midonet is still > > > opened [2] (and > > > > > > > > > red as > > > > > > > > > there were some unrelated issues with most of the jobs > > > there). > > > > > > > > > > > > > > > > > > In the past we had discussion about networking-midonet > > > project and it's > > > > > > > > > status > > > > > > > > > as the official Neutron stadium project. Then some new > folks > > > stepped in to > > > > > > > > > maintain it but now it seems a bit like (again) it lacks > of > > > maintainers. > > > > > > > > > I know that it is very late in the cycle now so my > question > > > to the TC and > > > > > > > > > release teams is: should we release stable/wallaby with > its > > > current state, > > > > > > > > > even if it's broken or should we maybe don't release it > at > > > all until its > > > > > > > > > gate > > > > > > > > > will be up and running? 
> > > > > > > > > > > > > > > > > > [1] > https://review.opendev.org/c/openstack/releases/+/781713 > > > > > > > > > [2] > > > https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > > > > > > > > > -- > > > > > > > > > Slawek Kaplonski > > > > > > > > > Principal Software Engineer > > > > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > Hervé Beraud > > > > > > > > Senior Software Engineer at Red Hat > > > > > > > > irc: hberaud > > > > > > > > https://github.com/4383/ > > > > > > > > https://twitter.com/4383hberaud > > > > > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > > > > > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > > > > > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > > > > > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > > > > > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > > > > > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > > > > > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > > > > > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > > > > > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > > > > > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > > > > > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > > > > > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > > > > > v6rDpkeNksZ9fFSyoY2o > > > > > > > > =ECSj > > > > > > > > -----END PGP SIGNATURE----- > > > > > > > > > > > > > > -- > > > > > > > Slawek Kaplonski > > > > > > > Principal Software Engineer > > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 
5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Mon Mar 29 15:34:17 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Mon, 29 Mar 2021 17:34:17 +0200 Subject: [kolla] does someone uses AArch64 images? In-Reply-To: <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> References: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> Message-ID: W dniu 23.03.2021 o 14:36, Thomas Goirand pisze: > On 3/23/21 12:43 PM, Marcin Juszkiewicz wrote: >> AArch64 support in Kolla(-Ansible) project is present since Ocata >> (iirc). For most of that time we supported all three distributions: >> CentOS, Debian and Ubuntu. >> >> # Distribution coverage >> >> Debian is source only as there never was any source of up-to-date >> OpenStack packages (Debian OpenStack team builds only x86-64 packages >> and they do it after release). >> >> # My interest >> >> I take care of Debian/source ones as we use them at Linaro in our >> developer cloud setups. It often means backporting of fixes when we >> upgrade from one previous release to something newer. > > Getting binary support is just a mater of rebuilding packages for arm64. > I once did that for you setting-up a Jenkins machine just for > rebuilding. It's really a shame that you guys aren't following that > road. You'd have all of my support if you did. I personally don't have > access to the necessary hardware for it, and wont use the arm64 repos... The thing is - no one asked for Debian/binary images so far so I did not bothered you. Your team provides packages for already released OpenStack stuff. From Kolla point of view it means that at beginning of a new development cycle we would need to get images for previous release with Debian binary packages built, tested and backported. From balazs.gibizer at est.tech Mon Mar 29 15:40:55 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Mon, 29 Mar 2021 17:40:55 +0200 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> Message-ID: <78MQQQ.FMXMGIRLYEMQ@est.tech> On Mon, Mar 29, 2021 at 16:05, Balazs Gibizer wrote: > > > On Mon, Mar 29, 2021 at 08:50, Brian Rosmaita > wrote: >> Hello Requirements Team, >> >> The Cinder team recently became aware of a potential data-loss bug >> [0] that has been fixed in os-brick master [1] and backported to >> os-brick stable/wallaby [2]. We've proposed a release of os-brick >> 4.4.0 from stable/wallaby [3] and are petitioning for an RFE to >> include 4.4.0 in the wallaby release. >> >> We have three jobs running tempest with os-brick source in master >> that have passed with [1]: os-brick-src-devstack-plugin-ceph [4], >> os-brick-src-tempest-lvm-lio-barbican [5],and >> os-brick-src-tempest-nfs [6]. 
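(For readers not familiar with the *-src job naming: those jobs run tempest against the library checked out from Git instead of against the released package. In devstack terms that is roughly the LIBS_FROM_GIT mechanism; the lines below are only a minimal local.conf sketch of the idea, not the actual Zuul job definition, which lives in the os-brick Zuul configuration:

    [[local|localrc]]
    # install os-brick from its Git checkout rather than from PyPI
    LIBS_FROM_GIT=os-brick

Treat the snippet purely as an illustration of what "os-brick source" means in those job names.)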
The difference between os-brick >> master (at the time the tests were run) and stable/wallaby since >> the 4.3.0 tag is as follows: >> >> master: >> d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg change >> (Gorka Eguileor) >> 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to >> modifying offset" (Zuul) >> 28545c7 4 months ago RBD: catch read exceptions prior to modifying >> offset (Jon Bernard) >> 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" >> (Zuul) >> 7cfdb76 6 weeks ago Dropping explicit unicode literal >> (tushargite96) >> 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack Release >> Bot) >> ab57392 3 weeks ago Update master for stable/wallaby (OpenStack >> Release Bot) >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver >> connection information compatibility fix" (Zuul) >> >> stable/wallaby: >> f86944b 3 days ago Add release note prelude for os-brick 4.4.0 >> (Brian Rosmaita) >> c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg change >> (Gorka Eguileor) >> 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for stable/wallaby >> (OpenStack Release Bot) >> f3f93dc 3 weeks ago Update .gitreview for stable/wallaby >> (OpenStack Release Bot) >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver >> connection information compatibility fix" (Zuul) >> >> This gives us very high confidence that the results of the tests run >> against master also apply to stable/wallaby at f86944b. >> >> Thank you for considering this request. >> >> (I've included Nova here because the bug occurs when the >> configuration option that enables multipath connections on a >> compute is changed while volumes are attached, so if this RFE is >> approved, nova might want to raise the minimum version of os-brick >> in wallaby to 4.4.0.) >> > > Thanks for the heads up. After the new os-brick version is released I > will prepare a version bump patch in nova on master and > stable/wallaby. This also means that nova will release an RC2. I've proposed the nova patch on master to bump min os-brick to 4.3.1 in nova[1] [1] https://review.opendev.org/c/openstack/nova/+/783674 > > Cheers, > gibi > >> >> [0] https://launchpad.net/bugs/1921381 >> [1] https://review.opendev.org/c/openstack/os-brick/+/782992 >> [2] https://review.opendev.org/c/openstack/os-brick/+/783207 >> [3] https://review.opendev.org/c/openstack/releases/+/783641 >> [4] >> https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb >> [5] >> https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 >> [6] >> https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa > > > > From apps at mossakowski.ch Sun Mar 28 18:32:43 2021 From: apps at mossakowski.ch (apps at mossakowski.ch) Date: Sun, 28 Mar 2021 18:32:43 +0000 Subject: [octavia] victoria - loadbalancer works but its operational status is offline In-Reply-To: References: Message-ID: Yes, that was it: missing \[health\_manager\].heartbeat\_key in octavia.conf It is not present in openstack victoria octavia docs, I'll push it together with my installation guide for centos8. Thanks for your accurate hint Gregory. It is always crucial to ask the right guy:) Regards, Piotr Mossakowski Sent from ProtonMail mobile \-------- Original Message -------- On 22 Mar 2021, 09:10, Gregory Thiemonge < gthiemonge at redhat.com> wrote: > > > > Hi, > > Most of the OFFLINE operational status issues are caused by communication problems between the amphorae and the Octavia health-manager. 
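(The option this thread converges on lives in the [health_manager] section of octavia.conf on the controllers running the Octavia services. A minimal sketch, with an obviously fake placeholder value; generate your own random string and keep it identical on every node running the health manager and worker:

    [health_manager]
    # placeholder only, use a locally generated secret
    heartbeat_key = insecure_example_heartbeat_key

Amphorae only receive the key when they are built, so existing load balancers have to be failed over or recreated after the change, as noted below.)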
> > In your case, the "Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode'" log message shows that the health-manager receives the heartbeat packets from the amphorae but it is unable to decode them. Those packets are encrypted JSON messages and it seems that the key (\[health\_manager\].heartbeat\_key see [https://docs.openstack.org/octavia/latest/configuration/configref.html\#health-manager][https_docs.openstack.org_octavia_latest_configuration_configref.html_health-manager]) used to encrypt those messages is not defined in your configuration file. So I would suggest configuring it and restarting the Octavia services, then you can re-create or failover the load balancers (you cannot change this parameter in a running load balancer). > > > > > > Gregory > > > > > On Sun, Mar 21, 2021 at 6:17 PM <[apps at mossakowski.ch][apps_mossakowski.ch]> wrote: > > > > Hello, > > > > > > I have stable/victoria baremetal openstack with octavia installed on centos8 using openvswitch mechanism driver: octavia api on controller, health-manager,housekeeping,worker on 3 compute/network nodes. > > > > > > Official docs include only ubuntu with linuxbridge mechanism but I used https://github.com/prastamaha/openstack-octavia as a reference to get it working on centos8 with ovs. > > > > > > I will push those docs instructions for centos8 soon: https://github.com/openstack/octavia/tree/master/doc/source/install. > > > > > > I created basic http scenario using [https://docs.openstack.org/octavia/victoria/user/guides/basic-cookbook.html\#deploy-a-basic-http-load-balancer][https_docs.openstack.org_octavia_victoria_user_guides_basic-cookbook.html_deploy-a-basic-http-load-balancer]. > > > > > > Loadbalancer works but its operational status is offline (openstack\_loadbalancer\_outputs.txt). > > > > > > On all octavia workers I see the same warning message in health\_manager.log: > > > > > > Health Manager experienced an exception processing a heartbeat message from ('172.31.255.233', 1907). Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode' > > > > > > I've searched for related active bug but all I found is this not related in my opinion: [https://storyboard.openstack.org/\#!/story/2008615][https_storyboard.openstack.org_story_2008615] > > > > > > I'm attaching all info I've gathered: > > > > > > * octavia.conf and health\_manager debug logs (octavia\_config\_and\_health\_manager\_logs.txt) > > > > * tcpdump from amphora VM (tcpdump\_from\_amphora\_vm.txt) > > > > * tcpdump from octavia worker (tcpdump\_from\_octavia\_worker.txt) > > > > * debug amphora-agent.log from amphora VM (amphora-agent.log) > > > > > > Can you point me to the right direction what I have missed? > > > > > > Thanks! > > > > > > Piotr Mossakowski > > > > > > https://github.com/moss2k13 > > > > > > > > > > > > > > > > > > > > [https_docs.openstack.org_octavia_latest_configuration_configref.html_health-manager]: https://docs.openstack.org/octavia/latest/configuration/configref.html#health-manager [apps_mossakowski.ch]: mailto:apps at mossakowski.ch [https_docs.openstack.org_octavia_victoria_user_guides_basic-cookbook.html_deploy-a-basic-http-load-balancer]: https://docs.openstack.org/octavia/victoria/user/guides/basic-cookbook.html#deploy-a-basic-http-load-balancer [https_storyboard.openstack.org_story_2008615]: https://storyboard.openstack.org/#!/story/2008615 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: publickey - EmailAddress(s=apps at mossakowski.ch) - 0x9FDBE75C.asc Type: application/pgp-keys Size: 640 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 294 bytes Desc: OpenPGP digital signature URL: From gmann at ghanshyammann.com Mon Mar 29 17:32:19 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 29 Mar 2021 12:32:19 -0500 Subject: [tc] Technical Committee weekly meeting Message-ID: <1787f0b2273.b33e94cc1241611.4906469719374016487@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for April 1st at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, March 31st, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From zigo at debian.org Mon Mar 29 17:48:04 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 29 Mar 2021 19:48:04 +0200 Subject: [kolla] does someone uses AArch64 images? In-Reply-To: References: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> Message-ID: On 3/29/21 5:34 PM, Marcin Juszkiewicz wrote: > W dniu 23.03.2021 o 14:36, Thomas Goirand pisze: >> On 3/23/21 12:43 PM, Marcin Juszkiewicz wrote: > >>> AArch64 support in Kolla(-Ansible) project is present since Ocata >>> (iirc). For most of that time we supported all three distributions: >>> CentOS, Debian and Ubuntu. >>> >>> # Distribution coverage >>> >>> Debian is source only as there never was any source of up-to-date >>> OpenStack packages (Debian OpenStack team builds only x86-64 packages >>> and they do it after release). >>> >>> # My interest >>> >>> I take care of Debian/source ones as we use them at Linaro in our >>> developer cloud setups. It often means backporting of fixes when we >>> upgrade from one previous release to something newer. >> >> Getting binary support is just a mater of rebuilding packages for arm64. >> I once did that for you setting-up a Jenkins machine just for >> rebuilding. It's really a shame that you guys aren't following that >> road. You'd have all of my support if you did. I personally don't have >> access to the necessary hardware for it, and wont use the arm64 repos... > > The thing is - no one asked for Debian/binary images so far so I did not > bothered you. > > Your team provides packages for already released OpenStack stuff. From > Kolla point of view it means that at beginning of a new development > cycle we would need to get images for previous release with Debian > binary packages built, tested and backported. I used to package every beta (b1, b2, b3). But then, nobody was consuming them, so I stopped. And the harsh reality right now, is that most projects stopped producing these bX releases. Today, I'm already done with the packaging of all RC1 releases, and I'm already beginning to test installing them. This is 2 days after the RCs. No other distribution beats me on this timing. So I'm not sure what more I could do... If you wish to participate and provide intermediary releases using Git, that's possible, but that means someone needs to participate and do it. I can't be full time doing packaging like I used to when I was employed to do only that (these days, I maintain my own installer and 8 clusters in production...). 
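(To make "just a matter of rebuilding for arm64" concrete, here is a rough sketch of a native rebuild on an arm64 builder with the Debian OpenStack repositories and deb-src entries enabled; the package name is only an example, not a statement about which packages actually need it:

    apt-get build-dep python-os-brick    # example source package, pick the real ones as needed
    apt-get source python-os-brick
    cd python-os-brick-*/
    dpkg-buildpackage -us -uc -b         # unsigned, binary-only build

Architecture-independent packages, which covers most of the pure-Python deliverables, can be reused as-is; only the arch-specific ones need a rebuild pass on arm64.)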
Radosław Piliszek often ask me about the status of Debian packages, and he manages to maintain compat in OSA using them. Why can't you do it? Again, I very much would love to collaborate and help you doing more with the Debian binary stuff for ARM. Cheers, Thomas Goirand (zigo) From radoslaw.piliszek at gmail.com Mon Mar 29 19:41:55 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 29 Mar 2021 21:41:55 +0200 Subject: [kolla] does someone uses AArch64 images? In-Reply-To: References: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> Message-ID: On Mon, Mar 29, 2021 at 7:49 PM Thomas Goirand wrote: > Radosław Piliszek often ask me about the status of Debian packages, and > he manages to maintain compat in OSA using them. Why can't you do it? Just for the record, I do it for Kolla too. :-) Just x86_64, not aarch64. -yoctozepto From peter.matulis at canonical.com Mon Mar 29 20:34:16 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Mon, 29 Mar 2021 16:34:16 -0400 Subject: [docs] Project guides in PDF format In-Reply-To: References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> Message-ID: ~ Update ~ I've managed to build a PDF locally using a tox target: $ tox -e deploy-guide-pdf deploy-guide/build/pdf/DeploymentGuide.pdf For my local web server, the resulting HTML has an icon with this download link: http:////doc-charm-deployment-guide.pdf Which doesn't work. How do I fix that? And especially, how do I get Zuul to do it? My PR is here: https://review.opendev.org/c/openstack/charm-deployment-guide/+/782581 On Thu, Mar 4, 2021 at 2:13 PM Michael Johnson wrote: > Peter, > > Feel free to message me on IRC (johnsom) if you run into questions > about enabling the PDF docs for your projects. I did the work for > Octavia so might have some answers. > > Michael > > On Thu, Mar 4, 2021 at 12:36 AM Bogdan Dobrelya > wrote: > > > > On 3/3/21 9:30 PM, Jeremy Stanley wrote: > > > On 2021-03-03 15:05:10 -0500 (-0500), Peter Matulis wrote: > > > [...] > > >> How do I get a download PDF link like what is available in the > > >> published pages of the Nova project? Where is that documented? > > > > On each project's documentation page, there is a "Download PDF" button, > > at the top, near to the "Report a Bug". > > > > >> > > >> In short, yes, I am interested in having downloadable PDFs for the > > >> projects that I maintain: > > >> > > >> https://opendev.org/openstack/charm-guide > > >> https://opendev.org/openstack/charm-deployment-guide > > > > > > The official goal document is still available here: > > > > > > > https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html > > > > > > Some technical detail can also be found in the earlier docs spec: > > > > > > > https://specs.openstack.org/openstack/docs-specs/specs/ocata/build-pdf-from-rst-guides.html > > > > > > A bit of spelunking in Git history turns up, for example, this > > > change implementing PDF generation for openstack/ironic (you can > > > find plenty more if you hunt): > > > > > > https://review.opendev.org/680585 > > > > > > I expect you would just do something similar to that. If memory > > > serves (it's been a couple years now), each project hit slightly > > > different challenges as no two bodies of documentation are every > > > quite the same. You'll likely have to dig deep occasionally in > > > Sphinx and LaTeX examples to iron things out. 
> > > > > > One thing which would have been nice as an output of that cycle goal > > > was if the PTI section for documentation was updated with related > > > technical guidance on building PDFs, but it's rather lacking in that > > > department still: > > > > > > > https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation > > > > > > If you can come up with a succinct summary for what's needed, I > > > expect adding it there would be really useful to others too. > > > > > > > > > -- > > Best regards, > > Bogdan Dobrelya, > > Irc #bogdando > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Mar 29 21:54:16 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 29 Mar 2021 21:54:16 +0000 Subject: [docs] Project guides in PDF format In-Reply-To: References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> Message-ID: <20210329215416.ztokyw5iirbu3jhi@yuggoth.org> On 2021-03-29 16:34:16 -0400 (-0400), Peter Matulis wrote: [...] > For my local web server, the resulting HTML has an icon with this > download link: http:////doc-charm-deployment-guide.pdf > > Which doesn't work. > > How do I fix that? And especially, how do I get Zuul to do it? [...] Looking at the console for the docs build on your change, it seems to be skipping PDF building in the run phase because of this check: Your change uses deploy-guide-pdf as its tox testenv instead of the default "pdf-docs" value set by the build-pdf-docs role. You could override it in a vars block within a custom job, but probably easier to adjust your tox.ini to what the standard build-openstack-deploy-guide job expects. Alternatively, you might adjust the job definition here, maybe it was an oversight (I don't immediately spot any examples of projects publishing PDFs with that job specifically, though my search was far from complete): -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gagehugo at gmail.com Mon Mar 29 21:54:36 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 29 Mar 2021 16:54:36 -0500 Subject: [openstack-helm] Meeting cancelled March 31st Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting tomorrow, March 31st, the meeting is cancelled. Our next IRC meeting will be April 6th. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.matulis at canonical.com Mon Mar 29 23:47:24 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Mon, 29 Mar 2021 19:47:24 -0400 Subject: [docs] Project guides in PDF format In-Reply-To: <20210329215416.ztokyw5iirbu3jhi@yuggoth.org> References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> <20210329215416.ztokyw5iirbu3jhi@yuggoth.org> Message-ID: I changed the testenv to 'pdf-docs' and the build is still being skipped. Do I need to submit a PR to have this [1] set to 'false'? [1]: https://opendev.org/openstack/openstack-zuul-jobs/src/commit/01746b6df094c25f0cd67690b44adca0fb4ee1fd/zuul.d/jobs.yaml#L970 On Mon, Mar 29, 2021 at 5:56 PM Jeremy Stanley wrote: > On 2021-03-29 16:34:16 -0400 (-0400), Peter Matulis wrote: > [...] 
> > For my local web server, the resulting HTML has an icon with this > > download link: http:////doc-charm-deployment-guide.pdf > > > > Which doesn't work. > > > > How do I fix that? And especially, how do I get Zuul to do it? > [...] > > Looking at the console for the docs build on your change, it seems > to be skipping PDF building in the run phase because of this check: > > https://opendev.org/openstack/openstack-zuul-jobs/src/commit/01746b6df094c25f0cd67690b44adca0fb4ee1fd/roles/build-pdf-docs/tasks/main.yaml#L3 > > > > Your change uses deploy-guide-pdf as its tox testenv instead of the > default "pdf-docs" value set by the build-pdf-docs role. You could > override it in a vars block within a custom job, but probably easier > to adjust your tox.ini to what the standard > build-openstack-deploy-guide job expects. Alternatively, you might > adjust the job definition here, maybe it was an oversight (I don't > immediately spot any examples of projects publishing PDFs with that > job specifically, though my search was far from complete): > > https://opendev.org/openstack/openstack-zuul-jobs/src/commit/01746b6df094c25f0cd67690b44adca0fb4ee1fd/zuul.d/jobs.yaml#L962-L970 > > > > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Mar 30 00:07:05 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Mar 2021 00:07:05 +0000 Subject: [docs] Project guides in PDF format In-Reply-To: References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> <20210329215416.ztokyw5iirbu3jhi@yuggoth.org> Message-ID: <20210330000704.bsuukwkon2vnint3@yuggoth.org> On 2021-03-29 19:47:24 -0400 (-0400), Peter Matulis wrote: > I changed the testenv to 'pdf-docs' and the build is still being skipped. > > Do I need to submit a PR to have this [1] set to 'false'? > > [1]: > https://opendev.org/openstack/openstack-zuul-jobs/src/commit/01746b6df094c25f0cd67690b44adca0fb4ee1fd/zuul.d/jobs.yaml#L970 [...] Oh, yep that'll need to be adjusted or overridden as well. I see that https://review.opendev.org/678077 explicitly chose not to do PDF builds for deploy guides for the original PDF docs implementation a couple of years ago. Unfortunately the commit message doesn't say why, but maybe this is a good opportunity to start. However, many (most?) API projects seem to include their deployment guides in their software Git repos, so switching this on for everyone might break their deploy guide builds. If we combine it with an expectation for a deploy-guide-specific PDF building tox testenv like you had previously, then it would get safely skipped by any projects without that testenv defined. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From johnsomor at gmail.com Tue Mar 30 01:20:22 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 29 Mar 2021 18:20:22 -0700 Subject: [designate][ptg] Designate will have a PTG time for Xena Message-ID: Hi Designate community! I have reserved a timeslot to discuss Designate at the PTG. As discussed in the IRC channel, we will meet during the same time slot as we did for the last PTG. 
April 22th, 13:00-15:00 UTC I have created an etherpad to start collecting topics: https://etherpad.opendev.org/p/xena-ptg-designate I hope to see you there, Michael From openinfradn at gmail.com Tue Mar 30 06:48:18 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 30 Mar 2021 12:18:18 +0530 Subject: Use of BladeCenter as a compute node(s) Message-ID: Hi, Done anyone deployed OpenStack compute nodes on BladeCenter? Do we need to attach each blade as a compute node? If we do so, aren't we restrict the maximum capacity of the VM to capacity of each blade? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Mar 30 07:03:08 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 30 Mar 2021 09:03:08 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> <20210329142631.ojzui3gwcysdwgmt@p1.localdomain> <1787e785a14.caa7688d1232290.6209945891760877537@ghanshyammann.com> <20210329151415.a7u7alxqkypjyiau@p1.localdomain> Message-ID: <20210330070308.eltpfavg44rawo7e@p1.localdomain> Hi, As Takashi told me that he will not have time to work on this anytime soon and I don't get any info from Sam, I just proposed patches to deprecate networking-midonet project [1]. I know that it's very late in the cycle but I think that this will be the best to do based on current circumstances and discussion in this thread. [1] https://review.opendev.org/q/topic:%22deprecate-networking-midonet%22+(status:open%20OR%20status:merged) On Mon, Mar 29, 2021 at 05:23:25PM +0200, Herve Beraud wrote: > Excellent! Thank you Slavek. > > Le lun. 29 mars 2021 à 17:15, Slawek Kaplonski a > écrit : > > > Hi, > > > > On Mon, Mar 29, 2021 at 05:07:15PM +0200, Herve Beraud wrote: > > > If we decide to follow the depreciation process we can wait until > > tomorrow > > > or Wednesday early in the morning. > > > > I also pinged Sam Morrison and YAMAMOTO Takashi about that today. If I > > will not > > have any reply from them until tomorrow morning CEST time, I will propose > > patches to deprecate it in this cycle. > > > > > > > > Le lun. 29 mars 2021 à 16:52, Ghanshyam Mann a > > > écrit : > > > > > > > ---- On Mon, 29 Mar 2021 09:26:31 -0500 Slawek Kaplonski < > > > > skaplons at redhat.com> wrote ---- > > > > > Hi, > > > > > > > > > > On Mon, Mar 29, 2021 at 08:53:58AM -0500, Ghanshyam Mann wrote: > > > > > > ---- On Mon, 29 Mar 2021 06:14:06 -0500 Akihiro Motoki < > > > > amotoki at gmail.com> wrote ---- > > > > > > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski < > > > > skaplons at redhat.com> wrote: > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud wrote: > > > > > > > > > Hello, > > > > > > > > > > > > > > > > > > The main question is, does the previous Victoria version > > [1] > > > > will be > > > > > > > > > compatible with the latest neutron changes and with the > > latest > > > > engine > > > > > > > > > facade introduced during Wallaby? > > > > > > > > > > > > > > > > It won't be compatible. Networking-midonet from Victoria will > > > > not work properly > > > > > > > > with Neutron Wallaby. 
> > > > > > > > > > > > > > > > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so we > > > > shouldn't release > > > > > > > > > a new version of networking-midonet, because the project > > code > > > > won't be > > > > > > > > > compatible with the rest of our projects (AFAIK neutron), > > > > unless, the > > > > > > > > > previous version will not compatible either, and, unless, > > not > > > > releasing a > > > > > > > > > Wallaby version leave the project branch uncut and so > > leave the > > > > > > > > > corresponding series unmaintainable, and so unfixable a > > > > posteriori. > > > > > > > > > > > > > > > > > > If we do not release a new version then we will use a > > previous > > > > version of > > > > > > > > > networking-midonet. This version will be the last Victoria > > > > version [1]. > > > > > > > > > > > > > > > > > > I suppose that this version (the victoria version) isn't > > > > compatible with > > > > > > > > > the new facade engine either, isn't it? > > > > > > > > > > > > > > > > Correct. It's not compatible. > > > > > > > > > > > > > > > > > > > > > > > > > > So release or not release a new version won't solve the > > facade > > > > engine > > > > > > > > > problem, isn't? > > > > > > > > > > > > > > > > Yes. > > > > > > > > > > > > > > > > > > > > > > > > > > You said that neutron evolved and networking-midonet > > didn't, > > > > hence even if > > > > > > > > > we release networking-midonet in the current state it will > > > > fail too, isn't > > > > > > > > > it? > > > > > > > > > > > > > > > > Also yes :) > > > > > > > > > > > > > > > > > > > > > > > > > > However, releasing a new version and branching on it can > > give > > > > you the > > > > > > > > > needed maintenance window to allow you to fix the issue > > later, > > > > when your > > > > > > > > > gates will be fixed and then patches backported. git tags > > are > > > > cheap. > > > > > > > > > > > > > > > > > > We should notice that since Victoria some patches have been > > > > merged in > > > > > > > > > Wallaby so even if they aren't ground breaking changes they > > > > are changes > > > > > > > > > that it is worth to release. > > > > > > > > > > > > > > > > > > From a release point of view I think it's worth it to > > release > > > > a new version > > > > > > > > > and to cut Wallaby. We are close to the it's deadline. That > > > > will land the > > > > > > > > > available delta between Victoria and Wallaby. That will > > allow > > > > to fix the > > > > > > > > > engine facade by opening a maintenance window. If the > > project > > > > is still > > > > > > > > > lacking maintainers in a few weeks / months, this will > > allow a > > > > more smooth > > > > > > > > > deprecation of this one. > > > > > > > > > > > > > > > > > > Thoughts? > > > > > > > > > > > > > > > > Based on Your feedback I agree that we should release now > > what > > > > we have. Even if > > > > > > > > it's broken we can then fix it and backport fixes to > > > > stable/wallaby branch. > > > > > > > > > > > > > > > > @Akihiro: are You ok with that too? > > > > > > > > > > > > > > I was writing another reply and did not notice this mail. > > > > > > > While I still have a doubt on releasing the broken code (which > > we > > > > are > > > > > > > not sure can be fixed soon or not), > > > > > > > I am okay with either decision. > > > > > > > > > > > > Yeah, releasing broken code and especially where we do not if > > there > > > > will be > > > > > > maintainer to fix it or not seems risky for me too. 
> > > > > > > > > > > > One option is to deprecate it for wallaby which means follow the > > > > deprecation steps > > > > > > mentioned in project-team-guide[1]. If maintainers show up then it > > > > can be un-deprecated. > > > > > > With that, we will not have any compatible wallaby version which I > > > > think is a better > > > > > > choice than releasing the broken code. > > > > > > > > > > We were asking about that some time ago already and then some new > > > > maintainers > > > > > stepped in. But as now there is that problem again with > > > > networking-midonet I'm > > > > > fine to deprecate it (or ask about it again at least). > > > > > But isn't it too late in the cycle now? Last time when we were doing > > > > that it was > > > > > before milestone-2 IIRC. Now we are almost at the end of the cycle. > > > > Should we do > > > > > it still now? > > > > > > > > As there is nothing released for Wallaby[1], we can still do this. As > > per > > > > the process, TC > > > > can merge the required patches on that repo if no core is available to > > +A. > > > > > > > > > If yes, how much time do we really have to e.g. ask for some new > > > > maintainers? > > > > > > > > > > > > > I will say asap :) but I think the release team can set the deadline as > > > > they have to take care > > > > of release things. > > > > > > > > [1] > > > > > > https://opendev.org/openstack/releases/src/commit/30492c964f5d7eb85d806086c5b1c656b5c9e9f9/deliverables/wallaby/networking-midonet.yaml > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > Releasing the broken code now with the hope of someone will come > > up > > > > and fix it with > > > > > > backport makes me a little uncomfortable and if it did not get fix > > > > then we will live > > > > > > with broken release forever. > > > > > > > > > > > > > > > > > > [1] > > > > > > https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [1] > > > > > > > > > > > > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > > > > > > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski < > > > > skaplons at redhat.com> a > > > > > > > > > écrit : > > > > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > > > > > We have opened release patch for networking-midonet [1] > > but > > > > our concern > > > > > > > > > > about > > > > > > > > > > that project is that its gate is completly broken since > > some > > > > time thus we > > > > > > > > > > don't really know if the project is still working and > > valid > > > > to be released. > > > > > > > > > > In Wallaby cycle Neutron for example finished transition > > to > > > > the engine > > > > > > > > > > facade, > > > > > > > > > > and patch to adjust that in networking-midonet is still > > > > opened [2] (and > > > > > > > > > > red as > > > > > > > > > > there were some unrelated issues with most of the jobs > > > > there). > > > > > > > > > > > > > > > > > > > > In the past we had discussion about networking-midonet > > > > project and it's > > > > > > > > > > status > > > > > > > > > > as the official Neutron stadium project. Then some new > > folks > > > > stepped in to > > > > > > > > > > maintain it but now it seems a bit like (again) it lacks > > of > > > > maintainers. 
> > > > > > > > > > I know that it is very late in the cycle now so my > > question > > > > to the TC and > > > > > > > > > > release teams is: should we release stable/wallaby with > > its > > > > current state, > > > > > > > > > > even if it's broken or should we maybe don't release it > > at > > > > all until its > > > > > > > > > > gate > > > > > > > > > > will be up and running? > > > > > > > > > > > > > > > > > > > > [1] > > https://review.opendev.org/c/openstack/releases/+/781713 > > > > > > > > > > [2] > > > > https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > Slawek Kaplonski > > > > > > > > > > Principal Software Engineer > > > > > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > Hervé Beraud > > > > > > > > > Senior Software Engineer at Red Hat > > > > > > > > > irc: hberaud > > > > > > > > > https://github.com/4383/ > > > > > > > > > https://twitter.com/4383hberaud > > > > > > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > > > > > > > > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > > > > > > > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > > > > > > > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > > > > > > > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > > > > > > > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > > > > > > > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > > > > > > > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > > > > > > > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > > > > > > > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > > > > > > > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > > > > > > > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > > > > > > v6rDpkeNksZ9fFSyoY2o > > > > > > > > > =ECSj > > > > > > > > > -----END PGP SIGNATURE----- > > > > > > > > > > > > > > > > -- > > > > > > > > Slawek Kaplonski > > > > > > > > Principal Software Engineer > > > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > Slawek Kaplonski > > > > > Principal Software Engineer > > > > > Red Hat > > > > > > > > > > > > > > > > > > > -- > > > Hervé Beraud > > > Senior Software Engineer at Red Hat > > > irc: hberaud > > > https://github.com/4383/ > > > https://twitter.com/4383hberaud > > > -----BEGIN PGP SIGNATURE----- > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > v6rDpkeNksZ9fFSyoY2o > > > =ECSj > > > 
-----END PGP SIGNATURE----- > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mark at stackhpc.com Tue Mar 30 08:24:41 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 30 Mar 2021 09:24:41 +0100 Subject: [kolla-ansible][horizon] policy.yaml/json files In-Reply-To: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> References: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> Message-ID: On Mon, 29 Mar 2021 at 15:36, Adam Tomas wrote: > > Hi, Hi, Looks like we need some more/better docs on this in Kolla. > Im not quite clear about policy.yaml/json files in kolla-ansible. Let assume, that I need to allow one of project users to add other users to the project. So I create „project_admin” role and assign it to this user. Then I found /etc/kolla/keystone/policy.json.test file, which I use as template. There is rule „identity:create_credential” : „(role:admin and system_scope:all)” so I add „or role:project_admin” and put file in /etc/kolla/config/keystone/ and reconfigure kolla. And now few questions: > > 1. policy.json (or policy.yaml) always overwrite all default policies? I mean if I only add one rule to this file then other rules will „disappear” or will have default values? Is there any way to only overwrite some default rules and leave rest with defaults? Like with .conf files For a few releases now, OpenStack supports policy in code. This means that you only need to include the rules you want to override in your JSON/YAML file. > > 2. what about Horizon and visibility of options? In mentioned case putting the same policy.json file in /etc/kolla/config/keystone/ and /etc/kolla/config/horizon/ should „unblock” Add User button for user with project_admin role? Or how to achieve it? For keystone policy in horizon, you need to use: /etc/kolla/config/horizon/keystone_policy.json > > 3. does Horizon need the duplicated policy.json files from other services in it’s configuration folder or is it enough to write policy.json for services I want to change? Only the ones you want to change. > > 4. 
when I assign admin role to a user with projectID (openstack role add —project PROJECT_ID —user SOME_USER admin) this user sees in Horizon everything systemwide, not only inside this project… Which rules should be created to allow him to see only users and resources which belongs to this project? Currently admin is generally global in OpenStack. It's a known limitation, and currently being worked on. > > Best regards > Adam From mark at stackhpc.com Tue Mar 30 09:05:04 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 30 Mar 2021 10:05:04 +0100 Subject: [kolla-ansible][horizon] policy.yaml/json files In-Reply-To: References: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> Message-ID: On Tue, 30 Mar 2021 at 09:24, Mark Goddard wrote: > > On Mon, 29 Mar 2021 at 15:36, Adam Tomas wrote: > > > > Hi, > > Hi, Looks like we need some more/better docs on this in Kolla. Proposed some docs improvements: https://review.opendev.org/c/openstack/kolla-ansible/+/783809 > > > Im not quite clear about policy.yaml/json files in kolla-ansible. Let assume, that I need to allow one of project users to add other users to the project. So I create „project_admin” role and assign it to this user. Then I found /etc/kolla/keystone/policy.json.test file, which I use as template. There is rule „identity:create_credential” : „(role:admin and system_scope:all)” so I add „or role:project_admin” and put file in /etc/kolla/config/keystone/ and reconfigure kolla. And now few questions: > > > > 1. policy.json (or policy.yaml) always overwrite all default policies? I mean if I only add one rule to this file then other rules will „disappear” or will have default values? Is there any way to only overwrite some default rules and leave rest with defaults? Like with .conf files > > For a few releases now, OpenStack supports policy in code. This means > that you only need to include the rules you want to override in your > JSON/YAML file. > > > > > 2. what about Horizon and visibility of options? In mentioned case putting the same policy.json file in /etc/kolla/config/keystone/ and /etc/kolla/config/horizon/ should „unblock” Add User button for user with project_admin role? Or how to achieve it? > > For keystone policy in horizon, you need to use: > > /etc/kolla/config/horizon/keystone_policy.json > > > > > 3. does Horizon need the duplicated policy.json files from other services in it’s configuration folder or is it enough to write policy.json for services I want to change? > > Only the ones you want to change. > > > > > 4. when I assign admin role to a user with projectID (openstack role add —project PROJECT_ID —user SOME_USER admin) this user sees in Horizon everything systemwide, not only inside this project… Which rules should be created to allow him to see only users and resources which belongs to this project? > > Currently admin is generally global in OpenStack. It's a known > limitation, and currently being worked on. > > > > > Best regards > > Adam From eblock at nde.ag Tue Mar 30 09:16:32 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 30 Mar 2021 09:16:32 +0000 Subject: [neutron][db] Train: neutron-db-manage can't locate revision Message-ID: <20210330091632.Horde.II5iKA3aKPEIdV8f4sBpw7Q@webmail.nde.ag> Hi *, I have a new issue with our Train cloud (upgraded through the years). Currently I'm preparing the next upgrade step to Ussuri and stumbled upon this neutron db issue: ---snip--- controller01:~ # neutron-db-manage current Running current for neutron ... 
INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. ERROR [alembic.util.messaging] Can't locate revision identified by 'e4e236b0e1ff' FAILED: Can't locate revision identified by 'e4e236b0e1ff' controller01:~ # neutron-db-manage revision --contract Running revision for neutron ... Generating /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/contract/b5344a66e818_.py ... done OK controller01:~ # neutron-db-manage revision --expand Running revision for neutron ... Generating /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py ... done OK controller01:~ # neutron-db-manage heads Running heads for neutron ... 6c9eb0469914 (expand) (head) c43a0ddb6a03 (contract) (head) OK ---snip--- I've been searching how that revision got into the database but I can't find any hint where that e4e236b0e1ff belongs to. To test the upgrade process I have a VM that was initially setup with Ocata, I'm able to investigate a snapshot from before the Pike upgrade, and I can't find a clue either, except for the neutron database. Although our current setup works just fine I'm not sure if this will be a show stopper for the Ussuri upgrade, although it wasn't for the previous upgrades. Anyway, how would I resolve this? I see these versions in the neutron.alembic_version table: +--------------+ | version_num | +--------------+ | 5c85685d616d | | e4e236b0e1ff | +--------------+ And 5c85685d616d seems to be a valid version from Newton, it seems. Could anyone provide some guidance on how to fix this before the next uprade? If possible I would like to avoid making manual changes in the database myself. Thanks and best regards, Eugen From jlibosva at redhat.com Tue Mar 30 09:41:42 2021 From: jlibosva at redhat.com (Jakub Libosvar) Date: Tue, 30 Mar 2021 11:41:42 +0200 Subject: [neutron][db] Train: neutron-db-manage can't locate revision In-Reply-To: <20210330091632.Horde.II5iKA3aKPEIdV8f4sBpw7Q@webmail.nde.ag> References: <20210330091632.Horde.II5iKA3aKPEIdV8f4sBpw7Q@webmail.nde.ag> Message-ID: <856542b7-19e3-2834-effa-fdf649de741d@redhat.com> On 30/03/2021 11:16, Eugen Block wrote: > Hi *, > > I have a new issue with our Train cloud (upgraded through the years). > Currently I'm preparing the next upgrade step to Ussuri and stumbled > upon this neutron db issue: > > > ---snip--- > controller01:~ # neutron-db-manage current > Running current for neutron ... > INFO [alembic.runtime.migration] Context impl MySQLImpl. > INFO [alembic.runtime.migration] Will assume non-transactional DDL. > ERROR [alembic.util.messaging] Can't locate revision identified by > 'e4e236b0e1ff' > FAILED: Can't locate revision identified by 'e4e236b0e1ff' > > > controller01:~ # neutron-db-manage revision --contract > Running revision for neutron ... > Generating > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/contract/b5344a66e818_.py ... > done > OK > > > controller01:~ # neutron-db-manage revision --expand > Running revision for neutron ... > Generating > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py ... > done > OK > > > controller01:~ # neutron-db-manage heads > Running heads for neutron ... 
> 6c9eb0469914 (expand) (head) > c43a0ddb6a03 (contract) (head) > OK > ---snip--- > > > I've been searching how that revision got into the database but I > can't find any hint where that e4e236b0e1ff belongs to. To test the > upgrade process I have a VM that was initially setup with Ocata, I'm > able to investigate a snapshot from before the Pike upgrade, and I > can't find a clue either, except for the neutron database. > Although our current setup works just fine I'm not sure if this will > be a show stopper for the Ussuri upgrade, although it wasn't for the > previous upgrades. > > Anyway, how would I resolve this? I see these versions in the > neutron.alembic_version table: > > +--------------+ > | version_num | > +--------------+ > | 5c85685d616d | > | e4e236b0e1ff | > +--------------+ > > And 5c85685d616d seems to be a valid version from Newton, it seems. > Could anyone provide some guidance on how to fix this before the next > uprade? If possible I would like to avoid making manual changes in the > database myself. > > Thanks and best regards, > Eugen > > Hi Eugen, it looks like you've run the db update with Ussuri code already. e4e236b0e1ff was added in that cycle and 5c85685d616d is the contract head since Newton (seems like there has not been a change since). I think what you need is to update your alembic scripts to Ussuri and run the "update heads" command again. This should bring the DB schema to Ussuri version because it knows the revisions. I hope it helps. Best regards, Jakub From ssbarnea at redhat.com Tue Mar 30 09:42:25 2021 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Tue, 30 Mar 2021 05:42:25 -0400 Subject: [oslo] fix flake8-hacking inconsistences on xena/wallaby In-Reply-To: References: <20210324160917.nzzfcuanytmrbujh@yuggoth.org> Message-ID: This is all is needed to make hacking being directly consumable by pre-commit (tool, not git hook): https://review.opendev.org/c/openstack/hacking/+/783820 I already did some tests and it worked fine, that is how I made bashate and doc8 long time ago: https://github.com/openstack/bashate/blob/master/.pre-commit-hooks.yaml https://github.com/PyCQA/doc8/blob/master/.pre-commit-hooks.yaml Once that is merged and released, you could replace the normal flake8 usage inside your repo .pre-commit-config.yaml file: - repo: https://review.opendev.org/openstack/hacking rev: 4.1.0 # or whatever release number we will pick hooks: - id: hacking If you want to add your own extra plugins you can still do it using the additional_dependencies key. -- /zbr On 24 Mar 2021 at 16:34:09, Herve Beraud wrote: > > > Le mer. 24 mars 2021 à 17:13, Jeremy Stanley a écrit : > >> On 2021-03-24 05:44:59 -0700 (-0700), Sorin Sbarnea wrote: >> [...] >> > If we still have a strong case for using hacking, I think it >> > should be converted to be usable as a hook itself, one that calls >> > flake8. >> [...] >> >> Can you expand on how you would expect that to work? Quoting from >> hacking's README: >> >> "hacking is a set of flake8 plugins that test and enforce the >> OpenStack StyleGuide" >> >> How would you turn a set of flake8 plugins into a pre-commit hook >> which calls flake8? That seems rather circular. >> > > Excellent point! 
> > -- >> Jeremy Stanley >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Mar 30 09:48:46 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 30 Mar 2021 11:48:46 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <20210330070308.eltpfavg44rawo7e@p1.localdomain> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> <20210329142631.ojzui3gwcysdwgmt@p1.localdomain> <1787e785a14.caa7688d1232290.6209945891760877537@ghanshyammann.com> <20210329151415.a7u7alxqkypjyiau@p1.localdomain> <20210330070308.eltpfavg44rawo7e@p1.localdomain> Message-ID: Le mar. 30 mars 2021 à 09:04, Slawek Kaplonski a écrit : > Hi, > > As Takashi told me that he will not have time to work on this anytime soon > and I > don't get any info from Sam, I just proposed patches to deprecate > networking-midonet project [1]. > I know that it's very late in the cycle but I think that this will be the > best > to do based on current circumstances and discussion in this thread. > > [1] > https://review.opendev.org/q/topic:%22deprecate-networking-midonet%22+(status:open%20OR%20status:merged) > Accordingly to our choice, I updated the release patch. https://review.opendev.org/c/openstack/releases/+/781713 Thanks Slawek. -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bkslash at poczta.onet.pl Tue Mar 30 09:52:07 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Tue, 30 Mar 2021 11:52:07 +0200 Subject: [kolla-ansible][horizon] policy.yaml/json files In-Reply-To: References: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> Message-ID: Hi, thank you for the answers, but I still have more questions :) Without any custom policies when I look inside the horizon container I see (in /etc/openstack-dashboard) current/default policies. If I override (for example keystone_policy.json) with a file placed in /etc/kolla/config/horizon which contains only 3 rules, then after kolla-ansible reconfigure inside horizon container there is of course keystone_police.json file, but only with my 3 rules - should I assume, that previously seen default rules (other than the ones overridden by my rules) still works, whether I see them in the file or not? And another question - I need a rule, that will allow some „special” user (which I call project_admin) to see,create, update and delete users inside a project (but not elsewhere). How should the policy look like? „project_admin_required”: „role:project_admin and default_project_id:%(target.project_id)s" „identity:list_user”: „rule: admin_required or project_admin_required” „identity:create_user”: „rule: admin_required or project_admin_required” „identity:update_user”: „rule: admin_required or project_admin_required” „identity:delete_user”: „rule: admin_required or project_admin_required” ? Best regards, Adam > Wiadomość napisana przez Mark Goddard w dniu 30.03.2021, o godz. 11:05: > > On Tue, 30 Mar 2021 at 09:24, Mark Goddard wrote: >> >> On Mon, 29 Mar 2021 at 15:36, Adam Tomas wrote: >>> >>> Hi, >> >> Hi, Looks like we need some more/better docs on this in Kolla. > > Proposed some docs improvements: > https://review.opendev.org/c/openstack/kolla-ansible/+/783809 > >> >>> Im not quite clear about policy.yaml/json files in kolla-ansible. Let assume, that I need to allow one of project users to add other users to the project. So I create „project_admin” role and assign it to this user. Then I found /etc/kolla/keystone/policy.json.test file, which I use as template. There is rule „identity:create_credential” : „(role:admin and system_scope:all)” so I add „or role:project_admin” and put file in /etc/kolla/config/keystone/ and reconfigure kolla. And now few questions: >>> >>> 1. policy.json (or policy.yaml) always overwrite all default policies? I mean if I only add one rule to this file then other rules will „disappear” or will have default values? Is there any way to only overwrite some default rules and leave rest with defaults? Like with .conf files >> >> For a few releases now, OpenStack supports policy in code. This means >> that you only need to include the rules you want to override in your >> JSON/YAML file. >> >>> >>> 2. what about Horizon and visibility of options? In mentioned case putting the same policy.json file in /etc/kolla/config/keystone/ and /etc/kolla/config/horizon/ should „unblock” Add User button for user with project_admin role? Or how to achieve it? >> >> For keystone policy in horizon, you need to use: >> >> /etc/kolla/config/horizon/keystone_policy.json >> >>> >>> 3. does Horizon need the duplicated policy.json files from other services in it’s configuration folder or is it enough to write policy.json for services I want to change? >> >> Only the ones you want to change. >> >>> >>> 4. 
when I assign admin role to a user with projectID (openstack role add —project PROJECT_ID —user SOME_USER admin) this user sees in Horizon everything systemwide, not only inside this project… Which rules should be created to allow him to see only users and resources which belongs to this project? >> >> Currently admin is generally global in OpenStack. It's a known >> limitation, and currently being worked on. >> >>> >>> Best regards >>> Adam From neil at tigera.io Tue Mar 30 10:19:34 2021 From: neil at tigera.io (Neil Jerram) Date: Tue, 30 Mar 2021 10:19:34 +0000 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <20210329142833.rilgaemupytlljce@p1.localdomain> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <20210329142833.rilgaemupytlljce@p1.localdomain> Message-ID: On Mon, Mar 29, 2021 at 3:29 PM Slawek Kaplonski wrote: > Hi, > > On Mon, Mar 29, 2021 at 11:31:38AM +0000, Neil Jerram wrote: > > Out of interest - for networking-calico - what changes are needed to > adapt > > to the new engine facade? > > Basically in most cases it is simply do changes like e.g. are done in [1] > to use > engine facade api to make db transactions. > Then You should run Your tests, see what will be broken and fix it :) > > [1] > https://review.opendev.org/c/openstack/networking-midonet/+/770797/3/midonet/neutron/db/gateway_device.py Thank you very much Slawek ! Neil > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Mar 30 10:49:46 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 30 Mar 2021 10:49:46 +0000 Subject: [neutron][db] Train: neutron-db-manage can't locate revision In-Reply-To: <856542b7-19e3-2834-effa-fdf649de741d@redhat.com> References: <20210330091632.Horde.II5iKA3aKPEIdV8f4sBpw7Q@webmail.nde.ag> <856542b7-19e3-2834-effa-fdf649de741d@redhat.com> Message-ID: <20210330104946.Horde.fhRXZNsnfOkBL5jVBwyBa30@webmail.nde.ag> Thank you, Jakub. > it looks like you've run the db update with Ussuri code already. That can't be true, my virtual controller runs on Ussuri and it reports these: ---snip--- controller-vm:~ # neutron-db-manage current Running current for neutron ... INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. 5c85685d616d (head) d8bdf05313f4 (head) OK ---snip--- There's also no sign of mentioned missing revision, I grepped in /usr/lib/... without finding anything related to that. > I think what you need is to update your alembic scripts to Ussuri and > run the "update heads" command again. This should bring the DB schema to > Ussuri version because it knows the revisions. Since I'm planning to upgrade to Ussuri anyway that's probably what will resolve this, I assume. But as I said, I'm confused why I don't find anything related to e4e236b0e1ff. Regards, Eugen Zitat von Jakub Libosvar : > On 30/03/2021 11:16, Eugen Block wrote: >> Hi *, >> >> I have a new issue with our Train cloud (upgraded through the years). >> Currently I'm preparing the next upgrade step to Ussuri and stumbled >> upon this neutron db issue: >> >> >> ---snip--- >> controller01:~ # neutron-db-manage current >> Running current for neutron ... >> INFO [alembic.runtime.migration] Context impl MySQLImpl. >> INFO [alembic.runtime.migration] Will assume non-transactional DDL. 
>> ERROR [alembic.util.messaging] Can't locate revision identified by >> 'e4e236b0e1ff' >> FAILED: Can't locate revision identified by 'e4e236b0e1ff' >> >> >> controller01:~ # neutron-db-manage revision --contract >> Running revision for neutron ... >> Generating >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/contract/b5344a66e818_.py >> ... >> done >> OK >> >> >> controller01:~ # neutron-db-manage revision --expand >> Running revision for neutron ... >> Generating >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> ... >> done >> OK >> >> >> controller01:~ # neutron-db-manage heads >> Running heads for neutron ... >> 6c9eb0469914 (expand) (head) >> c43a0ddb6a03 (contract) (head) >> OK >> ---snip--- >> >> >> I've been searching how that revision got into the database but I >> can't find any hint where that e4e236b0e1ff belongs to. To test the >> upgrade process I have a VM that was initially setup with Ocata, I'm >> able to investigate a snapshot from before the Pike upgrade, and I >> can't find a clue either, except for the neutron database. >> Although our current setup works just fine I'm not sure if this will >> be a show stopper for the Ussuri upgrade, although it wasn't for the >> previous upgrades. >> >> Anyway, how would I resolve this? I see these versions in the >> neutron.alembic_version table: >> >> +--------------+ >> | version_num | >> +--------------+ >> | 5c85685d616d | >> | e4e236b0e1ff | >> +--------------+ >> >> And 5c85685d616d seems to be a valid version from Newton, it seems. >> Could anyone provide some guidance on how to fix this before the next >> uprade? If possible I would like to avoid making manual changes in the >> database myself. >> >> Thanks and best regards, >> Eugen >> >> > > Hi Eugen, > > it looks like you've run the db update with Ussuri code already. > > e4e236b0e1ff was added in that cycle and 5c85685d616d is the contract > head since Newton (seems like there has not been a change since). > > I think what you need is to update your alembic scripts to Ussuri and > run the "update heads" command again. This should bring the DB schema to > Ussuri version because it knows the revisions. > > I hope it helps. > > Best regards, > Jakub From mark at stackhpc.com Tue Mar 30 10:51:20 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 30 Mar 2021 11:51:20 +0100 Subject: [kolla-ansible][horizon][keystone] policy.yaml/json files In-Reply-To: References: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> Message-ID: On Tue, 30 Mar 2021 at 10:52, Adam Tomas wrote: > > Hi, > thank you for the answers, but I still have more questions :) > > Without any custom policies when I look inside the horizon container I see (in /etc/openstack-dashboard) current/default policies. If I override (for example keystone_policy.json) with a file placed in /etc/kolla/config/horizon which contains only 3 rules, then after kolla-ansible reconfigure inside horizon container there is of course keystone_police.json file, but only with my 3 rules - should I assume, that previously seen default rules (other than the ones overridden by my rules) still works, whether I see them in the file or not? I'd assume Horizon works in the same way as other services, and you only need to include changes. Please test and report back. 
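To make the "only include the changes" point concrete, a minimal override containing just the changed rule could be dropped into /etc/kolla/config/horizon/keystone_policy.json before reconfiguring. This is only a sketch: project_admin is the custom role from this thread, while identity:create_user and admin_required are standard keystone policy names.

{
    "identity:create_user": "rule:admin_required or role:project_admin"
}

Every rule not listed in the file keeps its in-code default. Note that this only affects what Horizon permits in the UI; the same override also needs to be applied on the keystone side (e.g. /etc/kolla/config/keystone/policy.json) for the API itself to accept the request.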
> > And another question - I need a rule, that will allow some „special” user (which I call project_admin) to see,create, update and delete users inside a project (but not elsewhere). How should the policy look like? > > „project_admin_required”: „role:project_admin and default_project_id:%(target.project_id)s" > „identity:list_user”: „rule: admin_required or project_admin_required” > „identity:create_user”: „rule: admin_required or project_admin_required” > „identity:update_user”: „rule: admin_required or project_admin_required” > „identity:delete_user”: „rule: admin_required or project_admin_required” > As I mentioned before, admin is global in OpenStack for now. There may be various ways to achieve what you want. One is to introduce a role, and use it in the rules. It's a bit of a can of worms though, since there are many API endpoints which might need to be updated to catch all corner cases. I added keystone to the subject, in case anyone from that team wants to comment. > ? > Best regards, > Adam > > > Wiadomość napisana przez Mark Goddard w dniu 30.03.2021, o godz. 11:05: > > > > On Tue, 30 Mar 2021 at 09:24, Mark Goddard wrote: > >> > >> On Mon, 29 Mar 2021 at 15:36, Adam Tomas wrote: > >>> > >>> Hi, > >> > >> Hi, Looks like we need some more/better docs on this in Kolla. > > > > Proposed some docs improvements: > > https://review.opendev.org/c/openstack/kolla-ansible/+/783809 > > > >> > >>> Im not quite clear about policy.yaml/json files in kolla-ansible. Let assume, that I need to allow one of project users to add other users to the project. So I create „project_admin” role and assign it to this user. Then I found /etc/kolla/keystone/policy.json.test file, which I use as template. There is rule „identity:create_credential” : „(role:admin and system_scope:all)” so I add „or role:project_admin” and put file in /etc/kolla/config/keystone/ and reconfigure kolla. And now few questions: > >>> > >>> 1. policy.json (or policy.yaml) always overwrite all default policies? I mean if I only add one rule to this file then other rules will „disappear” or will have default values? Is there any way to only overwrite some default rules and leave rest with defaults? Like with .conf files > >> > >> For a few releases now, OpenStack supports policy in code. This means > >> that you only need to include the rules you want to override in your > >> JSON/YAML file. > >> > >>> > >>> 2. what about Horizon and visibility of options? In mentioned case putting the same policy.json file in /etc/kolla/config/keystone/ and /etc/kolla/config/horizon/ should „unblock” Add User button for user with project_admin role? Or how to achieve it? > >> > >> For keystone policy in horizon, you need to use: > >> > >> /etc/kolla/config/horizon/keystone_policy.json > >> > >>> > >>> 3. does Horizon need the duplicated policy.json files from other services in it’s configuration folder or is it enough to write policy.json for services I want to change? > >> > >> Only the ones you want to change. > >> > >>> > >>> 4. when I assign admin role to a user with projectID (openstack role add —project PROJECT_ID —user SOME_USER admin) this user sees in Horizon everything systemwide, not only inside this project… Which rules should be created to allow him to see only users and resources which belongs to this project? > >> > >> Currently admin is generally global in OpenStack. It's a known > >> limitation, and currently being worked on. 
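On the corner-case point above: before wiring a new role into rules, it helps to dump every registered default so no endpoint gets missed. A rough sketch using the oslo.policy tooling, assuming it is run somewhere keystone is installed (for Kolla that would be inside the keystone container):

# write out every registered policy rule with its in-code default
oslopolicy-sample-generator --namespace keystone --output-file /tmp/keystone-policy.yaml.sample

# show the effective policy, i.e. defaults merged with any overrides found on disk
oslopolicy-policy-generator --namespace keystone

Grepping the sample file for the user-related identity rules is a quick way to see everything a project_admin rule would need to cover.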
> >> > >>> > >>> Best regards > >>> Adam > From openinfradn at gmail.com Tue Mar 30 12:05:26 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 30 Mar 2021 17:35:26 +0530 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: Message-ID: Does anyone deploy Meghdwar with OpenStack? Seems it wasn't active for the last few years [1] https://wiki.openstack.org/wiki/Meghdwar -------------- next part -------------- An HTML attachment was scrubbed... URL: From gthiemonge at redhat.com Tue Mar 30 12:10:59 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Tue, 30 Mar 2021 14:10:59 +0200 Subject: [octavia] Xena PTG planning Message-ID: Hi Octavia community! We have booked 2 sessions for Octavia during the Xena PTG: Tuesday April 20 - 13:00 - 17:00 UTC Wednesday April 21 - 13:00 - 17:00 UTC There's an etherpad for the PTG planning, please add your topics to the list: https://etherpad.opendev.org/p/xena-ptg-octavia Thanks, Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Tue Mar 30 12:23:57 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 30 Mar 2021 17:53:57 +0530 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: Message-ID: On Thu, Mar 25, 2021 at 8:17 PM Sean Mooney wrote: > This is a demo of a third party extention that was never upstreamed. > > nova does not support create a base vm and then doing a local live > migration or restore for memory snapshots to create another vm. > > I just need to understand the risk and impact here but not desperately trying to use the technology. Let say there won't be multiple tenants, but different users supposed to access stateless VMs. Is it still secure? > this approch likely has several security implciations that would not be > accpeatable in a multi tenant enviornment. > > we have disucssed this type of vm creation in the past and determined > that it is not a valid implematnion of spawn. a virt driver that > precreate vms or copys an existing instance can be faster but that virt > driver is not considered a compliant implementation. > > so in short there is no way to achive this today in a compliant > openstack powered cloud. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Tue Mar 30 12:40:39 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Tue, 30 Mar 2021 12:40:39 +0000 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Message-ID: <63db4377c36d4200a9864c0eb9bba7e7@ncwmexgp009.CORP.CHARTERCOM.com> I've created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 I used the instructions here to successfully remove and replace control0 with a Centos8 box https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... 
[{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', 'rabbit at chrnc-void-testupgrade-control-0-replace', 'rabbit at chrnc-void-testupgrade-control-1', 'rabbit at chrnc-void-testupgrade-control-2']}]}, {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', 'rabbit at chrnc-void-testupgrade-control-1', 'rabbit at chrnc-void-testupgrade-control-2']}, {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, {partitions,[]}, {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, {'rabbit at chrnc-void-testupgrade-control-1',[]}, {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 ... control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', 'rabbit at chrnc-void-testupgrade-control-0-replace', 'rabbit at chrnc-void-testupgrade-control-1', 'rabbit at chrnc-void-testupgrade-control-2']}]}, {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', 'rabbit at chrnc-void-testupgrade-control-0-replace']}, {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, {partitions,[]}, {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] But my hypervisors are down: (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: 172.16.2.31 compute0 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: 
timed out. Trying again in 1 seconds.: timeout: timed out 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out In the RMQ logs I see this every 10 seconds: 172.16.1.132 control2 [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): client unexpectedly closed TCP connection 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Mar 30 12:45:19 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 30 Mar 2021 12:45:19 +0000 Subject: [neutron][db] Train: neutron-db-manage can't locate revision In-Reply-To: <20210330104946.Horde.fhRXZNsnfOkBL5jVBwyBa30@webmail.nde.ag> References: <20210330091632.Horde.II5iKA3aKPEIdV8f4sBpw7Q@webmail.nde.ag> <856542b7-19e3-2834-effa-fdf649de741d@redhat.com> <20210330104946.Horde.fhRXZNsnfOkBL5jVBwyBa30@webmail.nde.ag> Message-ID: <20210330124519.Horde.KjINM3vVvmOxAagzdBFVlUs@webmail.nde.ag> Interesting, I downloaded the latest ussuri rpm for python3-neutron and indeed, I see this line: revision = 'e4e236b0e1ff' But apparently my VM doesn't have that, I wonder how that could be. And the more important question is, how did that version get into the Train control nodes, they have never seen an Ussuri repository. 
But I did run the database migrations on said VM, so the only logical explanation is that the VM was in an inconsistent state, I guess. Anyway, I assume that after I upgrade the packages on the control nodes I'll have this mess cleaned up. Thanks again! Eugen Zitat von Eugen Block : > Thank you, Jakub. > >> it looks like you've run the db update with Ussuri code already. > > That can't be true, my virtual controller runs on Ussuri and it > reports these: > > ---snip--- > controller-vm:~ # neutron-db-manage current > Running current for neutron ... > INFO [alembic.runtime.migration] Context impl MySQLImpl. > INFO [alembic.runtime.migration] Will assume non-transactional DDL. > 5c85685d616d (head) > d8bdf05313f4 (head) > OK > ---snip--- > > There's also no sign of mentioned missing revision, I grepped in > /usr/lib/... without finding anything related to that. > > >> I think what you need is to update your alembic scripts to Ussuri and >> run the "update heads" command again. This should bring the DB schema to >> Ussuri version because it knows the revisions. > > Since I'm planning to upgrade to Ussuri anyway that's probably what > will resolve this, I assume. But as I said, I'm confused why I don't > find anything related to e4e236b0e1ff. > > Regards, > Eugen > > > Zitat von Jakub Libosvar : > >> On 30/03/2021 11:16, Eugen Block wrote: >>> Hi *, >>> >>> I have a new issue with our Train cloud (upgraded through the years). >>> Currently I'm preparing the next upgrade step to Ussuri and stumbled >>> upon this neutron db issue: >>> >>> >>> ---snip--- >>> controller01:~ # neutron-db-manage current >>> Running current for neutron ... >>> INFO [alembic.runtime.migration] Context impl MySQLImpl. >>> INFO [alembic.runtime.migration] Will assume non-transactional DDL. >>> ERROR [alembic.util.messaging] Can't locate revision identified by >>> 'e4e236b0e1ff' >>> FAILED: Can't locate revision identified by 'e4e236b0e1ff' >>> >>> >>> controller01:~ # neutron-db-manage revision --contract >>> Running revision for neutron ... >>> Generating >>> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/contract/b5344a66e818_.py >>> ... >>> done >>> OK >>> >>> >>> controller01:~ # neutron-db-manage revision --expand >>> Running revision for neutron ... >>> Generating >>> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >>> ... >>> done >>> OK >>> >>> >>> controller01:~ # neutron-db-manage heads >>> Running heads for neutron ... >>> 6c9eb0469914 (expand) (head) >>> c43a0ddb6a03 (contract) (head) >>> OK >>> ---snip--- >>> >>> >>> I've been searching how that revision got into the database but I >>> can't find any hint where that e4e236b0e1ff belongs to. To test the >>> upgrade process I have a VM that was initially setup with Ocata, I'm >>> able to investigate a snapshot from before the Pike upgrade, and I >>> can't find a clue either, except for the neutron database. >>> Although our current setup works just fine I'm not sure if this will >>> be a show stopper for the Ussuri upgrade, although it wasn't for the >>> previous upgrades. >>> >>> Anyway, how would I resolve this? I see these versions in the >>> neutron.alembic_version table: >>> >>> +--------------+ >>> | version_num | >>> +--------------+ >>> | 5c85685d616d | >>> | e4e236b0e1ff | >>> +--------------+ >>> >>> And 5c85685d616d seems to be a valid version from Newton, it seems. 
>>> Could anyone provide some guidance on how to fix this before the next >>> uprade? If possible I would like to avoid making manual changes in the >>> database myself. >>> >>> Thanks and best regards, >>> Eugen >>> >>> >> >> Hi Eugen, >> >> it looks like you've run the db update with Ussuri code already. >> >> e4e236b0e1ff was added in that cycle and 5c85685d616d is the contract >> head since Newton (seems like there has not been a change since). >> >> I think what you need is to update your alembic scripts to Ussuri and >> run the "update heads" command again. This should bring the DB schema to >> Ussuri version because it knows the revisions. >> >> I hope it helps. >> >> Best regards, >> Jakub From smooney at redhat.com Tue Mar 30 12:51:02 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 30 Mar 2021 13:51:02 +0100 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: Message-ID: <75bdba74-32aa-0086-1f78-98d07ba0d1a8@redhat.com> On 30/03/2021 13:23, open infra wrote: > > > On Thu, Mar 25, 2021 at 8:17 PM Sean Mooney > wrote: > > This is a demo of a third party extention that was never upstreamed. > > nova does not support create a base vm and then doing a local live > migration or restore for memory snapshots to create another vm. > > I just need to understand the risk and impact here but not desperately > trying to use the technology. > Let say there won't be multiple tenants, but different users supposed > to access stateless VMs. > Is it still secure? you will need to ask the third party vendor who forked openstack to produce it. in general i dont think cross projefct/teant shareing of stateless vm memory would be safe. we dont know why the image is loading into memory  when it boots. within the same project it might be but upstream cant really say since we have not review that code. what i would be most worried about is any keys that might be loaded by cloud init or similar that would be differnt between instances. im skeptical that this is actually a generic solution that should be implemented in a cloud environment. > > this approch likely has several security implciations that would > not be > accpeatable in a multi tenant enviornment. > > we have disucssed this type of vm creation in the past and determined > that it is not a valid implematnion of spawn. a virt driver that > precreate vms or copys an existing instance can be faster but that > virt > driver is not considered a compliant implementation. > > so in short there is no way to achive this today in a compliant > openstack powered cloud. > > From marcin.juszkiewicz at linaro.org Tue Mar 30 13:20:15 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Tue, 30 Mar 2021 15:20:15 +0200 Subject: [kolla] does someone uses AArch64 images? In-Reply-To: References: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> Message-ID: <37df3b61-f0cd-b139-c132-5640c23bbb4e@linaro.org> W dniu 29.03.2021 o 19:48, Thomas Goirand pisze: > On 3/29/21 5:34 PM, Marcin Juszkiewicz wrote: >>> Getting binary support is just a mater of rebuilding packages >>> for arm64. I once did that for you setting-up a Jenkins machine >>> just for rebuilding. It's really a shame that you guys aren't >>> following that road. You'd have all of my support if you did. I >>> personally don't have access to the necessary hardware for it, >>> and wont use the arm64 repos... >> >> The thing is - no one asked for Debian/binary images so far so I >> did not bothered you. 
>> >> Your team provides packages for already released OpenStack stuff. >> From Kolla point of view it means that at beginning of a new >> development cycle we would need to get images for previous release >> with Debian binary packages built, tested and backported. > > I used to package every beta (b1, b2, b3). But then, nobody was > consuming them, so I stopped. And the harsh reality right now, is > that most projects stopped producing these bX releases. > > Today, I'm already done with the packaging of all RC1 releases, and > I'm already beginning to test installing them. This is 2 days after > the RCs. Are they available somewhere online so they can be tested? > No other distribution beats me on this timing. RDO and UCA produce packages during development cycle. > So I'm not sure what more I could do... If you wish to participate > and provide intermediary releases using Git, that's possible, but > that means someone needs to participate and do it. I can't be full > time doing packaging like I used to when I was employed to do only > that (these days, I maintain my own installer and 8 clusters in > production...). I am aware that building OpenStack packages is not your daily job anymore. > Radosław Piliszek often ask me about the status of Debian packages, > and he manages to maintain compat in OSA using them. Why can't you > do it? Taking care of Kolla is one of my work tasks. Not primary anymore as we got to the point when I only need to fix when things break. > Again, I very much would love to collaborate and help you doing more > with the Debian binary stuff for ARM. I can check are there free resources left for providing you AArch64 machine for builds. From rosmaita.fossdev at gmail.com Tue Mar 30 13:23:54 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 30 Mar 2021 09:23:54 -0400 Subject: [cinder] this week's meeting in video+IRC Message-ID: <1fe97e07-2f7b-e77d-547d-c1f063ba6b7a@gmail.com> Quick reminder that this week's Cinder team meeting on Wednesday 31 March, being the final meeting of the month, will be held in both videoconference and IRC at the regularly scheduled time of 1400 UTC. Here's a quick reminder of the video meeting rules we've agreed to: * Everyone will keep IRC open during the meeting. * We'll take notes in IRC to leave a record similar to what we have for our regular IRC meetings. * Some people are more comfortable communicating in written English. So at any point, any attendee may request that the discussion of the current topic be conducted entirely in IRC. connection info: https://bluejeans.com/3228528973 cheers, brian From jmlineb at sandia.gov Tue Mar 30 13:26:35 2021 From: jmlineb at sandia.gov (Linebarger, John) Date: Tue, 30 Mar 2021 13:26:35 +0000 Subject: How to debug silent live migration errors In-Reply-To: <580b3a3a2bf64d14a8e9d002cfcdfd47@ES07AMSNLNT.srn.sandia.gov> References: <580b3a3a2bf64d14a8e9d002cfcdfd47@ES07AMSNLNT.srn.sandia.gov> Message-ID: <6031f20aea0a436ab1c61da4af7facba@ES07AMSNLNT.srn.sandia.gov> How would I debug silent (or mostly silent) live migration errors? We're using the Stein release of Canonical's Charmed OpenStack. I have configured it for live migration per the instructions at this link: https://docs.openstack.org/nova/pike/admin/configuring-migrations.html#section-configuring-compute-migrations Specifically: 1. I did not specify vncserver_listen=0.0.0.0 in nova.conf because we are not running VNC on our instances 2. instances_path is /var/lib/nova/instances on all compute nodes 3. 
I believe that MAAS is "the sole provider of DHCP and DNS for the network hosting the MAAS cluster", per https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-maas.html 4. Identical authorized_keys files are present on all compute nodes with keys from all compute nodes by default 5. I manually configured the firewalls on all compute nodes to allow libvirt to communicate between compute hosts with: sudo ufw allow 49152:49261/tcp 6. The following settings are specified in nova.conf on each compute node: live_migration_downtime = 500 live_migration_downtime_steps = 10 live_migration_downtime_delay = 75 live_migration_permit_post_copy=true Here's what happens when I try to Live Migrate from the Horizon Dashboard: 1. As admin, in the Admin --> Instances menu, I select the dropdown arrow to the right of the instance. Live Migrate Instance appears (but in black, unlike Migrate Instance, which appears in red). I select Live Migrate Instance, and whether or not I Automatically schedule new host or manually select a new host the Task column says "Migrating" and then it stops and reverts to None. The server never changes. The Action Log shows the live migration request but the Message column is blank. 2. I do the very same thing but this time select Disk Over Commit. Same results. Migrating reverts back to None and the server never changes. 3. I do the very same thing but this time select Block Migration. This time I do get an error: "Failed to live migrate instance to host 'AUTO_SCHEDULE'". And this time the Action Log has "Error" in the Message column. Same behavior with the CLI. For example, this CLI command below completes silently, yet the server for the instance never changes. openstack server migrate --live [Silent failure] openstack server show [Still running on original server] Note that I *can* successfully Migrate, both using the Horizon Dashboard and the CLI. What fails is Live Migration. I just have no idea why, and no error is displayed in the Action Log for the instance. For reference, the instance is an m1.small with 2GB of RAM, 1 VCPU, and a 20GB Cinder disk volume attached on /dev/vda. Any and all debugging ideas would be most welcome. Without logs I am simply guessing in the dark at this point. Thanks! Enjoy! John M. Linebarger, PhD, MBA Principal Member of Technical Staff Sandia National Laboratories (Office) 505-845-8282 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Tue Mar 30 13:40:47 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Tue, 30 Mar 2021 15:40:47 +0200 Subject: [kolla-ansible][horizon][keystone] policy.yaml/json files In-Reply-To: References: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> Message-ID: <9422CF6A-CEE0-4D96-A1C7-87DBE42319A6@poczta.onet.pl> (horizon)[root at controller1 /]# cd /etc/openstack-dashboard/ (horizon)[root at controller1 openstack-dashboard]# ls -la total 104 drwxr-xr-x 1 horizon horizon 186 Mar 30 13:17 . drwxr-xr-x 1 root root 59 Mar 30 12:56 .. 
-rw-r--r-- 1 horizon horizon 8560 Feb 7 08:13 cinder_policy.json -rw------- 1 horizon horizon 301 Mar 30 13:17 custom_local_settings -rw-r--r-- 1 horizon horizon 1388 Feb 7 08:13 glance_policy.json -rw-r--r-- 1 root root 4544 Feb 7 08:14 heat_policy.json -rw-r--r-- 1 horizon horizon 10144 Feb 7 08:13 keystone_policy.json -rw------- 1 horizon horizon 29710 Mar 30 13:17 local_settings -rw-r--r-- 1 root root 395 Feb 7 08:14 masakari_policy.json -rw-r--r-- 1 horizon horizon 12580 Feb 7 08:13 neutron_policy.json drwxr-xr-x 2 horizon horizon 33 Feb 7 08:13 nova_policy.d -rw------- 1 horizon horizon 269 Mar 30 13:17 nova_policy.json -rw-r--r-- 1 root root 1268 Feb 7 08:14 senlin_policy.json -rw-r--r-- 1 root root 1138 Feb 7 08:14 watcher_policy.json (horizon)[root at controller1 openstack-dashboard]# cat nova_policy.json { "context_is_admin": "role:admin", "admin_or": "is_admin:True" "os_compute_api:os-consoles:show": "rule:admin_or", "os_compute_api:servers:start": "rule:admin_or", "os_compute_api:servers:stop": "rule:admin_or" } So the file with overrides is properly placed inside horizon container. And the above policy should prevent non-admin user from starting/stopping instances and showing the console and… it does not work. Still as regular user I’m able to start and stop all instances in project from Horizon GUI. If the policy is in horizon configuration then it should work from horizon dashboard (without the same policy in nova it should be still possible to start/stop instances from CLI). > Wiadomość napisana przez Mark Goddard w dniu 30.03.2021, o godz. 12:51: > > On Tue, 30 Mar 2021 at 10:52, Adam Tomas wrote: >> >> Hi, >> thank you for the answers, but I still have more questions :) >> >> Without any custom policies when I look inside the horizon container I see (in /etc/openstack-dashboard) current/default policies. If I override (for example keystone_policy.json) with a file placed in /etc/kolla/config/horizon which contains only 3 rules, then after kolla-ansible reconfigure inside horizon container there is of course keystone_police.json file, but only with my 3 rules - should I assume, that previously seen default rules (other than the ones overridden by my rules) still works, whether I see them in the file or not? > > I'd assume Horizon works in the same way as other services, and you > only need to include changes. Please test and report back. > >> >> And another question - I need a rule, that will allow some „special” user (which I call project_admin) to see,create, update and delete users inside a project (but not elsewhere). How should the policy look like? >> >> „project_admin_required”: „role:project_admin and default_project_id:%(target.project_id)s" >> „identity:list_user”: „rule: admin_required or project_admin_required” >> „identity:create_user”: „rule: admin_required or project_admin_required” >> „identity:update_user”: „rule: admin_required or project_admin_required” >> „identity:delete_user”: „rule: admin_required or project_admin_required” >> > > As I mentioned before, admin is global in OpenStack for now. There may > be various ways to achieve what you want. One is to introduce a role, > and use it in the rules. It's a bit of a can of worms though, since > there are many API endpoints which might need to be updated to catch > all corner cases. I added keystone to the subject, in case anyone from > that team wants to comment. > >> ? >> Best regards, >> Adam >> >>> Wiadomość napisana przez Mark Goddard w dniu 30.03.2021, o godz. 
11:05: >>> >>> On Tue, 30 Mar 2021 at 09:24, Mark Goddard wrote: >>>> >>>> On Mon, 29 Mar 2021 at 15:36, Adam Tomas wrote: >>>>> >>>>> Hi, >>>> >>>> Hi, Looks like we need some more/better docs on this in Kolla. >>> >>> Proposed some docs improvements: >>> https://review.opendev.org/c/openstack/kolla-ansible/+/783809 >>> >>>> >>>>> Im not quite clear about policy.yaml/json files in kolla-ansible. Let assume, that I need to allow one of project users to add other users to the project. So I create „project_admin” role and assign it to this user. Then I found /etc/kolla/keystone/policy.json.test file, which I use as template. There is rule „identity:create_credential” : „(role:admin and system_scope:all)” so I add „or role:project_admin” and put file in /etc/kolla/config/keystone/ and reconfigure kolla. And now few questions: >>>>> >>>>> 1. policy.json (or policy.yaml) always overwrite all default policies? I mean if I only add one rule to this file then other rules will „disappear” or will have default values? Is there any way to only overwrite some default rules and leave rest with defaults? Like with .conf files >>>> >>>> For a few releases now, OpenStack supports policy in code. This means >>>> that you only need to include the rules you want to override in your >>>> JSON/YAML file. >>>> >>>>> >>>>> 2. what about Horizon and visibility of options? In mentioned case putting the same policy.json file in /etc/kolla/config/keystone/ and /etc/kolla/config/horizon/ should „unblock” Add User button for user with project_admin role? Or how to achieve it? >>>> >>>> For keystone policy in horizon, you need to use: >>>> >>>> /etc/kolla/config/horizon/keystone_policy.json >>>> >>>>> >>>>> 3. does Horizon need the duplicated policy.json files from other services in it’s configuration folder or is it enough to write policy.json for services I want to change? >>>> >>>> Only the ones you want to change. >>>> >>>>> >>>>> 4. when I assign admin role to a user with projectID (openstack role add —project PROJECT_ID —user SOME_USER admin) this user sees in Horizon everything systemwide, not only inside this project… Which rules should be created to allow him to see only users and resources which belongs to this project? >>>> >>>> Currently admin is generally global in OpenStack. It's a known >>>> limitation, and currently being worked on. >>>> >>>>> >>>>> Best regards >>>>> Adam >> From eblock at nde.ag Tue Mar 30 14:53:55 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 30 Mar 2021 14:53:55 +0000 Subject: [ops][victoria] Windows Server 2012 R2 - Can't install - unbootable In-Reply-To: <0670B960225633449A24709C291A52524FBACB34@COM01.performair.local> Message-ID: <20210330145355.Horde.2A1PgyEIzWxPzN36eh0muw0@webmail.nde.ag> Hi, how are your images configured wrt hw_disk_bus and hw_scsi_model? Just last week I tried to update a windows image (Win 8) in Glance since it's recommended to use hw_scsi_model=virtio-scsi and hw_disk_bus=scsi. But the vm kept showing a bluescreen and didn't start until I removed those glance properties. Maybe that's what you're hitting, too? Regards, Eugen Zitat von DHilsbos at performair.com: > All; > > We have a recently built Victoria cluster with storage provided by a > Ceph Nautilus cluster. > > I'm working through the following tutorial: > https://medium.com/@yooniks9/openstack-install-windows-server-2016-2019-with-image-iso-d8c17c8cfc36 > > When I get to step 4 under "Windows Server Installation," it says > "Windows can't be installed on this drive." 
Details are: " Windows > cannot be installed to this disk. This computer's hardware may not > support booting to this disk. Ensure the disk's controller is > enabled in the computer's BIOS menu. > > Any recommendations on what I'm missing here would be appreciated. > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com From rosmaita.fossdev at gmail.com Tue Mar 30 15:24:43 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 30 Mar 2021 11:24:43 -0400 Subject: [ops][cinder][nova] os-brick upcoming releases Message-ID: Hello operators, You may have heard about a potential data-loss bug [0] that was recently discovered. It has been fixed in the upcoming wallaby release and we are planning to backport to all stable branches and do new os-brick releases from the releasable stable branches. In the meantime, the bug occurs if the multipath configuration option on a compute is changed while volumes are attached to instances on that compute. The possible data loss may occur when the volumes are detached (migration, volume-detach, etc.). Thus, before the new os-brick releases are available, the issue can nonetheless be averted by not making such a configuration change under those circumstances. The new os-brick releases will be: - victoria: 4.0.3 - ussuri: 3.0.6 - train: 2.10.6 The stein, rocky, and queens branches are in Extended Maintenance mode and are no longer released from, but critical fixes are backported to them when possible, though it may take a while before these are merged. [0] https://launchpad.net/bugs/1921381 From ralonsoh at redhat.com Tue Mar 30 15:33:40 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 30 Mar 2021 17:33:40 +0200 Subject: [neutron] oslo.privsep migration in Neutron Message-ID: Hello Neutrinos: During the last cycles we have been migrating the Neutron code from oslo.rootwrap to oslo.privsep. Those efforts are aimed at reaching the goal defined in [1] and are tracked in [2]. At this point, starting Xena developing cycle, we can state that we have migrated all short lived commands from oslo.rootwrap to oslo.privsep or to a native implementation (that could also use oslo.privsep to elevate the permissions if needed). The problem are the daemons or services (long lived processes) that Neutron spawns using "ProcessManager"; this is why "ProcessManager.enable" is the only code calling "utils.execute" without "privsep_exec" parameter. Those process cannot be executed using oslo.privsep because the privsep root daemon has a limited number of executing threads. The remaining processes are [3]. Although we didn't reach the Completion Criteria defined in [1], that is remove the oslo.rootwrap dependency, I think we don't have an alternative to run those services and we should keep rootwrap for them. If there are no objections, once [3] is merged we can consider that Neutron (not other Stadium projects) finished the efforts on [1]. Please, any feedback is always welcome. Regards. [1]https://review.opendev.org/c/openstack/governance/+/718177 [2]https://storyboard.openstack.org/#!/story/2007686 [3] https://review.opendev.org/c/openstack/neutron/+/778444/2/etc/neutron/rootwrap.d/rootwrap.filters -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stephenfin at redhat.com Tue Mar 30 16:45:48 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 30 Mar 2021 17:45:48 +0100 Subject: [nova] Nominate sean-k-mooney for nova-specs-core Message-ID: Hey, Sean has been working on nova for what seems like yonks now. Each cycle, they spend a significant amount of time reviewing proposed specs and contributing to discussions at the PTG. This is important work and their contributions provide everyone with a deep pool of knowledge on all things networking and hardware upon which to draw. I think the nova project would benefit from their addition to the specs core reviewer team and I therefore propose we add Sean to nova- specs-core. Assuming there are no objections, I'll work with gibi to add Sean to nova-specs- core next week. Cheers, Stephen From alex.kavanagh at canonical.com Tue Mar 30 16:50:11 2021 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Tue, 30 Mar 2021 17:50:11 +0100 Subject: [charms] Charms libraries freeze on Friday Message-ID: Hi All Just a reminder that this week is the last opportunity to get features into the libraries for the release. The libraries affected by this freeze are: - charm-helpers - charms.openstack - charms.ceph - zaza - zaza-openstack-tests We will be taking a stable cut (stable/21.04) of these libraries on Friday and doing a batch review of the charms to point to these cuts during the testing period with a final charm-helpers sync at this point. The reactive charms will have a build.lock file introduced to lock their versions of the layers, interfaces and python modules (the above libraries will be tracked by stable branch). Only bug fixes and features-by-exception will be accepted into the stable cuts of these libraries after that point. For the avoidance of doubt, we will be testing and releasing against the current latest stable of: - juju 2.8.10 - python-libjuju 2.8.6 - charm-tools 2.8.3 Many thanks Alex. -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue Mar 30 16:51:37 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 30 Mar 2021 18:51:37 +0200 Subject: [nova] Nominate sean-k-mooney for nova-specs-core In-Reply-To: References: Message-ID: <16KSQQ.5VMRD7VXVLI71@est.tech> On Tue, Mar 30, 2021 at 17:45, Stephen Finucane wrote: > Hey, > > Sean has been working on nova for what seems like yonks now. Each > cycle, they > spend a significant amount of time reviewing proposed specs and > contributing to > discussions at the PTG. This is important work and their > contributions provide > everyone with a deep pool of knowledge on all things networking and > hardware > upon which to draw. I think the nova project would benefit from their > addition > to the specs core reviewer team and I therefore propose we add Sean > to nova- > specs-core. +1, I rely on Sean's reviews frequently. Cheers, gibi > > Assuming there are no objections, I'll work with gibi to add Sean to > nova-specs- > core next week. 
> > Cheers, > Stephen > > > From noonedeadpunk at ya.ru Tue Mar 30 17:09:15 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 30 Mar 2021 20:09:15 +0300 Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: <759905868.77860.1617022916703@ox.dhbw-mannheim.de> References: <759905868.77860.1617022916703@ox.dhbw-mannheim.de> Message-ID: <38341617124089@mail.yandex.ru> So according to the issue, you get 503 while trying to reach 10.0.3.212:6002/os-objects, which is swift_account_port. Are there any logs specificly for swift-account? Also I guess some adjustments are required for swift as well for this mechanism to work. Eventually I believe the original issue you saw might be related to this doc: https://docs.openstack.org/keystone/latest/admin/manage-services.html#configuring-service-tokens Might be that swift also needs applying changes to accept service tokens... 29.03.2021, 16:06, "Oliver Wenz" : >>  Oh, I have a guess what this might actually be. During snapshot upload process >>  user token that is used for the upload might get expired. If that's the case, >>  following changes in user_variables might help to resolve the issue: >> >>  glance_glance_api_conf_overrides: >>  keystone_authtoken: >>  service_token_roles_required: True >>  service_token_roles: service > > I found out that after inserting the above the problems now occurs every time I > try to take a snapshot of an instance. Below are my logs: --  Kind Regards, Dmitriy Rabotyagov From dvd at redhat.com Tue Mar 30 17:26:08 2021 From: dvd at redhat.com (David Vallee Delisle) Date: Tue, 30 Mar 2021 13:26:08 -0400 Subject: [nova] Nominate sean-k-mooney for nova-specs-core In-Reply-To: <16KSQQ.5VMRD7VXVLI71@est.tech> References: <16KSQQ.5VMRD7VXVLI71@est.tech> Message-ID: Sean is very knowledgeable in a vast array of sectors and they are always very helpful. They have my vote for all the cores. +1 DVD On Tue, Mar 30, 2021 at 12:54 PM Balazs Gibizer wrote: > > > On Tue, Mar 30, 2021 at 17:45, Stephen Finucane > wrote: > > Hey, > > > > Sean has been working on nova for what seems like yonks now. Each > > cycle, they > > spend a significant amount of time reviewing proposed specs and > > contributing to > > discussions at the PTG. This is important work and their > > contributions provide > > everyone with a deep pool of knowledge on all things networking and > > hardware > > upon which to draw. I think the nova project would benefit from their > > addition > > to the specs core reviewer team and I therefore propose we add Sean > > to nova- > > specs-core. > > +1, I rely on Sean's reviews frequently. > > Cheers, > gibi > > > > > Assuming there are no objections, I'll work with gibi to add Sean to > > nova-specs- > > core next week. > > > > Cheers, > > Stephen > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Tue Mar 30 17:54:16 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Mar 2021 17:54:16 +0000 Subject: [oslo] fix flake8-hacking inconsistences on xena/wallaby In-Reply-To: References: <20210324160917.nzzfcuanytmrbujh@yuggoth.org> Message-ID: <20210330175416.jhe5cjf4swz77dt3@yuggoth.org> On 2021-03-30 05:42:25 -0400 (-0400), Sorin Sbarnea wrote: > This is all is needed to make hacking being directly consumable by > pre-commit (tool, not git hook): > https://review.opendev.org/c/openstack/hacking/+/783820 > > I already did some tests and it worked fine, that is how I made bashate and > doc8 long time ago: > > https://github.com/openstack/bashate/blob/master/.pre-commit-hooks.yaml > https://github.com/PyCQA/doc8/blob/master/.pre-commit-hooks.yaml > > Once that is merged and released, you could replace the normal flake8 usage > inside your repo .pre-commit-config.yaml file: > > - repo: https://review.opendev.org/openstack/hacking > rev: 4.1.0 # or whatever release number we will pick > hooks: > - id: hacking > > If you want to add your own extra plugins you can still do it using the > additional_dependencies key. [...] I'm confused as to why you'd call it hacking when you're invoking flake8 with multiple plugins only one of which is hacking. It's just flake8 and the plugins you've installed for it, right? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lyarwood at redhat.com Tue Mar 30 18:24:24 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 30 Mar 2021 19:24:24 +0100 Subject: [nova] Nominate sean-k-mooney for nova-specs-core In-Reply-To: References: Message-ID: On Tue, 30 Mar 2021 at 17:50, Stephen Finucane wrote: > > Hey, > > Sean has been working on nova for what seems like yonks now. Each cycle, they > spend a significant amount of time reviewing proposed specs and contributing to > discussions at the PTG. This is important work and their contributions provide > everyone with a deep pool of knowledge on all things networking and hardware > upon which to draw. I think the nova project would benefit from their addition > to the specs core reviewer team and I therefore propose we add Sean to nova- > specs-core. > > Assuming there are no objections, I'll work with gibi to add Sean to nova-specs- > core next week. +1 from me. From gmann at ghanshyammann.com Tue Mar 30 18:52:06 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 30 Mar 2021 13:52:06 -0500 Subject: [nova] Nominate sean-k-mooney for nova-specs-core In-Reply-To: References: Message-ID: <178847a8bc0.c286c1f11306773.6313638353690934445@ghanshyammann.com> ---- On Tue, 30 Mar 2021 11:45:48 -0500 Stephen Finucane wrote ---- > Hey, > > Sean has been working on nova for what seems like yonks now. Each cycle, they > spend a significant amount of time reviewing proposed specs and contributing to > discussions at the PTG. This is important work and their contributions provide > everyone with a deep pool of knowledge on all things networking and hardware > upon which to draw. I think the nova project would benefit from their addition > to the specs core reviewer team and I therefore propose we add Sean to nova- > specs-core. > > Assuming there are no objections, I'll work with gibi to add Sean to nova-specs- > core next week. +1 from me. 
-gmann > > Cheers, > Stephen > > > > From zigo at debian.org Tue Mar 30 19:30:19 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 30 Mar 2021 21:30:19 +0200 Subject: [kolla] does someone uses AArch64 images? In-Reply-To: <37df3b61-f0cd-b139-c132-5640c23bbb4e@linaro.org> References: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> <37df3b61-f0cd-b139-c132-5640c23bbb4e@linaro.org> Message-ID: On 3/30/21 3:20 PM, Marcin Juszkiewicz wrote: > W dniu 29.03.2021 o 19:48, Thomas Goirand pisze: >> On 3/29/21 5:34 PM, Marcin Juszkiewicz wrote: > >>>> Getting binary support is just a mater of rebuilding packages >>>> for arm64. I once did that for you setting-up a Jenkins machine >>>> just for rebuilding. It's really a shame that you guys aren't >>>> following that road. You'd have all of my support if you did. I >>>> personally don't have access to the necessary hardware for it, and >>>> wont use the arm64 repos... >>> >>> The thing is - no one asked for Debian/binary images so far so I did >>> not bothered you. >>> >>> Your team provides packages for already released OpenStack stuff. >>> From Kolla point of view it means that at beginning of a new >>> development cycle we would need to get images for previous release >>> with Debian binary packages built, tested and backported. >> >> I used to package every beta (b1, b2, b3). But then, nobody was >> consuming them, so I stopped. And the harsh reality right now, is that >> most projects stopped producing these bX releases. >> >> Today, I'm already done with the packaging of all RC1 releases, and >> I'm already beginning to test installing them. This is 2 days after >> the RCs. > > Are they available somewhere online so they can be tested? Of course: either through extrepo (for example: "extrepo enable openstack_wallaby") or directly at: http://osbpo.debian.net/debian/ Look into the "dists" folder to see all what's available: at the moment, all from jessie-liberty to bullseye-wallaby. >> So I'm not sure what more I could do... If you wish to participate and >> provide intermediary releases using Git, that's possible, but that >> means someone needs to participate and do it. I can't be full time >> doing packaging like I used to when I was employed to do only that >> (these days, I maintain my own installer and 8 clusters in >> production...). > > I am aware that building OpenStack packages is not your daily job anymore. It is part of my daily job, but not 100% of my time. Maybe 20%... >> Again, I very much would love to collaborate and help you doing more >> with the Debian binary stuff for ARM. > I can check are there free resources left for providing you AArch64 > machine for builds. I'm ok to do the setup, and explain how to use the box, however, someone else will have to babysit the build process: I don't want to just do it for free, with no return, when I'm not using ARM boxes myself. Note that I need one Jenkins builder VM per OpenStack release, and that the setup is done using Ansible. Cheers, Thomas Goirand (zigo) From zigo at debian.org Tue Mar 30 19:32:40 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 30 Mar 2021 21:32:40 +0200 Subject: [neutron] oslo.privsep migration in Neutron In-Reply-To: References: Message-ID: <0b6c1c71-bedd-f6ef-30cb-b11c7dd9bff8@debian.org> On 3/30/21 5:33 PM, Rodolfo Alonso Hernandez wrote: > Hello Neutrinos:  >   > During the last cycles we have been migrating the Neutron code from > oslo.rootwrap to oslo.privsep. 
Those efforts are aimed at reaching the > goal defined in [1] and are tracked in [2]. > > At this point, starting Xena developing cycle, we can state that we have > migrated all short lived commands from oslo.rootwrap to oslo.privsep or > to a native implementation (that could also use oslo.privsep to elevate > the permissions if needed). Really: THANKS ! Cheers, Thomas Goirand From mkopec at redhat.com Tue Mar 30 20:44:30 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 30 Mar 2021 22:44:30 +0200 Subject: [qa][tempest] Announcing scenario.manager stable interface Message-ID: Hi all, we would like to announce that tempest.scenario.manager has been declared stable in Tempest 27.0.0 and it's ready to be consumed by tempest plugins. All related patches were tracked under 'tempest-scenario-manager-stable' blueprint [1] and the whole effort was tracked in [2]. A little history: Some time ago, tempest/scenario/manager.py got copied to most of the plugins and therefore, it diverged - every plugin's copy had slight differences. In the latest release, we pushed changes to unify the manager's methods and improved their APIs in order to have them easier to consume. I would also like to thank Soniya Vyas for an exceptional job moving this effort forward. [1] https://review.opendev.org/q/topic:bp/tempest-scenario-manager-stable [2] https://etherpad.opendev.org/p/tempest-scenario-manager Regards, -- Martin Kopec -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Tue Mar 30 20:47:53 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 30 Mar 2021 22:47:53 +0200 Subject: [qa][hacking] Proposing new core reviewers Message-ID: Hi all, I'd like to propose Sorin Sbarnea (IRC: zbr) and Radosław Piliszek (IRC: yoctozepto) to hacking core. They both are doing a great upstream work among multiple different projects and volunteered to help us with maintenance of hacking project as well. You can vote/feedback in this email thread. If no objection by 6th of April, we will add them to the list. Regards, -- Martin Kopec -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Mar 30 21:10:20 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 30 Mar 2021 16:10:20 -0500 Subject: [qa][tempest] Announcing scenario.manager stable interface In-Reply-To: References: Message-ID: <17884f9199d.10e492a891309893.5359473419855789463@ghanshyammann.com> ---- On Tue, 30 Mar 2021 15:44:30 -0500 Martin Kopec wrote ---- > Hi all, > we would like to announce that tempest.scenario.manager has been declared stablein Tempest 27.0.0 and it's ready to be consumed by tempest plugins.All related patches were tracked under 'tempest-scenario-manager-stable' blueprint [1] > and the whole effort was tracked in [2]. > A little history:Some time ago, tempest/scenario/manager.py got copied to most of the plugins and > therefore, it diverged - every plugin's copy had slight differences. In the latest release, > we pushed changes to unify the manager's methods and improved their APIs in orderto have them easier to consume. > I would also like to thank Soniya Vyas for an exceptional job moving this effort forward. > To add here, this is stable for plugins which means we will take care of backward compatibility for any interface change in this file. Import path stays the same "from tempest.scenario import manager" [1]. We encourage each plugin to remove the copy of it and start using it from the tempest. 
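For plugin maintainers, a rough way to confirm that a test environment really picks up the stable in-tree manager rather than a stale local copy (treat this as a sketch -- plugin layouts vary):

  pip show tempest | grep ^Version        # expect 27.0.0 or later
  python3 -c "from tempest.scenario import manager; print(manager.__file__)"
  # any forked copy still carried inside the plugin tree can then be dropped, e.g.
  find . -path '*/scenario/manager.py' -not -path '*/tempest/*'
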
Thanks, Soniya, Lukas, and Martin for working hard on this, this was a long-pending thing in Tempest. [1] https://opendev.org/openstack/tempest/src/commit/c0a408b803ba8df8e5570b9d877e15ccabb52fb2/tempest/scenario/manager.py -gmann > [1] https://review.opendev.org/q/topic:bp/tempest-scenario-manager-stable[2] https://etherpad.opendev.org/p/tempest-scenario-manager > Regards, > -- > Martin Kopec > > > From gmann at ghanshyammann.com Tue Mar 30 21:10:39 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 30 Mar 2021 16:10:39 -0500 Subject: [qa][hacking] Proposing new core reviewers In-Reply-To: References: Message-ID: <17884f963df.10a3434921309904.7860238401031818012@ghanshyammann.com> ---- On Tue, 30 Mar 2021 15:47:53 -0500 Martin Kopec wrote ---- > Hi all, > I'd like to propose Sorin Sbarnea (IRC: zbr) and Radosław Piliszek (IRC: yoctozepto) to hacking > core. They both are doing a great upstream work among multiple different projects andvolunteered to help us with maintenance of hacking project as well. > > You can vote/feedback in this email thread. If no objection by 6th of April, we will add them > to the list. +1 from me. -gmann > > Regards, > -- > Martin Kopec > > > From Arkady.Kanevsky at dell.com Tue Mar 30 22:14:22 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 30 Mar 2021 22:14:22 +0000 Subject: [qa][tempest] Announcing scenario.manager stable interface In-Reply-To: References: Message-ID: Very nice.. Thanks Martin From: Martin Kopec Sent: Tuesday, March 30, 2021 3:45 PM To: openstack-discuss Subject: [qa][tempest] Announcing scenario.manager stable interface [EXTERNAL EMAIL] Hi all, we would like to announce that tempest.scenario.manager has been declared stable in Tempest 27.0.0 and it's ready to be consumed by tempest plugins. All related patches were tracked under 'tempest-scenario-manager-stable' blueprint [1] and the whole effort was tracked in [2]. A little history: Some time ago, tempest/scenario/manager.py [manager.py] got copied to most of the plugins and therefore, it diverged - every plugin's copy had slight differences. In the latest release, we pushed changes to unify the manager's methods and improved their APIs in order to have them easier to consume. I would also like to thank Soniya Vyas for an exceptional job moving this effort forward. [1] https://review.opendev.org/q/topic:bp/tempest-scenario-manager-stable [review.opendev.org] [2] https://etherpad.opendev.org/p/tempest-scenario-manager [etherpad.opendev.org] Regards, -- Martin Kopec -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Mar 30 22:26:45 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 30 Mar 2021 17:26:45 -0500 Subject: [security sig] PTG Timeslot Message-ID: Hello, The security SIG has a spot at the PTG from 1500 to 1700 UTC on April 19th. The agenda[0] is up for anyone to add if they have something to discuss. Hope to see you all there. [0] https://etherpad.opendev.org/p/security-sig-ptg-xena -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Mar 30 22:29:26 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 30 Mar 2021 17:29:26 -0500 Subject: [security sig] April meeting cancelled Message-ID: Hey team, The security sig meeting for April 1st is cancelled, we will meet again next month at the usual time. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Wed Mar 31 06:32:31 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 31 Mar 2021 08:32:31 +0200 Subject: Networking-midonet deprecated Message-ID: <20210331063231.m4i2hq42bqnyehtg@p1.localdomain> Hi, As it was discussed and decided in the other thread [1] due to lack of maintainers and not good shape of the project, networking-midonet was deprecated and is not part of the official Neutron stadium. Almost all required patches from [2] are already merged. Also patch [3] is merged already so there will be no official Wallaby release of the networking-midonet project. Last stable, supported release is Victroria. Project can be revived in the x/ namespace if there will be someone who wants to maintain it. [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021400.html [2] https://review.opendev.org/q/topic:%22deprecate-networking-midonet%22+(status:open%20OR%20status:merged) [3] https://review.opendev.org/c/openstack/releases/+/781713 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From soulxu at gmail.com Wed Mar 31 06:51:08 2021 From: soulxu at gmail.com (Alex Xu) Date: Wed, 31 Mar 2021 14:51:08 +0800 Subject: [nova] Nominate sean-k-mooney for nova-specs-core In-Reply-To: References: Message-ID: +1 Stephen Finucane 于2021年3月31日周三 上午12:54写道: > > Hey, > > Sean has been working on nova for what seems like yonks now. Each cycle, they > spend a significant amount of time reviewing proposed specs and contributing to > discussions at the PTG. This is important work and their contributions provide > everyone with a deep pool of knowledge on all things networking and hardware > upon which to draw. I think the nova project would benefit from their addition > to the specs core reviewer team and I therefore propose we add Sean to nova- > specs-core. > > Assuming there are no objections, I'll work with gibi to add Sean to nova-specs- > core next week. > > Cheers, > Stephen > > > From skaplons at redhat.com Wed Mar 31 06:53:56 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 31 Mar 2021 08:53:56 +0200 Subject: [neutron] oslo.privsep migration in Neutron In-Reply-To: References: Message-ID: <20210331065356.mdldig54kpd3rj2t@p1.localdomain> Hi, On Tue, Mar 30, 2021 at 05:33:40PM +0200, Rodolfo Alonso Hernandez wrote: > Hello Neutrinos: > > During the last cycles we have been migrating the Neutron code from > oslo.rootwrap to oslo.privsep. Those efforts are aimed at reaching the goal > defined in [1] and are tracked in [2]. > > At this point, starting Xena developing cycle, we can state that we have > migrated all short lived commands from oslo.rootwrap to oslo.privsep or to > a native implementation (that could also use oslo.privsep to elevate the > permissions if needed). Thanks a lot Rodolfo for working on that. Great job! > > The problem are the daemons or services (long lived processes) that Neutron > spawns using "ProcessManager"; this is why "ProcessManager.enable" is the > only code calling "utils.execute" without "privsep_exec" parameter. Those > process cannot be executed using oslo.privsep because the privsep root > daemon has a limited number of executing threads. The remaining processes > are [3]. 
> > Although we didn't reach the Completion Criteria defined in [1], that is > remove the oslo.rootwrap dependency, I think we don't have an alternative > to run those services and we should keep rootwrap for them. If there are no > objections, once [3] is merged we can consider that Neutron (not other > Stadium projects) finished the efforts on [1]. Sounds good for me. > > Please, any feedback is always welcome. Maybe some oslo.privsep experts can take a look into that and help to solve that problem somehow. If not, then IMO we can live with it like it is now. > > Regards. > > [1]https://review.opendev.org/c/openstack/governance/+/718177 > [2]https://storyboard.openstack.org/#!/story/2007686 > [3] > https://review.opendev.org/c/openstack/neutron/+/778444/2/etc/neutron/rootwrap.d/rootwrap.filters -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From marcin.juszkiewicz at linaro.org Wed Mar 31 07:34:06 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Wed, 31 Mar 2021 09:34:06 +0200 Subject: [kolla] does someone uses AArch64 images? In-Reply-To: References: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> <37df3b61-f0cd-b139-c132-5640c23bbb4e@linaro.org> Message-ID: W dniu 30.03.2021 o 21:30, Thomas Goirand pisze: > On 3/30/21 3:20 PM, Marcin Juszkiewicz wrote: >>> Today, I'm already done with the packaging of all RC1 releases, and >>> I'm already beginning to test installing them. This is 2 days after >>> the RCs. >> >> Are they available somewhere online so they can be tested? > > Of course: either through extrepo (for example: "extrepo enable > openstack_wallaby") or directly at: > http://osbpo.debian.net/debian/ > > Look into the "dists" folder to see all what's available: at the moment, > all from jessie-liberty to bullseye-wallaby. While we are buster-wallaby ;D Move to bullseye will happen once Xena cycle opens - I waited for full freeze with it and had several other things to do. >>> Again, I very much would love to collaborate and help you doing more >>> with the Debian binary stuff for ARM. >> I can check are there free resources left for providing you AArch64 >> machine for builds. > > I'm ok to do the setup, and explain how to use the box, however, someone > else will have to babysit the build process: I don't want to just do it > for free, with no return, when I'm not using ARM boxes myself. Understood. > Note that I need one Jenkins builder VM per OpenStack release, and that > the setup is done using Ansible. Will ask around. 
From mark at stackhpc.com Wed Mar 31 08:14:13 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 31 Mar 2021 09:14:13 +0100 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller In-Reply-To: <63db4377c36d4200a9864c0eb9bba7e7@ncwmexgp009.CORP.CHARTERCOM.com> References: <63db4377c36d4200a9864c0eb9bba7e7@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > … > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > But my hypervisors are down: > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > 172.16.2.31 compute0 > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > In the RMQ logs I see this every 10 seconds: > > > > 172.16.1.132 control2 > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > client unexpectedly closed TCP connection > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? Hi Albert, Could you share the versions of RabbitMQ and erlang in both versions of the container? When initially testing this setup, I think we had 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on sufficiently to become incompatible? Mark > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From zigo at debian.org Wed Mar 31 08:15:26 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 31 Mar 2021 10:15:26 +0200 Subject: [kolla] does someone uses AArch64 images? In-Reply-To: References: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> <37df3b61-f0cd-b139-c132-5640c23bbb4e@linaro.org> Message-ID: <9958f956-c706-cb89-d235-383ab1fb42a6@debian.org> On 3/31/21 9:34 AM, Marcin Juszkiewicz wrote: > W dniu 30.03.2021 o 21:30, Thomas Goirand pisze: >> On 3/30/21 3:20 PM, Marcin Juszkiewicz wrote: > >>>> Today, I'm already done with the packaging of all RC1 releases, and >>>> I'm already beginning to test installing them. This is 2 days after >>>> the RCs. >>> >>> Are they available somewhere online so they can be tested? >> >> Of course: either through extrepo (for example: "extrepo enable >> openstack_wallaby") or directly at: >> http://osbpo.debian.net/debian/ >> >> Look into the "dists" folder to see all what's available: at the moment, >> all from jessie-liberty to bullseye-wallaby. 
> > While we are buster-wallaby ;D Move to bullseye will happen once Xena > cycle opens - I waited for full freeze with it and had several other > things to do. Well, that's not how we do things in the Debian OpenStack team. We generally have one single OpenStack release which we both build for stable and stable+1. This release is Victoria. So we do have non-official repositories for Victoria under Buster and Bullseye: - buster-victoria - bullseye-victoria The reason we do that, is that we want our users to be able to switch Debian release without changing OpenStack release. Wallaby is currently pushed to Debian Experimental, and we will only be able to upload to unstable once Bullseye is released, which normally should happen before Xena. Hopefully, you guys at Linaro can align with this policy? >>>> Again, I very much would love to collaborate and help you doing more >>>> with the Debian binary stuff for ARM. >>> I can check are there free resources left for providing you AArch64 >>> machine for builds. >> >> I'm ok to do the setup, and explain how to use the box, however, someone >> else will have to babysit the build process: I don't want to just do it >> for free, with no return, when I'm not using ARM boxes myself. > > Understood. Just one more thing: of course, I can do the first bootstraping to make sure we have at least Victoria + Wallaby. If you didn't know, we have a status page here: http://osbpo.debian.net/deb-status/ This isn't really official, this is more some private tooling so we know what work we need to do. (This was written by Michal Arbet and his brother Jan, both from ultimum.io.) Hopefully, we can also add ARM support there. Also, have you tried Victoria with plain Bullseye? It should work out of the box from the Debian official repositories normally... That's the good thing about this timing: there shouldn't be much to backport to an ARM specific repository since we're just after the freeze of Bullseye. >> Note that I need one Jenkins builder VM per OpenStack release, and that >> the setup is done using Ansible. > > Will ask around. Once you have the VMs, I can make debian.net point to it. I generally setup a CNAME pointer. This is important so we can setup a letsencrypt SSL certificate for the Jenkins URLs. BTW, what kind of 1U / 2U servers would you recommend for ARM in production, for storage and/or compute workload? Cheers, Thomas Goirand (zigo) From ikatzir at infinidat.com Wed Mar 31 08:28:29 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Wed, 31 Mar 2021 11:28:29 +0300 Subject: [E] [ironic] How to move nodes from a 'clean failed' state into 'Available' In-Reply-To: References: Message-ID: Hello Forum, Just for the record, the problem was resolved by restarting all the ironic containers, I believe that restarting the UC node entirely would have also fixed that. 
So after the ironic containers started fresh, the PXE worked well, and after running 'openstack overcloud node introspect --all-manageable --provide' it shows: +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ | 588bc3f6-dc14-4a07-8e38-202540d046f8 | interop025 | None | power off | available | False | | dceab84b-1d99-49b5-8f79-c589c0884269 | interop026 | None | power off | available | False | +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ I now ready for deployment of overcloud. thanks, Igal On Thu, Mar 25, 2021 at 12:48 AM Igal Katzir wrote: > Thanks Jay, > It gets into 'clean failed' state because it fails to boot into PXE mode. > I don't understand why the DHCP does not respond to the clients request, > it's like it remembers that the same client already received an IP in the > past. > Is there a way to clear the dnsmasq database of reservations? > Igal > > On Wed, Mar 24, 2021 at 5:26 PM Jay Faulkner < > jay.faulkner at verizonmedia.com> wrote: > >> A node in CLEAN FAILED must be moved to MANAGEABLE state before it can be >> told to "provide" (which eventually puts it back in AVAILABLE). >> >> Try this: >> `openstack baremetal node manage UUID`, then run the command with >> "provide" as you did before. >> >> The available states and their transitions are documented here: >> https://docs.openstack.org/ironic/latest/contributor/states.html >> >> I'll note that if cleaning failed, it's possible the node is >> misconfigured in such a way that will cause all deployments and cleanings >> to fail (e.g.; if you're using Ironic with Nova, and you attempt to >> provision a machine and it errors during deploy; Nova will by default >> attempt to clean that node, which may be why you see it end up in clean >> failed). So I strongly suggest you look at the last_error field on the node >> and attempt to determine why the failure happened before retrying. >> >> Good luck! >> >> -Jay Faulkner >> >> On Wed, Mar 24, 2021 at 8:20 AM Igal Katzir >> wrote: >> >>> Hello Team, >>> >>> I had a situation where my *undercloud-node *had a problem with it’s >>> disk and has disconnected from overcloud. >>> I couldn’t restore the undercloud controller and ended up re-installing >>> it (running 'openstack undercloud install’). 
>>> The installation ended successfully but now I’m in a situation where >>> Cleanup of the overcloud deployed nodes fails: >>> >>> (undercloud) [stack at interop010 ~]$ openstack baremetal node list >>> >>> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >>> | UUID | Name | Instance >>> UUID | Power State | Provisioning State | Maintenance | >>> >>> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >>> | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | >>> power on | clean failed | True | >>> | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | >>> power on | clean failed | True | >>> >>> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >>> >>> I’ve tried to move node to available state but cannot: >>> (undercloud) [stack at interop010 ~]$ openstack baremetal node provide >>> 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 >>> The requested action "provide" can not be performed on node >>> "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". >>> (HTTP 400) >>> >>> My question is: >>> *How do I make the nodes available again?* >>> as the deployment of overcloud fails with: >>> ERROR due to "Message: No valid host was found. , Code: 500” >>> >>> Thanks, >>> Igal >>> >> > > -- > Regards, > > *Igal Katzir* > Cell +972-54-5597086 > Interoperability Team > *INFINIDAT* > > > > > -- Regards, *Igal Katzir* Cell +972-54-5597086 Interoperability Team *INFINIDAT* -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Wed Mar 31 09:24:21 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 31 Mar 2021 18:24:21 +0900 Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core Message-ID: Hello, I'd like to propose Alan Bishop (abishop) for the core team of puppet-cinder and puppet-glance. Alan has been actively involved in these 2 modules for a few years and has implemented some nice features like multiple backend support in glance, cinder s3 backup driver and etc, which expanded adoption of puppet-openstack. He has also provided good reviews on patches for these 2 repos based on his understanding about our code, puppet and serverspec. He is an active contributor to cinder and has deep knowledge about it. In addition He is also a core review in TripleO, which consumes our puppet modules, and mainly covers storage components like cinder and glance, so he is familiar with the way how these two components are deployed and configured. I believe adding him to our board helps us improve our review of these two modules. I'll wait for one week to hear any feedback from other core reviewers. Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chkumar at redhat.com Wed Mar 31 09:40:52 2021 From: chkumar at redhat.com (Chandan Kumar) Date: Wed, 31 Mar 2021 15:10:52 +0530 Subject: [qa][tempest] Announcing scenario.manager stable interface In-Reply-To: <17884f9199d.10e492a891309893.5359473419855789463@ghanshyammann.com> References: <17884f9199d.10e492a891309893.5359473419855789463@ghanshyammann.com> Message-ID: On Wed, Mar 31, 2021 at 2:41 AM Ghanshyam Mann wrote: > > ---- On Tue, 30 Mar 2021 15:44:30 -0500 Martin Kopec wrote ---- > > Hi all, > > we would like to announce that tempest.scenario.manager has been declared stablein Tempest 27.0.0 and it's ready to be consumed by tempest plugins.All related patches were tracked under 'tempest-scenario-manager-stable' blueprint [1] > > and the whole effort was tracked in [2]. > > A little history:Some time ago, tempest/scenario/manager.py got copied to most of the plugins and > > therefore, it diverged - every plugin's copy had slight differences. In the latest release, > > we pushed changes to unify the manager's methods and improved their APIs in orderto have them easier to consume. > > I would also like to thank Soniya Vyas for an exceptional job moving this effort forward. > > > > To add here, this is stable for plugins which means we will take care of backward compatibility for any interface change in this file. > Import path stays the same "from tempest.scenario import manager" [1]. We encourage each plugin to remove the copy of it > and start using it from the tempest. > > Thanks, Soniya, Lukas, and Martin for working hard on this, this was a long-pending thing in Tempest. > Great Job everyone, It is a major accomplishment! > [1] https://opendev.org/openstack/tempest/src/commit/c0a408b803ba8df8e5570b9d877e15ccabb52fb2/tempest/scenario/manager.py > > -gmann > > > [1] https://review.opendev.org/q/topic:bp/tempest-scenario-manager-stable[2] https://etherpad.opendev.org/p/tempest-scenario-manager Thanks, Chandan Kumar From tobias.urdin at binero.com Wed Mar 31 10:49:38 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Wed, 31 Mar 2021 10:49:38 +0000 Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core In-Reply-To: References: Message-ID: <20f86856b19d4a7d801c4d5e7bdc5933@binero.com> big +1 :) Best regards ________________________________ From: Takashi Kajinami Sent: Wednesday, March 31, 2021 11:24:21 AM To: openstack-discuss Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core Hello, I'd like to propose Alan Bishop (abishop) for the core team of puppet-cinder and puppet-glance. Alan has been actively involved in these 2 modules for a few years and has implemented some nice features like multiple backend support in glance, cinder s3 backup driver and etc, which expanded adoption of puppet-openstack. He has also provided good reviews on patches for these 2 repos based on his understanding about our code, puppet and serverspec. He is an active contributor to cinder and has deep knowledge about it. In addition He is also a core review in TripleO, which consumes our puppet modules, and mainly covers storage components like cinder and glance, so he is familiar with the way how these two components are deployed and configured. I believe adding him to our board helps us improve our review of these two modules. I'll wait for one week to hear any feedback from other core reviewers. Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Wed Mar 31 12:04:23 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 31 Mar 2021 06:04:23 -0600 Subject: [qa][tempest] Announcing scenario.manager stable interface In-Reply-To: References: <17884f9199d.10e492a891309893.5359473419855789463@ghanshyammann.com> Message-ID: On Wed, Mar 31, 2021 at 3:47 AM Chandan Kumar wrote: > On Wed, Mar 31, 2021 at 2:41 AM Ghanshyam Mann > wrote: > > > > ---- On Tue, 30 Mar 2021 15:44:30 -0500 Martin Kopec > wrote ---- > > > Hi all, > > > we would like to announce that tempest.scenario.manager has been > declared stablein Tempest 27.0.0 and it's ready to be consumed by tempest > plugins.All related patches were tracked under > 'tempest-scenario-manager-stable' blueprint [1] > > > and the whole effort was tracked in [2]. > > > A little history:Some time ago, tempest/scenario/manager.py got > copied to most of the plugins and > > > therefore, it diverged - every plugin's copy had slight differences. > In the latest release, > > > we pushed changes to unify the manager's methods and improved their > APIs in orderto have them easier to consume. > > > I would also like to thank Soniya Vyas for an exceptional job moving > this effort forward. > > > > > > > To add here, this is stable for plugins which means we will take care of > backward compatibility for any interface change in this file. > > Import path stays the same "from tempest.scenario import manager" [1]. > We encourage each plugin to remove the copy of it > > and start using it from the tempest. > > > > Thanks, Soniya, Lukas, and Martin for working hard on this, this was a > long-pending thing in Tempest. > > > > Great Job everyone, It is a major accomplishment! > I concur!! Well done! > > > [1] > https://opendev.org/openstack/tempest/src/commit/c0a408b803ba8df8e5570b9d877e15ccabb52fb2/tempest/scenario/manager.py > > > > -gmann > > > > > [1] > https://review.opendev.org/q/topic:bp/tempest-scenario-manager-stable[2] > https://etherpad.opendev.org/p/tempest-scenario-manager > > Thanks, > > Chandan Kumar > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Mar 31 12:19:07 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 31 Mar 2021 08:19:07 -0400 Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core In-Reply-To: References: Message-ID: <66baf178-2a82-dd87-b6bb-21d0c8ad3563@gmail.com> Not really my business, but "cinder" appears in the subject line, so ... +1 from me -- the Cinder team appreciates the work Alan does to make our deliverables deployable. Additionally, Alan's reviews in the cinder project are always thoughtful and helpful to the reviewee, and I'm sure he brings the same care to puppet-* reviews. On 3/31/21 5:24 AM, Takashi Kajinami wrote: > Hello, > > > I'd like to propose Alan Bishop (abishop) for the core team of puppet-cinder > and puppet-glance. > Alan has been actively involved in these 2 modules for a few years > and has implemented some nice features like multiple backend support in > glance, > cinder s3 backup driver and etc, which expanded adoption of > puppet-openstack. > He has also provided good reviews on patches for these 2 repos based > on his understanding about our code, puppet and serverspec. > > He is an active contributor to cinder and has deep knowledge about it. 
> In addition He is also a core review in TripleO, which consumes our > puppet modules, > and mainly covers storage components like cinder and glance, so he is > familiar > with the way how these two components are deployed and configured. > > I believe adding him to our board helps us improve our review of these > two modules. > > I'll wait for one week to hear any feedback from other core reviewers. > > Thank you, > Takashi > From C-Albert.Braden at charter.com Wed Mar 31 12:30:53 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Wed, 31 Mar 2021 12:30:53 +0000 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Message-ID: <5f4f5755d65649c5a275c16b4d1789e0@ncwmexgp009.CORP.CHARTERCOM.com> Centos7: {rabbit,"RabbitMQ","3.7.24"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, Centos8: {rabbit,"RabbitMQ","3.7.28"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: https://paste.ubuntu.com/p/h9HWdfwmrK/ and the crash dump that appears on control2: crash dump log: https://paste.ubuntu.com/p/MpZ8SwTJ2T/ First 1500 lines of the dump: https://paste.ubuntu.com/p/xkCyp2B8j8/ If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. -----Original Message----- From: Mark Goddard Sent: Wednesday, March 31, 2021 4:14 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > … > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > But my hypervisors are down: > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > 172.16.2.31 compute0 > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > In the RMQ logs I see this every 10 seconds: > > > > 172.16.1.132 control2 > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > client unexpectedly closed TCP connection > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? Hi Albert, Could you share the versions of RabbitMQ and erlang in both versions of the container? When initially testing this setup, I think we had 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on sufficiently to become incompatible? Mark > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From marcin.juszkiewicz at linaro.org Wed Mar 31 12:43:53 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Wed, 31 Mar 2021 14:43:53 +0200 Subject: [kolla] does someone uses AArch64 images? 
In-Reply-To: <9958f956-c706-cb89-d235-383ab1fb42a6@debian.org> References: <1898a939-ed8a-2af1-9436-fc13811f0e51@linaro.org> <140dee5f-5596-f0c2-4d4d-f26ebcc18181@debian.org> <37df3b61-f0cd-b139-c132-5640c23bbb4e@linaro.org> <9958f956-c706-cb89-d235-383ab1fb42a6@debian.org> Message-ID: <0543cefc-17a1-5b3a-8ae4-891ac2585684@linaro.org> W dniu 31.03.2021 o 10:15, Thomas Goirand pisze: > On 3/31/21 9:34 AM, Marcin Juszkiewicz wrote: >>> Of course: either through extrepo (for example: "extrepo enable >>> openstack_wallaby") or directly at: >>> http://osbpo.debian.net/debian/ >>> >>> Look into the "dists" folder to see all what's available: at the moment, >>> all from jessie-liberty to bullseye-wallaby. >> >> While we are buster-wallaby ;D Move to bullseye will happen once Xena >> cycle opens - I waited for full freeze with it and had several other >> things to do. > > Well, that's not how we do things in the Debian OpenStack team. We > generally have one single OpenStack release which we both build for > stable and stable+1. This release is Victoria. So we do have > non-official repositories for Victoria under Buster and Bullseye: > > - buster-victoria > - bullseye-victoria > > The reason we do that, is that we want our users to be able to switch > Debian release without changing OpenStack release. > > Wallaby is currently pushed to Debian Experimental, and we will only be > able to upload to unstable once Bullseye is released, which normally > should happen before Xena. > > Hopefully, you guys at Linaro can align with this policy? We only use Kolla images to deploy. And Wallaby Kolla is on Buster. Linaro cloud is on Ussuri still iirc. Doing setup using distro packages is something we had during Liberty times. Then moved to some weird in-house solution and then to Kolla containers (using source). > Also, have you tried Victoria with plain Bullseye? It should work out of > the box from the Debian official repositories normally... All our setup are done with Kolla containers. > BTW, what kind of 1U / 2U servers would you recommend for ARM in > production, for storage and/or compute workload? Kevin Zhao (cc:) would probably be better on hardware recommendations. I mostly complain that we still use v8.0 (2012 tech) ones ;D From mdemaced at redhat.com Wed Mar 31 12:57:08 2021 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Wed, 31 Mar 2021 09:57:08 -0300 Subject: [kuryr] vPTG April 2021 In-Reply-To: References: Message-ID: Hello everyone, The April 20nd session needed to be rescheduled to April 23rd at the same time. Feel free to include any topic that is desired to be discussed at the Kuryr etherpad[1]. Cheers, Maysa. [1] https://etherpad.opendev.org/p/xena-ptg-kuryr On Mon, Mar 22, 2021 at 3:40 PM Maysa De Macedo Souza wrote: > Hello, > > Two sessions were scheduled for Kuryr on the upcoming PTG: > - 7-8 UTC on April 20 > - 2-3 UTC on April 22 > > Everyone is more than welcome to join the sessions and check our future > plans, give feedback or discuss anything regarding Kuryr. > > Even though participation is free registration is needed[1]. > > Regards, > Maysa Macedo. > > [1] https://april2021-ptg.eventbrite.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Mar 31 13:20:01 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 31 Mar 2021 10:20:01 -0300 Subject: [cinder] Bug deputy report for week of 2021-03-31 Message-ID: Hello, This is a bug report from 2021-03-16 to 2021-03-24. 
You're welcome to join the next Cinder Bug Meeting later today. Weekly on Wednesday at 1500 UTC in #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Critical: - https://bugs.launchpad.net/os-brick/+bug/1921381 "iSCSI: Flushing issues when multipath config has changed". Stable/ussuri backport assigned to Luigi Toscano (ltoscano) High: - Medium: - https://bugs.launchpad.net/cinder/+bug/1922013 "IBM SVF driver: Add vols to a group (consitent_group_replication enabled) fails sometimes if rccg and rcrel are in different states" Unassigned. Low: - https://bugs.launchpad.net/cinder/+bug/1921584: "backup/drivers/s3.py uses vendored requests from botocore". Unassigned. Incomplete: - Regards Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Wed Mar 31 13:39:52 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Wed, 31 Mar 2021 13:39:52 +0000 Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core In-Reply-To: <66baf178-2a82-dd87-b6bb-21d0c8ad3563@gmail.com> References: <66baf178-2a82-dd87-b6bb-21d0c8ad3563@gmail.com> Message-ID: Dell Customer Communication - Confidential +1 -----Original Message----- From: Brian Rosmaita Sent: Wednesday, March 31, 2021 7:19 AM To: openstack-discuss at lists.openstack.org Subject: Re: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core [EXTERNAL EMAIL] Not really my business, but "cinder" appears in the subject line, so ... +1 from me -- the Cinder team appreciates the work Alan does to make our deliverables deployable. Additionally, Alan's reviews in the cinder project are always thoughtful and helpful to the reviewee, and I'm sure he brings the same care to puppet-* reviews. On 3/31/21 5:24 AM, Takashi Kajinami wrote: > Hello, > > > I'd like to propose Alan Bishop (abishop) for the core team of > puppet-cinder and puppet-glance. > Alan has been actively involved in these 2 modules for a few years and > has implemented some nice features like multiple backend support in > glance, cinder s3 backup driver and etc, which expanded adoption of > puppet-openstack. > He has also provided good reviews on patches for these 2 repos based > on his understanding about our code, puppet and serverspec. > > He is an active contributor to cinder and has deep knowledge about it. > In addition He is also a core review in TripleO, which consumes our > puppet modules, and mainly covers storage components like cinder and > glance, so he is familiar with the way how these two components are > deployed and configured. > > I believe adding him to our board helps us improve our review of these > two modules. > > I'll wait for one week to hear any feedback from other core reviewers. > > Thank you, > Takashi > From aschultz at redhat.com Wed Mar 31 13:48:12 2021 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 31 Mar 2021 07:48:12 -0600 Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core In-Reply-To: References: Message-ID: +1 On Wed, Mar 31, 2021 at 3:30 AM Takashi Kajinami wrote: > Hello, > > > I'd like to propose Alan Bishop (abishop) for the core team of > puppet-cinder > and puppet-glance. 
> Alan has been actively involved in these 2 modules for a few years > and has implemented some nice features like multiple backend support in > glance, > cinder s3 backup driver and etc, which expanded adoption of > puppet-openstack. > He has also provided good reviews on patches for these 2 repos based > on his understanding about our code, puppet and serverspec. > > He is an active contributor to cinder and has deep knowledge about it. > In addition He is also a core review in TripleO, which consumes our puppet > modules, > and mainly covers storage components like cinder and glance, so he is > familiar > with the way how these two components are deployed and configured. > > I believe adding him to our board helps us improve our review of these two > modules. > > I'll wait for one week to hear any feedback from other core reviewers. > > Thank you, > Takashi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Wed Mar 31 14:08:57 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Wed, 31 Mar 2021 16:08:57 +0200 Subject: [kolla-ansible][horizon][keystone] policy.yaml/json files In-Reply-To: References: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> Message-ID: <2D9BBE85-B5FB-409D-AD22-D45C49ED7DB1@poczta.onet.pl> Red Hat did really good documentation on security (and policies) which can be easily used with kolla - thanks to that my policies now work: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/pdf/security_and_hardening_guide/Red_Hat_OpenStack_Platform-13-Security_and_Hardening_Guide-en-US.pdf Best regards, Adam > Wiadomość napisana przez Mark Goddard w dniu 30.03.2021, o godz. 12:51: > > On Tue, 30 Mar 2021 at 10:52, Adam Tomas wrote: >> >> Hi, >> thank you for the answers, but I still have more questions :) >> >> Without any custom policies when I look inside the horizon container I see (in /etc/openstack-dashboard) current/default policies. If I override (for example keystone_policy.json) with a file placed in /etc/kolla/config/horizon which contains only 3 rules, then after kolla-ansible reconfigure inside horizon container there is of course keystone_police.json file, but only with my 3 rules - should I assume, that previously seen default rules (other than the ones overridden by my rules) still works, whether I see them in the file or not? > > I'd assume Horizon works in the same way as other services, and you > only need to include changes. Please test and report back. > >> >> And another question - I need a rule, that will allow some „special” user (which I call project_admin) to see,create, update and delete users inside a project (but not elsewhere). How should the policy look like? >> >> „project_admin_required”: „role:project_admin and default_project_id:%(target.project_id)s" >> „identity:list_user”: „rule: admin_required or project_admin_required” >> „identity:create_user”: „rule: admin_required or project_admin_required” >> „identity:update_user”: „rule: admin_required or project_admin_required” >> „identity:delete_user”: „rule: admin_required or project_admin_required” >> > > As I mentioned before, admin is global in OpenStack for now. There may > be various ways to achieve what you want. One is to introduce a role, > and use it in the rules. It's a bit of a can of worms though, since > there are many API endpoints which might need to be updated to catch > all corner cases. I added keystone to the subject, in case anyone from > that team wants to comment. > >> ? 
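For concreteness, a minimal sketch of the override files being discussed, assuming kolla-ansible's standard custom-config locations and reusing the illustrative project_admin rules from this thread; the rule names and check strings have not been validated against the Keystone policy-in-code defaults, so treat them purely as a starting point:

# sketch only: list just the rules being overridden; the defaults stay in code.
# Effective rule names can be checked with:
#   oslopolicy-policy-generator --namespace keystone
mkdir -p /etc/kolla/config/keystone /etc/kolla/config/horizon
cat > /etc/kolla/config/keystone/policy.json <<'EOF'
{
    "project_admin_required": "role:project_admin and default_project_id:%(target.project_id)s",
    "identity:list_users": "rule:admin_required or rule:project_admin_required",
    "identity:create_user": "rule:admin_required or rule:project_admin_required",
    "identity:update_user": "rule:admin_required or rule:project_admin_required",
    "identity:delete_user": "rule:admin_required or rule:project_admin_required"
}
EOF
# Horizon needs its own copy of the keystone rules under this exact file name
cp /etc/kolla/config/keystone/policy.json /etc/kolla/config/horizon/keystone_policy.json
kolla-ansible -i <inventory> reconfigure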
>> Best regards, >> Adam >> >>> Wiadomość napisana przez Mark Goddard w dniu 30.03.2021, o godz. 11:05: >>> >>> On Tue, 30 Mar 2021 at 09:24, Mark Goddard wrote: >>>> >>>> On Mon, 29 Mar 2021 at 15:36, Adam Tomas wrote: >>>>> >>>>> Hi, >>>> >>>> Hi, Looks like we need some more/better docs on this in Kolla. >>> >>> Proposed some docs improvements: >>> https://review.opendev.org/c/openstack/kolla-ansible/+/783809 >>> >>>> >>>>> Im not quite clear about policy.yaml/json files in kolla-ansible. Let assume, that I need to allow one of project users to add other users to the project. So I create „project_admin” role and assign it to this user. Then I found /etc/kolla/keystone/policy.json.test file, which I use as template. There is rule „identity:create_credential” : „(role:admin and system_scope:all)” so I add „or role:project_admin” and put file in /etc/kolla/config/keystone/ and reconfigure kolla. And now few questions: >>>>> >>>>> 1. policy.json (or policy.yaml) always overwrite all default policies? I mean if I only add one rule to this file then other rules will „disappear” or will have default values? Is there any way to only overwrite some default rules and leave rest with defaults? Like with .conf files >>>> >>>> For a few releases now, OpenStack supports policy in code. This means >>>> that you only need to include the rules you want to override in your >>>> JSON/YAML file. >>>> >>>>> >>>>> 2. what about Horizon and visibility of options? In mentioned case putting the same policy.json file in /etc/kolla/config/keystone/ and /etc/kolla/config/horizon/ should „unblock” Add User button for user with project_admin role? Or how to achieve it? >>>> >>>> For keystone policy in horizon, you need to use: >>>> >>>> /etc/kolla/config/horizon/keystone_policy.json >>>> >>>>> >>>>> 3. does Horizon need the duplicated policy.json files from other services in it’s configuration folder or is it enough to write policy.json for services I want to change? >>>> >>>> Only the ones you want to change. >>>> >>>>> >>>>> 4. when I assign admin role to a user with projectID (openstack role add —project PROJECT_ID —user SOME_USER admin) this user sees in Horizon everything systemwide, not only inside this project… Which rules should be created to allow him to see only users and resources which belongs to this project? >>>> >>>> Currently admin is generally global in OpenStack. It's a known >>>> limitation, and currently being worked on. >>>> >>>>> >>>>> Best regards >>>>> Adam >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Mar 31 14:49:00 2021 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 31 Mar 2021 09:49:00 -0500 Subject: [neutron] oslo.privsep migration in Neutron In-Reply-To: <20210331065356.mdldig54kpd3rj2t@p1.localdomain> References: <20210331065356.mdldig54kpd3rj2t@p1.localdomain> Message-ID: <586ee31b-d586-4053-fcfc-8d686c722a3c@nemebean.com> On 3/31/21 1:53 AM, Slawek Kaplonski wrote: > Hi, > > On Tue, Mar 30, 2021 at 05:33:40PM +0200, Rodolfo Alonso Hernandez wrote: >> Hello Neutrinos: >> >> During the last cycles we have been migrating the Neutron code from >> oslo.rootwrap to oslo.privsep. Those efforts are aimed at reaching the goal >> defined in [1] and are tracked in [2]. 
>> >> At this point, starting Xena developing cycle, we can state that we have >> migrated all short lived commands from oslo.rootwrap to oslo.privsep or to >> a native implementation (that could also use oslo.privsep to elevate the >> permissions if needed). > > Thanks a lot Rodolfo for working on that. Great job! > >> >> The problem are the daemons or services (long lived processes) that Neutron >> spawns using "ProcessManager"; this is why "ProcessManager.enable" is the >> only code calling "utils.execute" without "privsep_exec" parameter. Those >> process cannot be executed using oslo.privsep because the privsep root >> daemon has a limited number of executing threads. The remaining processes >> are [3]. >> >> Although we didn't reach the Completion Criteria defined in [1], that is >> remove the oslo.rootwrap dependency, I think we don't have an alternative >> to run those services and we should keep rootwrap for them. If there are no >> objections, once [3] is merged we can consider that Neutron (not other >> Stadium projects) finished the efforts on [1]. > > Sounds good for me. > >> >> Please, any feedback is always welcome. > > Maybe some oslo.privsep experts can take a look into that and help to solve that > problem somehow. If not, then IMO we can live with it like it is now. One possibility is to start a separate privsep daemon for the long-running services. I believe privsep was designed with that in mind so you could have privileged calls running in a daemon with just the necessary permissions for that call, not the permissions for every privileged call in the service. That said, I'm not sure how much, if at all, it has been used. > >> >> Regards. >> >> [1]https://review.opendev.org/c/openstack/governance/+/718177 >> [2]https://storyboard.openstack.org/#!/story/2007686 >> [3] >> https://review.opendev.org/c/openstack/neutron/+/778444/2/etc/neutron/rootwrap.d/rootwrap.filters > From hberaud at redhat.com Wed Mar 31 14:54:07 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 31 Mar 2021 16:54:07 +0200 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: <78MQQQ.FMXMGIRLYEMQ@est.tech> References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> <78MQQQ.FMXMGIRLYEMQ@est.tech> Message-ID: Hello Balazs, Now that the os-brick changes on nova are merged do you plan to propose a RC2? https://review.opendev.org/c/openstack/nova/+/783674 Le lun. 29 mars 2021 à 17:43, Balazs Gibizer a écrit : > > > On Mon, Mar 29, 2021 at 16:05, Balazs Gibizer > wrote: > > > > > > On Mon, Mar 29, 2021 at 08:50, Brian Rosmaita > > wrote: > >> Hello Requirements Team, > >> > >> The Cinder team recently became aware of a potential data-loss bug > >> [0] that has been fixed in os-brick master [1] and backported to > >> os-brick stable/wallaby [2]. We've proposed a release of os-brick > >> 4.4.0 from stable/wallaby [3] and are petitioning for an RFE to > >> include 4.4.0 in the wallaby release. > >> > >> We have three jobs running tempest with os-brick source in master > >> that have passed with [1]: os-brick-src-devstack-plugin-ceph [4], > >> os-brick-src-tempest-lvm-lio-barbican [5],and > >> os-brick-src-tempest-nfs [6]. 
The difference between os-brick > >> master (at the time the tests were run) and stable/wallaby since > >> the 4.3.0 tag is as follows: > >> > >> master: > >> d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg change > >> (Gorka Eguileor) > >> 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to > >> modifying offset" (Zuul) > >> 28545c7 4 months ago RBD: catch read exceptions prior to modifying > >> offset (Jon Bernard) > >> 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" > >> (Zuul) > >> 7cfdb76 6 weeks ago Dropping explicit unicode literal > >> (tushargite96) > >> 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack Release > >> Bot) > >> ab57392 3 weeks ago Update master for stable/wallaby (OpenStack > >> Release Bot) > >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > >> connection information compatibility fix" (Zuul) > >> > >> stable/wallaby: > >> f86944b 3 days ago Add release note prelude for os-brick 4.4.0 > >> (Brian Rosmaita) > >> c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg change > >> (Gorka Eguileor) > >> 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for stable/wallaby > >> (OpenStack Release Bot) > >> f3f93dc 3 weeks ago Update .gitreview for stable/wallaby > >> (OpenStack Release Bot) > >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > >> connection information compatibility fix" (Zuul) > >> > >> This gives us very high confidence that the results of the tests run > >> against master also apply to stable/wallaby at f86944b. > >> > >> Thank you for considering this request. > >> > >> (I've included Nova here because the bug occurs when the > >> configuration option that enables multipath connections on a > >> compute is changed while volumes are attached, so if this RFE is > >> approved, nova might want to raise the minimum version of os-brick > >> in wallaby to 4.4.0.) > >> > > > > Thanks for the heads up. After the new os-brick version is released I > > will prepare a version bump patch in nova on master and > > stable/wallaby. This also means that nova will release an RC2. 
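The commit comparison quoted above can be reproduced with plain git against the os-brick repository; a rough sketch, using the tag and branch names given in the thread:

git clone https://opendev.org/openstack/os-brick
cd os-brick
# commits on master since the 4.3.0 tag
git log --oneline 4.3.0..origin/master
# commits on the wallaby branch since the same tag
git log --oneline 4.3.0..origin/stable/wallaby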
> > I've proposed the nova patch on master to bump min os-brick to 4.3.1 in > nova[1] > > [1] https://review.opendev.org/c/openstack/nova/+/783674 > > > > > Cheers, > > gibi > > > >> > >> [0] https://launchpad.net/bugs/1921381 > >> [1] https://review.opendev.org/c/openstack/os-brick/+/782992 > >> [2] https://review.opendev.org/c/openstack/os-brick/+/783207 > >> [3] https://review.opendev.org/c/openstack/releases/+/783641 > >> [4] > >> > https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb > >> [5] > >> > https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 > >> [6] > >> > https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa > > > > > > > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Mar 31 15:55:56 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 31 Mar 2021 17:55:56 +0200 Subject: [release] Meeting Time Poll In-Reply-To: References: Message-ID: Hello deliveryers, Don't forget to vote for our new meeting time. Thank you Le ven. 26 mars 2021 à 13:43, Herve Beraud a écrit : > Hello > > We have a few regular attendees of the Release Management meeting who > have conflicts > with the current meeting time. As a result, we would like to find a new > time to hold the meeting. I've created a Doodle poll[1] for everyone to > give their input on times. It's mostly limited to times that reasonably > overlap the working day in the US and Europe since that's where most of > our attendees are located. > > If you attend the Release Management meeting, please fill out the poll so > we can hopefully find a time that works better for everyone. > > For the sake of organization and to allow everyone to schedule his agenda > accordingly, the poll will be closed on April 5th. On that date, I will > announce the time of this meeting and the date on which it will take effect > . > > Thanks! 
> > [1] https://doodle.com/poll/ip6tg4fvznz7p3qx > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Mar 31 17:02:37 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 31 Mar 2021 11:02:37 -0600 Subject: [all] Gate resources and performance In-Reply-To: References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com> Message-ID: On Wed, Feb 10, 2021 at 1:05 PM Dan Smith wrote: > > Here's the timing I see locally: > > Vanilla devstack: 775 > > Client service alone: 529 > > Parallel execution: 527 > > Parallel client service: 465 > > > > Most of the difference between the last two is shorter async_wait > > times because the deployment steps are taking less time. So not quite > > as much as before, but still a decent increase in speed. > > Yeah, cool, I think you're right that we'll just serialize the > calls. It may not be worth the complexity, but if we make the OaaS > server able to do a few things in parallel, then we'll re-gain a little > more perf because we'll go back to overlapping the *server* side of > things. Creating flavors, volume types, networks and uploading the image > to glance are all things that should be doable in parallel in the server > projects. > > 465s for a devstack is awesome. Think of all the developer time in > $local_fiat_currency we could have saved if we did this four years > ago... :) > > --Dan > > Hey folks, Just wanted to check back in on the resource consumption topic. Looking at my measurements the TripleO group has made quite a bit of progress keeping our enqued zuul time lower than our historical average. 
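As a side note on the parallel-creation idea quoted above, a minimal sketch of overlapping the server-side work with nothing more than shell job control; the resource names are hypothetical, admin credentials are assumed to be loaded in the environment, and error handling is omitted:

# fire the independent create calls concurrently, then wait for all of them
openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.micro &
openstack volume type create lvmdriver-1 &
openstack network create private &
openstack image create --disk-format qcow2 --container-format bare --file cirros.img cirros &
wait   # blocks until every background job has exited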
Do you think we can measure where things stand now and have some new numbers available at the PTG? /me notes we had a blip on 3/25 but there was a one off issue w/ nodepool in our gate. Marios Andreou has put a lot of time into this, and others as well. Kudo's Marios! Thanks all! -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Mar 30 08:48:45 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 30 Mar 2021 10:48:45 +0200 Subject: [tc][release] Networking-midonet current status and Wallaby release In-Reply-To: <20210330070308.eltpfavg44rawo7e@p1.localdomain> References: <59893229.6jtkhXVcMD@p1> <20210329110631.5gnmro77saxnf64p@p1.localdomain> <1787e433b4f.d2499d951227382.1791926814696642761@ghanshyammann.com> <20210329142631.ojzui3gwcysdwgmt@p1.localdomain> <1787e785a14.caa7688d1232290.6209945891760877537@ghanshyammann.com> <20210329151415.a7u7alxqkypjyiau@p1.localdomain> <20210330070308.eltpfavg44rawo7e@p1.localdomain> Message-ID: Accordingly to our choice, I updated the release patch. https://review.opendev.org/c/openstack/releases/+/781713 Thanks Slawek. Le mar. 30 mars 2021 à 09:04, Slawek Kaplonski a écrit : > Hi, > > As Takashi told me that he will not have time to work on this anytime soon > and I > don't get any info from Sam, I just proposed patches to deprecate > networking-midonet project [1]. > I know that it's very late in the cycle but I think that this will be the > best > to do based on current circumstances and discussion in this thread. > > [1] > https://review.opendev.org/q/topic:%22deprecate-networking-midonet%22+(status:open%20OR%20status:merged) > > On Mon, Mar 29, 2021 at 05:23:25PM +0200, Herve Beraud wrote: > > Excellent! Thank you Slavek. > > > > Le lun. 29 mars 2021 à 17:15, Slawek Kaplonski a > > écrit : > > > > > Hi, > > > > > > On Mon, Mar 29, 2021 at 05:07:15PM +0200, Herve Beraud wrote: > > > > If we decide to follow the depreciation process we can wait until > > > tomorrow > > > > or Wednesday early in the morning. > > > > > > I also pinged Sam Morrison and YAMAMOTO Takashi about that today. If I > > > will not > > > have any reply from them until tomorrow morning CEST time, I will > propose > > > patches to deprecate it in this cycle. > > > > > > > > > > > Le lun. 29 mars 2021 à 16:52, Ghanshyam Mann < > gmann at ghanshyammann.com> a > > > > écrit : > > > > > > > > > ---- On Mon, 29 Mar 2021 09:26:31 -0500 Slawek Kaplonski < > > > > > skaplons at redhat.com> wrote ---- > > > > > > Hi, > > > > > > > > > > > > On Mon, Mar 29, 2021 at 08:53:58AM -0500, Ghanshyam Mann wrote: > > > > > > > ---- On Mon, 29 Mar 2021 06:14:06 -0500 Akihiro Motoki < > > > > > amotoki at gmail.com> wrote ---- > > > > > > > > On Mon, Mar 29, 2021 at 8:07 PM Slawek Kaplonski < > > > > > skaplons at redhat.com> wrote: > > > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > > > On Mon, Mar 29, 2021 at 11:52:59AM +0200, Herve Beraud > wrote: > > > > > > > > > > Hello, > > > > > > > > > > > > > > > > > > > > The main question is, does the previous Victoria > version > > > [1] > > > > > will be > > > > > > > > > > compatible with the latest neutron changes and with the > > > latest > > > > > engine > > > > > > > > > > facade introduced during Wallaby? > > > > > > > > > > > > > > > > > > It won't be compatible. Networking-midonet from Victoria > will > > > > > not work properly > > > > > > > > > with Neutron Wallaby. 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > Releasing an unfixed engine facade code is useless, so > we > > > > > shouldn't release > > > > > > > > > > a new version of networking-midonet, because the > project > > > code > > > > > won't be > > > > > > > > > > compatible with the rest of our projects (AFAIK > neutron), > > > > > unless, the > > > > > > > > > > previous version will not compatible either, and, > unless, > > > not > > > > > releasing a > > > > > > > > > > Wallaby version leave the project branch uncut and so > > > leave the > > > > > > > > > > corresponding series unmaintainable, and so unfixable a > > > > > posteriori. > > > > > > > > > > > > > > > > > > > > If we do not release a new version then we will use a > > > previous > > > > > version of > > > > > > > > > > networking-midonet. This version will be the last > Victoria > > > > > version [1]. > > > > > > > > > > > > > > > > > > > > I suppose that this version (the victoria version) > isn't > > > > > compatible with > > > > > > > > > > the new facade engine either, isn't it? > > > > > > > > > > > > > > > > > > Correct. It's not compatible. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > So release or not release a new version won't solve the > > > facade > > > > > engine > > > > > > > > > > problem, isn't? > > > > > > > > > > > > > > > > > > Yes. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > You said that neutron evolved and networking-midonet > > > didn't, > > > > > hence even if > > > > > > > > > > we release networking-midonet in the current state it > will > > > > > fail too, isn't > > > > > > > > > > it? > > > > > > > > > > > > > > > > > > Also yes :) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > However, releasing a new version and branching on it > can > > > give > > > > > you the > > > > > > > > > > needed maintenance window to allow you to fix the issue > > > later, > > > > > when your > > > > > > > > > > gates will be fixed and then patches backported. git > tags > > > are > > > > > cheap. > > > > > > > > > > > > > > > > > > > > We should notice that since Victoria some patches have > been > > > > > merged in > > > > > > > > > > Wallaby so even if they aren't ground breaking changes > they > > > > > are changes > > > > > > > > > > that it is worth to release. > > > > > > > > > > > > > > > > > > > > From a release point of view I think it's worth it to > > > release > > > > > a new version > > > > > > > > > > and to cut Wallaby. We are close to the it's deadline. > That > > > > > will land the > > > > > > > > > > available delta between Victoria and Wallaby. That will > > > allow > > > > > to fix the > > > > > > > > > > engine facade by opening a maintenance window. If the > > > project > > > > > is still > > > > > > > > > > lacking maintainers in a few weeks / months, this will > > > allow a > > > > > more smooth > > > > > > > > > > deprecation of this one. > > > > > > > > > > > > > > > > > > > > Thoughts? > > > > > > > > > > > > > > > > > > Based on Your feedback I agree that we should release now > > > what > > > > > we have. Even if > > > > > > > > > it's broken we can then fix it and backport fixes to > > > > > stable/wallaby branch. > > > > > > > > > > > > > > > > > > @Akihiro: are You ok with that too? > > > > > > > > > > > > > > > > I was writing another reply and did not notice this mail. 
> > > > > > > > While I still have a doubt on releasing the broken code > (which > > > we > > > > > are > > > > > > > > not sure can be fixed soon or not), > > > > > > > > I am okay with either decision. > > > > > > > > > > > > > > Yeah, releasing broken code and especially where we do not if > > > there > > > > > will be > > > > > > > maintainer to fix it or not seems risky for me too. > > > > > > > > > > > > > > One option is to deprecate it for wallaby which means follow > the > > > > > deprecation steps > > > > > > > mentioned in project-team-guide[1]. If maintainers show up > then it > > > > > can be un-deprecated. > > > > > > > With that, we will not have any compatible wallaby version > which I > > > > > think is a better > > > > > > > choice than releasing the broken code. > > > > > > > > > > > > We were asking about that some time ago already and then some > new > > > > > maintainers > > > > > > stepped in. But as now there is that problem again with > > > > > networking-midonet I'm > > > > > > fine to deprecate it (or ask about it again at least). > > > > > > But isn't it too late in the cycle now? Last time when we were > doing > > > > > that it was > > > > > > before milestone-2 IIRC. Now we are almost at the end of the > cycle. > > > > > Should we do > > > > > > it still now? > > > > > > > > > > As there is nothing released for Wallaby[1], we can still do this. > As > > > per > > > > > the process, TC > > > > > can merge the required patches on that repo if no core is > available to > > > +A. > > > > > > > > > > > If yes, how much time do we really have to e.g. ask for some new > > > > > maintainers? > > > > > > > > > > > > > > > > I will say asap :) but I think the release team can set the > deadline as > > > > > they have to take care > > > > > of release things. > > > > > > > > > > [1] > > > > > > > > > https://opendev.org/openstack/releases/src/commit/30492c964f5d7eb85d806086c5b1c656b5c9e9f9/deliverables/wallaby/networking-midonet.yaml > > > > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > Releasing the broken code now with the hope of someone will > come > > > up > > > > > and fix it with > > > > > > > backport makes me a little uncomfortable and if it did not > get fix > > > > > then we will live > > > > > > > with broken release forever. > > > > > > > > > > > > > > > > > > > > > [1] > > > > > > > > > https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > > > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [1] > > > > > > > > > > > > > > > > > > > https://opendev.org/openstack/releases/src/branch/master/deliverables/victoria/networking-midonet.yaml > > > > > > > > > > > > > > > > > > > > Le lun. 29 mars 2021 à 10:32, Slawek Kaplonski < > > > > > skaplons at redhat.com> a > > > > > > > > > > écrit : > > > > > > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > > > > > > > We have opened release patch for networking-midonet > [1] > > > but > > > > > our concern > > > > > > > > > > > about > > > > > > > > > > > that project is that its gate is completly broken > since > > > some > > > > > time thus we > > > > > > > > > > > don't really know if the project is still working and > > > valid > > > > > to be released. 
> > > > > > > > > > > In Wallaby cycle Neutron for example finished > transition > > > to > > > > > the engine > > > > > > > > > > > facade, > > > > > > > > > > > and patch to adjust that in networking-midonet is > still > > > > > opened [2] (and > > > > > > > > > > > red as > > > > > > > > > > > there were some unrelated issues with most of the > jobs > > > > > there). > > > > > > > > > > > > > > > > > > > > > > In the past we had discussion about > networking-midonet > > > > > project and it's > > > > > > > > > > > status > > > > > > > > > > > as the official Neutron stadium project. Then some > new > > > folks > > > > > stepped in to > > > > > > > > > > > maintain it but now it seems a bit like (again) it > lacks > > > of > > > > > maintainers. > > > > > > > > > > > I know that it is very late in the cycle now so my > > > question > > > > > to the TC and > > > > > > > > > > > release teams is: should we release stable/wallaby > with > > > its > > > > > current state, > > > > > > > > > > > even if it's broken or should we maybe don't release > it > > > at > > > > > all until its > > > > > > > > > > > gate > > > > > > > > > > > will be up and running? > > > > > > > > > > > > > > > > > > > > > > [1] > > > https://review.opendev.org/c/openstack/releases/+/781713 > > > > > > > > > > > [2] > > > > > https://review.opendev.org/c/openstack/networking-midonet/+/770797 > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > Slawek Kaplonski > > > > > > > > > > > Principal Software Engineer > > > > > > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > Hervé Beraud > > > > > > > > > > Senior Software Engineer at Red Hat > > > > > > > > > > irc: hberaud > > > > > > > > > > https://github.com/4383/ > > > > > > > > > > https://twitter.com/4383hberaud > > > > > > > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > > > > > > > > > > > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > > > > > > > > > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > > > > > > > > > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > > > > > > > > > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > > > > > > > > > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > > > > > > > > > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > > > > > > > > > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > > > > > > > > > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > > > > > > > > > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > > > > > > > > > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > > > > > > > > > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > > > > > > > v6rDpkeNksZ9fFSyoY2o > > > > > > > > > > =ECSj > > > > > > > > > > -----END PGP SIGNATURE----- > > > > > > > > > > > > > > > > > > -- > > > > > > > > > Slawek Kaplonski > > > > > > > > > Principal Software Engineer > > > > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > Hervé Beraud > > > > Senior Software Engineer at Red Hat > > > > irc: hberaud > > > > 
https://github.com/4383/ > > > > https://twitter.com/4383hberaud > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > v6rDpkeNksZ9fFSyoY2o > > > > =ECSj > > > > -----END PGP SIGNATURE----- > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tacingiht at gmail.com Tue Mar 30 10:30:54 2021 From: tacingiht at gmail.com (Evan Zhao) Date: Tue, 30 Mar 2021 18:30:54 +0800 Subject: How to customize the xml used in libvirt from GUI/opestack command line? Message-ID: Hi there, I googled this question and found two major answers: 1. ssh to the compute node and use `virsh dumpxml` and `virsh undefine/define`, etc. 2. 
edit nova/virt/libvrit/config.py directly. However, it's trivial to ssh to each node and do the modification, and I prefer not to touch the nova source code, is there any better ways to achieve this? I expect to edit the namespace of a certain element and append an additional element to the xml file. Any information will be appreciated. From jmlineb at sandia.gov Tue Mar 30 13:23:31 2021 From: jmlineb at sandia.gov (Linebarger, John) Date: Tue, 30 Mar 2021 13:23:31 +0000 Subject: How to debug silent live migration errors Message-ID: <580b3a3a2bf64d14a8e9d002cfcdfd47@ES07AMSNLNT.srn.sandia.gov> How would I debug silent (or mostly silent) live migration errors? We're using the Stein release of Canonical's Charmed OpenStack. I have configured it for live migration per the instructions at this link: https://docs.openstack.org/nova/pike/admin/configuring-migrations.html#section-configuring-compute-migrations Specifically: 1. I did not specify vncserver_listen=0.0.0.0 in nova.conf because we are not running VNC on our instances 2. instances_path is /var/lib/nova/instances on all compute nodes 3. I believe that MAAS is "the sole provider of DHCP and DNS for the network hosting the MAAS cluster", per https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-maas.html 4. Identical authorized_keys files are present on all compute nodes with keys from all compute nodes by default 5. I manually configured the firewalls on all compute nodes to allow libvirt to communicate between compute hosts with: sudo ufw allow 49152:49261/tcp 6. The following settings are specified in nova.conf on each compute node: live_migration_downtime = 500 live_migration_downtime_steps = 10 live_migration_downtime_delay = 75 live_migration_permit_post_copy=true Here's what happens when I try to Live Migrate from the Horizon Dashboard: 1. As admin, in the Admin --> Instances menu, I select the dropdown arrow to the right of the instance. Live Migrate Instance appears (but in black, unlike Migrate Instance, which appears in red). I select Live Migrate Instance, and whether or not I Automatically schedule new host or manually select a new host the Task column says "Migrating" and then it stops and reverts to None. The server never changes. The Action Log shows the live migration request but the Message column is blank. 2. I do the very same thing but this time select Disk Over Commit. Same results. Migrating reverts back to None and the server never changes. 3. I do the very same thing but this time select Block Migration. This time I do get an error: "Failed to live migrate instance to host 'AUTO_SCHEDULE'". And this time the Action Log has "Error" in the Message column. Same behavior with the CLI. For example, this CLI command below completes silently, yet the server for the instance never changes. john at vm-dev-john:~/bin$ openstack server migrate --live [Silent failure] john at vm-dev-john:~/bin$ openstack server show [Still running on original server] Note that I *can* successfully Migrate, both using the Horizon Dashboard and the CLI. What fails is Live Migration. I just have no idea why, and no error is displayed in the Action Log for the instance. For reference, the instance is an m1.small with 2GB of RAM, 1 VCPU, and a 20GB Cinder disk volume attached on /dev/vda. Any and all debugging ideas would be most welcome. Without logs I am simply guessing in the dark at this point. Thanks! Enjoy! John M. 
Linebarger, PhD, MBA Principal Member of Technical Staff Sandia National Laboratories (Office) 505-845-8282 (Cell) 505-681-4879 [cid:image002.jpg at 01D72535.958BCAE0][AWS Certified Solutions Architect - Professional][AWS Certified Solutions Architect - Associate][AWS Certified Developer - Associate][cid:image003.png at 01D72531.072F13F0][cid:image005.png at 01D72535.958BCAE0] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 7238 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 2425 bytes Desc: image002.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 9291 bytes Desc: image005.png URL: From Stefan.Kelber at gmx.de Tue Mar 30 16:02:40 2021 From: Stefan.Kelber at gmx.de (Stefan Kelber) Date: Tue, 30 Mar 2021 18:02:40 +0200 Subject: [Swift] swift client querying capabilites fails and may affect Horizon's container management Message-ID: Hello, i have a Kolla-Ansible AIO stack i added SWIFT to after the initial installation. CLI management of containers work on CLI, but Horizon is troubled: Clicking "container" throws "Error: Unable to fetch the policy details", and no horizon based container management is possible therefore. In an effort to find the root problem, i stumbled upon the following, which i assume to be related: it looks to me like there are issues with keystone authentication of SWIFT client (3.10.1) when attempting to fetch capabilities on OS Victoria Kolla-Ansible, but *only* when querying capabilities: ## 1 (swift-container-server)[root at testhost ~]# swift --os-auth-url http://10.10.10.82:35357/v3 --auth-version 3 --os-project-name admin --os-project-domain-name Default --os-username admin --os-user-domain-name Default --os-password T-STRING-s6 stat Account: AUTH_4334f282aa3a498aac95d2cf9aa9fa91 Containers: 1 Objects: 2 Bytes: 0 Containers in policy "policy-0": 1 Objects in policy "policy-0": 2 Bytes in policy "policy-0": 0 Content-Type: text/plain; charset=utf-8 X-Timestamp: 1616964437.15820 Accept-Ranges: bytes X-Account-Project-Domain-Id: default Vary: Accept X-Trans-Id: tx2638af9bd98646ce92dfe-00606342a1 X-Openstack-Request-Id: tx2638af9bd98646ce92dfe-00606342a1 Transfer-Encoding: chunked ## 2 (swift-container-server)[root at testhost ~]# swift --os-auth-url http://10.10.10.82:35357/v3 --auth-version 3 --os-project-name admin --os-project-domain-name Default --os-username admin --os-user-domain-name Default --os-password T-STRING-s6 list mycontainer ## 3 (swift-container-server)[root at testhost ~]# swift --os-auth-url http://10.10.10.82:35357/v3 --auth-version 3 --os-project-name admin --os-project-domain-name Default --os-username admin --os-user-domain-name Default --os-password T-STRING-s6 capabilities Capabilities GET failed: http://10.10.10.82:8080/info 401 Unauthorized [first 60 chars of response] b'{"error": {"code": 401, "title": "Unauthorized", "message": ' Failed Transaction ID: txeb57fccc09e54e3fa2fa5-00606342af Am i on the wrong track? 
Best Stefan Here is the entire debug output of the failing transaction: (swift-container-server)[root at testhost ~]# swift --os-auth-url http://10.10.10.82:35357/v3 --auth-version 3 --os-project-name admin --os-project-domain-name Default --os-username admin --os-user-domain-name Default --os-password T-STRING-s6 capabilities --debug DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to http://10.10.10.82:35357/v3/auth/tokens DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 10.10.10.82:35357 DEBUG:urllib3.connectionpool:http://10.10.10.82:35357 "POST /v3/auth/tokens HTTP/1.1" 201 7299 DEBUG:keystoneclient.auth.identity.v3.base:{"token": {"methods": ["password"], "user": {"domain": {"id": "default", "name": "Default"}, "id": "12940206c0a1464f96a47fe74d19c893", "name": "admin", "password_expires_at": null}, "audit_ids": ["WjAnZyn5TaGufxZGqnvW3Q"], "expires_at": "2021-03-31T14:22:13.000000Z", "issued_at": "2021-03-30T14:22:13.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "4334f282aa3a498aac95d2cf9aa9fa91", "name": "admin"}, "is_domain": false, "roles": [{"id": "b740e2667f034d02b6af2114ec06e20a", "name": "admin"}, {"id": "ceb68a7214c74a969cd83b9146e1fbda", "name": "heat_stack_owner"}, {"id": "6b2ae10793c34fb287dd994a8dd2e8a1", "name": "member"}, {"id": "edb76a75b2fc4af79bf09e38e07be5f1", "name": "reader"}], "catalog": [{"endpoints": [{"id": "05e0b6b67f094b64b1caa9cf78dd34b2", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8780", "region": "RegionOne"}, {"id": "9575e60e3a034ff282bbeda2287b3d01", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8780", "region": "RegionOne"}, {"id": "f5fb2fffde5d4b88b230397560ac6952", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8780", "region": "RegionOne"}], "id": "16edbecc93484568a2f5b46db9909293", "type": "placement", "name": "placement"}, {"endpoints": [{"id": "09275022c3de4405adb6bb25071a49ba", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:9292", "region": "RegionOne"}, {"id": "3a6fa026c7c9422f8cb013492f5ba0e0", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:9292", "region": "RegionOne"}, {"id": "a4fa0d097eed4c3b97ff4ed0f27096d6", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:9292", "region": "RegionOne"}], "id": "2a71f9f773734edb8ea6b711c844b334", "type": "image", "name": "glance"}, {"endpoints": [{"id": "a20fbce4737540ce91039175c7409811", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2.1", "region": "RegionOne"}, {"id": "b16bad7fdf1342329a59e231cf56851d", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2.1", "region": "RegionOne"}, {"id": "ce27e7239706458c80e42a68a13383b2", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2.1", "region": "RegionOne"}], "id": "2c56db642dde46a2ab2f766dbd0ba9c9", "type": "compute", "name": "nova"}, {"endpoints": [{"id": "8c183a6cd8244b2cb1630a662c3a292e", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8004/v1/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "958487dfae194f69b5f56f0b6e901163", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8004/v1/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "a8e0db3b2e1c49a99ed97ec18a64ece6", "interface": "internal", "region_id": "RegionOne", "url": 
"http://10.10.10.82:8004/v1/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "5428e2837ccf413886140c0603867ee3", "type": "orchestration", "name": "heat"}, {"endpoints": [{"id": "2e9a6a2fc78340b7bdc18041dec79a86", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "894ab56c6c7140ebaac8e7ec502e9b03", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "8e86543bfeb547d78cf707100d0ef7f0", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "5639441ae48441ed9c8163fc537f0c1a", "type": "compute_legacy", "name": "nova_legacy"}, {"endpoints": [{"id": "32092c9ac4634ece9d89edc2b146af54", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:9696", "region": "RegionOne"}, {"id": "dad19d0e0d8e420caa5b1276d82917e4", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:9696", "region": "RegionOne"}, {"id": "f085e59dae874005b8e6fb1df55db5d5", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:9696", "region": "RegionOne"}], "id": "56e63ef1e3b24fbca6a2efb7394eb77b", "type": "network", "name": "neutron"}, {"endpoints": [{"id": "0ccb4189837f482aac1ee67a0f5b69b7", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8000/v1", "region": "RegionOne"}, {"id": "be4fab7e4ae04786817702817ea09c34", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8000/v1", "region": "RegionOne"}, {"id": "eb8ccdc41ab94aaa9567e29e728e9109", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8000/v1", "region": "RegionOne"}], "id": "83cee8b745d74b1d9a3bc1e37d3e21bf", "type": "cloudformation", "name": "heat-cfn"}, {"endpoints": [{"id": "917c36b07fae4b278a2f63d3886933c3", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:5000", "region": "RegionOne"}, {"id": "9d40ecfea4a04327b64e5751ed8043ce", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:5000", "region": "RegionOne"}, {"id": "d4941b09a2cb40648f3b019bb1653120", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:35357", "region": "RegionOne"}], "id": "94dce13398fd42e4805ae1a7d9976e78", "type": "identity", "name": "keystone"}, {"endpoints": [{"id": "10297840af2b4370a2f63908aea3ff9c", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "a1a617f8996d439b8de091a4f15a88ae", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "a930062cb2da497bbeb727c9fec6bba0", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "d7f1cd74b2294126ac939b7d705369a3", "type": "volumev2", "name": "cinderv2"}, {"endpoints": [{"id": "1bb18181af9b4dd28bbf16491adb16fe", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8080/v1/AUTH_4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "686b82631d4d40b1b62ed1a7cb8fb1cf", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8080/v1", "region": "RegionOne"}, {"id": 
"e0cc80d07c6246668ca9cce9d736af62", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8080/v1/AUTH_4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "eabcc4d34b3c4ce499025816e2302f34", "type": "object-store", "name": "swift"}, {"endpoints": [{"id": "4c2b1dbe8d3544c9add475d63b0492d4", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v3/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "77634a04b3644d0eb2abab41f3325a1c", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v3/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "7f11d1c28644413e8cfe5ff19b91ebd2", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v3/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "fda50e1a62564367bca385a64088334d", "type": "volumev3", "name": "cinderv3"}]}} DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 10.10.10.82:8080 DEBUG:urllib3.connectionpool:http://10.10.10.82:8080 "GET /info HTTP/1.1" 401 114 INFO:swiftclient:REQ: curl -i http://10.10.10.82:8080/info -X GET -H "Accept-Encoding: gzip" INFO:swiftclient:RESP STATUS: 401 Unauthorized INFO:swiftclient:RESP HEADERS: {'Content-Type': 'application/json', 'Content-Length': '114', 'WWW-Authenticate': 'Keystone uri="http://10.10.10.82:5000"', 'X-Trans-Id': 'txac041bfff30342729c2dd-0060633415', 'X-Openstack-Request-Id': 'txac041bfff30342729c2dd-0060633415', 'Date': 'Tue, 30 Mar 2021 14:22:13 GMT'} INFO:swiftclient:RESP BODY: b'{"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}}' Capabilities GET failed: http://10.10.10.82:8080/info 401 Unauthorized [first 60 chars of response] b'{"error": {"code": 401, "title": "Unauthorized", "message": ' Failed Transaction ID: txac041bfff30342729c2dd-0060633415 From tacingiht at gmail.com Wed Mar 31 03:31:50 2021 From: tacingiht at gmail.com (Evan Zhao) Date: Wed, 31 Mar 2021 11:31:50 +0800 Subject: How to customize the xml used in libvirt from GUI/opestack command line? Message-ID: Hi there, I googled this question and found two major answers: 1. ssh to the compute node and use `virsh dumpxml` and `virsh undefine/define`, etc. 2. edit nova/virt/libvrit/config.py directly. However, it's trivial to ssh to each node and do the modification, and I prefer not to touch the nova source code, is there any better ways to achieve this? I expect to edit the namespace of a certain element and append an additional element to the xml file. Any information will be appreciated. From Stefan.Kelber at gmx.de Wed Mar 31 11:44:14 2021 From: Stefan.Kelber at gmx.de (Stefan Kelber) Date: Wed, 31 Mar 2021 13:44:14 +0200 Subject: [Swift] swift client querying capabilites fails and may affect Horizon's container management References: Message-ID: Hello, i just wanted to feed back that i found the solution apparently. According to the Swift's authentication documentation, "delay_auth_decision" must be set to true, otherwise authenticated capbility requests will fail. It also worked for me. Best Stefan > Gesendet: Dienstag, 30. März 2021 um 18:02 Uhr > Von: "Stefan Kelber" > An: "Discuss OpenStack" > Betreff: [Swift] swift client querying capabilites fails and may affect Horizon's container management > > Hello, > > i have a Kolla-Ansible AIO stack i added SWIFT to after the initial installation. 
> CLI management of containers work on CLI, but Horizon is troubled: > Clicking "container" throws "Error: Unable to fetch the policy details", and no horizon based container management is possible therefore. > > In an effort to find the root problem, i stumbled upon the following, which i assume to be related: > it looks to me like there are issues with keystone authentication of SWIFT client (3.10.1) when attempting to fetch capabilities on OS Victoria Kolla-Ansible, but *only* when querying capabilities: > > > ## 1 > (swift-container-server)[root at testhost ~]# swift --os-auth-url http://10.10.10.82:35357/v3 --auth-version 3 --os-project-name admin --os-project-domain-name Default --os-username admin --os-user-domain-name Default --os-password T-STRING-s6 stat > Account: AUTH_4334f282aa3a498aac95d2cf9aa9fa91 > Containers: 1 > Objects: 2 > Bytes: 0 > Containers in policy "policy-0": 1 > Objects in policy "policy-0": 2 > Bytes in policy "policy-0": 0 > Content-Type: text/plain; charset=utf-8 > X-Timestamp: 1616964437.15820 > Accept-Ranges: bytes > X-Account-Project-Domain-Id: default > Vary: Accept > X-Trans-Id: tx2638af9bd98646ce92dfe-00606342a1 > X-Openstack-Request-Id: tx2638af9bd98646ce92dfe-00606342a1 > Transfer-Encoding: chunked > > ## 2 > (swift-container-server)[root at testhost ~]# swift --os-auth-url http://10.10.10.82:35357/v3 --auth-version 3 --os-project-name admin --os-project-domain-name Default --os-username admin --os-user-domain-name Default --os-password T-STRING-s6 list > mycontainer > > ## 3 > (swift-container-server)[root at testhost ~]# swift --os-auth-url http://10.10.10.82:35357/v3 --auth-version 3 --os-project-name admin --os-project-domain-name Default --os-username admin --os-user-domain-name Default --os-password T-STRING-s6 capabilities > Capabilities GET failed: http://10.10.10.82:8080/info 401 Unauthorized [first 60 chars of response] b'{"error": {"code": 401, "title": "Unauthorized", "message": ' > Failed Transaction ID: txeb57fccc09e54e3fa2fa5-00606342af > > > Am i on the wrong track? 
> > Best > > Stefan > > Here is the entire debug output of the failing transaction: > > (swift-container-server)[root at testhost ~]# swift --os-auth-url http://10.10.10.82:35357/v3 --auth-version 3 --os-project-name admin --os-project-domain-name Default --os-username admin --os-user-domain-name Default --os-password T-STRING-s6 capabilities --debug > DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to http://10.10.10.82:35357/v3/auth/tokens > DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 10.10.10.82:35357 > DEBUG:urllib3.connectionpool:http://10.10.10.82:35357 "POST /v3/auth/tokens HTTP/1.1" 201 7299 > DEBUG:keystoneclient.auth.identity.v3.base:{"token": {"methods": ["password"], "user": {"domain": {"id": "default", "name": "Default"}, "id": "12940206c0a1464f96a47fe74d19c893", "name": "admin", "password_expires_at": null}, "audit_ids": ["WjAnZyn5TaGufxZGqnvW3Q"], "expires_at": "2021-03-31T14:22:13.000000Z", "issued_at": "2021-03-30T14:22:13.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "4334f282aa3a498aac95d2cf9aa9fa91", "name": "admin"}, "is_domain": false, "roles": [{"id": "b740e2667f034d02b6af2114ec06e20a", "name": "admin"}, {"id": "ceb68a7214c74a969cd83b9146e1fbda", "name": "heat_stack_owner"}, {"id": "6b2ae10793c34fb287dd994a8dd2e8a1", "name": "member"}, {"id": "edb76a75b2fc4af79bf09e38e07be5f1", "name": "reader"}], "catalog": [{"endpoints": [{"id": "05e0b6b67f094b64b1caa9cf78dd34b2", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8780", "region": "RegionOne"}, {"id": "9575e60e3a034ff282bbeda2287b3d01", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8780", "region": "RegionOne"}, {"id": "f5fb2fffde5d4b88b230397560ac6952", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8780", "region": "RegionOne"}], "id": "16edbecc93484568a2f5b46db9909293", "type": "placement", "name": "placement"}, {"endpoints": [{"id": "09275022c3de4405adb6bb25071a49ba", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:9292", "region": "RegionOne"}, {"id": "3a6fa026c7c9422f8cb013492f5ba0e0", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:9292", "region": "RegionOne"}, {"id": "a4fa0d097eed4c3b97ff4ed0f27096d6", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:9292", "region": "RegionOne"}], "id": "2a71f9f773734edb8ea6b711c844b334", "type": "image", "name": "glance"}, {"endpoints": [{"id": "a20fbce4737540ce91039175c7409811", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2.1", "region": "RegionOne"}, {"id": "b16bad7fdf1342329a59e231cf56851d", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2.1", "region": "RegionOne"}, {"id": "ce27e7239706458c80e42a68a13383b2", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2.1", "region": "RegionOne"}], "id": "2c56db642dde46a2ab2f766dbd0ba9c9", "type": "compute", "name": "nova"}, {"endpoints": [{"id": "8c183a6cd8244b2cb1630a662c3a292e", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8004/v1/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "958487dfae194f69b5f56f0b6e901163", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8004/v1/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "a8e0db3b2e1c49a99ed97ec18a64ece6", "interface": "internal", "region_id": 
"RegionOne", "url": "http://10.10.10.82:8004/v1/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "5428e2837ccf413886140c0603867ee3", "type": "orchestration", "name": "heat"}, {"endpoints": [{"id": "2e9a6a2fc78340b7bdc18041dec79a86", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "894ab56c6c7140ebaac8e7ec502e9b03", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "8e86543bfeb547d78cf707100d0ef7f0", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8774/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "5639441ae48441ed9c8163fc537f0c1a", "type": "compute_legacy", "name": "nova_legacy"}, {"endpoints": [{"id": "32092c9ac4634ece9d89edc2b146af54", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:9696", "region": "RegionOne"}, {"id": "dad19d0e0d8e420caa5b1276d82917e4", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:9696", "region": "RegionOne"}, {"id": "f085e59dae874005b8e6fb1df55db5d5", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:9696", "region": "RegionOne"}], "id": "56e63ef1e3b24fbca6a2efb7394eb77b", "type": "network", "name": "neutron"}, {"endpoints": [{"id": "0ccb4189837f482aac1ee67a0f5b69b7", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8000/v1", "region": "RegionOne"}, {"id": "be4fab7e4ae04786817702817ea09c34", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8000/v1", "region": "RegionOne"}, {"id": "eb8ccdc41ab94aaa9567e29e728e9109", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8000/v1", "region": "RegionOne"}], "id": "83cee8b745d74b1d9a3bc1e37d3e21bf", "type": "cloudformation", "name": "heat-cfn"}, {"endpoints": [{"id": "917c36b07fae4b278a2f63d3886933c3", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:5000", "region": "RegionOne"}, {"id": "9d40ecfea4a04327b64e5751ed8043ce", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:5000", "region": "RegionOne"}, {"id": "d4941b09a2cb40648f3b019bb1653120", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:35357", "region": "RegionOne"}], "id": "94dce13398fd42e4805ae1a7d9976e78", "type": "identity", "name": "keystone"}, {"endpoints": [{"id": "10297840af2b4370a2f63908aea3ff9c", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "a1a617f8996d439b8de091a4f15a88ae", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "a930062cb2da497bbeb727c9fec6bba0", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v2/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "d7f1cd74b2294126ac939b7d705369a3", "type": "volumev2", "name": "cinderv2"}, {"endpoints": [{"id": "1bb18181af9b4dd28bbf16491adb16fe", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8080/v1/AUTH_4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "686b82631d4d40b1b62ed1a7cb8fb1cf", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8080/v1", "region": "RegionOne"}, {"id": 
"e0cc80d07c6246668ca9cce9d736af62", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8080/v1/AUTH_4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "eabcc4d34b3c4ce499025816e2302f34", "type": "object-store", "name": "swift"}, {"endpoints": [{"id": "4c2b1dbe8d3544c9add475d63b0492d4", "interface": "public", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v3/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "77634a04b3644d0eb2abab41f3325a1c", "interface": "internal", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v3/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}, {"id": "7f11d1c28644413e8cfe5ff19b91ebd2", "interface": "admin", "region_id": "RegionOne", "url": "http://10.10.10.82:8776/v3/4334f282aa3a498aac95d2cf9aa9fa91", "region": "RegionOne"}], "id": "fda50e1a62564367bca385a64088334d", "type": "volumev3", "name": "cinderv3"}]}} > DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 10.10.10.82:8080 > DEBUG:urllib3.connectionpool:http://10.10.10.82:8080 "GET /info HTTP/1.1" 401 114 > INFO:swiftclient:REQ: curl -i http://10.10.10.82:8080/info -X GET -H "Accept-Encoding: gzip" > INFO:swiftclient:RESP STATUS: 401 Unauthorized > INFO:swiftclient:RESP HEADERS: {'Content-Type': 'application/json', 'Content-Length': '114', 'WWW-Authenticate': 'Keystone uri="http://10.10.10.82:5000"', 'X-Trans-Id': 'txac041bfff30342729c2dd-0060633415', 'X-Openstack-Request-Id': 'txac041bfff30342729c2dd-0060633415', 'Date': 'Tue, 30 Mar 2021 14:22:13 GMT'} > INFO:swiftclient:RESP BODY: b'{"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}}' > Capabilities GET failed: http://10.10.10.82:8080/info 401 Unauthorized [first 60 chars of response] b'{"error": {"code": 401, "title": "Unauthorized", "message": ' > Failed Transaction ID: txac041bfff30342729c2dd-0060633415 > > From juliaashleykreger at gmail.com Wed Mar 31 17:45:20 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 31 Mar 2021 10:45:20 -0700 Subject: [ironic] PTL on vacation - Week of April 12th Message-ID: Greetings folks! I'm going to take a little time off before we get to the Project Teams Gathering, so aside from the two calls I've committed myself to during that week, I'm going to be sitting on a mountain enjoying a change of scenery. If someone will volunteer to run the weekly meeting during the week of the 12th, I will be greatly appreciative. I'm hoping to have a rough PTG schedule up in advance of that meeting as well as the topic list refined, so it will just be links in the weekly agenda. Thanks, -Julia From DHilsbos at performair.com Wed Mar 31 17:55:17 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Wed, 31 Mar 2021 17:55:17 +0000 Subject: How to debug silent live migration errors In-Reply-To: <580b3a3a2bf64d14a8e9d002cfcdfd47@ES07AMSNLNT.srn.sandia.gov> References: <580b3a3a2bf64d14a8e9d002cfcdfd47@ES07AMSNLNT.srn.sandia.gov> Message-ID: <0670B960225633449A24709C291A52524FBB95A4@COM01.performair.local> John; I recently had to work through a similar issue, though I am working with Victoria, so take this with a grain of salt. I finally found the correct path by looking in the hypervisor's logs on the machines sending and receiving the live migration. For us that is KVM. Thank you, Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. 
DHilsbos at PerformAir.com www.PerformAir.com From: Linebarger, John [mailto:jmlineb at sandia.gov] Sent: Tuesday, March 30, 2021 6:24 AM To: openstack-discuss at lists.openstack.org Cc: Hostetler, Sarah N; Shurtz, Peter; Urbaniak, Kendrick Subject: How to debug silent live migration errors How would I debug silent (or mostly silent) live migration errors? We're using the Stein release of Canonical's Charmed OpenStack. I have configured it for live migration per the instructions at this link: https://docs.openstack.org/nova/pike/admin/configuring-migrations.html#section-configuring-compute-migrations Specifically: 1. I did not specify vncserver_listen=0.0.0.0 in nova.conf because we are not running VNC on our instances 2. instances_path is /var/lib/nova/instances on all compute nodes 3. I believe that MAAS is "the sole provider of DHCP and DNS for the network hosting the MAAS cluster", per https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-maas.html 4. Identical authorized_keys files are present on all compute nodes with keys from all compute nodes by default 5. I manually configured the firewalls on all compute nodes to allow libvirt to communicate between compute hosts with: sudo ufw allow 49152:49261/tcp 6. The following settings are specified in nova.conf on each compute node: live_migration_downtime = 500 live_migration_downtime_steps = 10 live_migration_downtime_delay = 75 live_migration_permit_post_copy=true Here's what happens when I try to Live Migrate from the Horizon Dashboard: 1. As admin, in the Admin --> Instances menu, I select the dropdown arrow to the right of the instance. Live Migrate Instance appears (but in black, unlike Migrate Instance, which appears in red). I select Live Migrate Instance, and whether or not I Automatically schedule new host or manually select a new host the Task column says "Migrating" and then it stops and reverts to None. The server never changes. The Action Log shows the live migration request but the Message column is blank. 2. I do the very same thing but this time select Disk Over Commit. Same results. Migrating reverts back to None and the server never changes. 3. I do the very same thing but this time select Block Migration. This time I do get an error: "Failed to live migrate instance to host 'AUTO_SCHEDULE'". And this time the Action Log has "Error" in the Message column. Same behavior with the CLI. For example, this CLI command below completes silently, yet the server for the instance never changes. john at vm-dev-john:~/bin$ openstack server migrate --live [Silent failure] john at vm-dev-john:~/bin$ openstack server show [Still running on original server] Note that I *can* successfully Migrate, both using the Horizon Dashboard and the CLI. What fails is Live Migration. I just have no idea why, and no error is displayed in the Action Log for the instance. For reference, the instance is an m1.small with 2GB of RAM, 1 VCPU, and a 20GB Cinder disk volume attached on /dev/vda. Any and all debugging ideas would be most welcome. Without logs I am simply guessing in the dark at this point. Thanks! Enjoy! John M. Linebarger, PhD, MBA Principal Member of Technical Staff Sandia National Laboratories (Office) 505-845-8282 (Cell) 505-681-4879 [https://www.certmetrics.com/api/ob/image/amazon/c/4] [https://www.certmetrics.com/api/ob/image/amazon/c/1] [https://www.certmetrics.com/api/ob/image/amazon/c/2] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Picture (Device Independent Bitmap) 1.jpg Type: image/jpeg Size: 2322 bytes Desc: Picture (Device Independent Bitmap) 1.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Picture (Device Independent Bitmap) 2.jpg Type: image/jpeg Size: 2395 bytes Desc: Picture (Device Independent Bitmap) 2.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Picture (Device Independent Bitmap) 3.jpg Type: image/jpeg Size: 2941 bytes Desc: Picture (Device Independent Bitmap) 3.jpg URL: From lyarwood at redhat.com Wed Mar 31 18:10:29 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 31 Mar 2021 19:10:29 +0100 Subject: How to debug silent live migration errors In-Reply-To: <6031f20aea0a436ab1c61da4af7facba@ES07AMSNLNT.srn.sandia.gov> References: <580b3a3a2bf64d14a8e9d002cfcdfd47@ES07AMSNLNT.srn.sandia.gov> <6031f20aea0a436ab1c61da4af7facba@ES07AMSNLNT.srn.sandia.gov> Message-ID: Live migration is an asynchronous operation so without --wait on the command line it returns once the API initially returns 202 to indicate the request was accepted [1]. As an admin you can use the server migrations API to track the status of the migration [2] via openstackclient: $ openstack server migration list --server $instance_uuid $ openstack server migration show $instance_uuid $migration_id You also have the event list so you can find the specific request-id associated with the live migration and trace that through your logs: $ openstack server event list $instance_uuid $ openstack server event show $instance_uuid $request-id Hope that helps, Lee [1] https://docs.openstack.org/api-ref/compute/?expanded=live-migrate-server-os-migratelive-action-detail#live-migrate-server-os-migratelive-action [2] https://docs.openstack.org/api-ref/compute/?expanded=show-migration-details-detail#show-migration-details [3] https://docs.openstack.org/api-guide/compute/faults.html On Tue, 30 Mar 2021 at 14:33, Linebarger, John wrote: > > How would I debug silent (or mostly silent) live migration errors? We’re using the Stein release of Canonical’s Charmed OpenStack. I have configured it for live migration per the instructions at this link: > > > > https://docs.openstack.org/nova/pike/admin/configuring-migrations.html#section-configuring-compute-migrations > > > > Specifically: > > > > 1. I did not specify vncserver_listen=0.0.0.0 in nova.conf because we are not running VNC on our instances > > 2. instances_path is /var/lib/nova/instances on all compute nodes > > 3. I believe that MAAS is “the sole provider of DHCP and DNS for the network hosting the MAAS cluster”, per https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-maas.html > > 4. Identical authorized_keys files are present on all compute nodes with keys from all compute nodes by default > > 5. I manually configured the firewalls on all compute nodes to allow libvirt to communicate between compute hosts with: > > sudo ufw allow 49152:49261/tcp > > 6. The following settings are specified in nova.conf on each compute node: > > live_migration_downtime = 500 > > live_migration_downtime_steps = 10 > > live_migration_downtime_delay = 75 > > live_migration_permit_post_copy=true > > > > Here’s what happens when I try to Live Migrate from the Horizon Dashboard: > > > > 1. As admin, in the Admin à Instances menu, I select the dropdown arrow to the right of the instance. 
Live Migrate Instance appears (but in black, unlike Migrate Instance, which appears in red). I select Live Migrate Instance, and whether or not I Automatically schedule new host or manually select a new host the Task column says “Migrating” and then it stops and reverts to None. The server never changes. The Action Log shows the live migration request but the Message column is blank. > > > > 2. I do the very same thing but this time select Disk Over Commit. Same results. Migrating reverts back to None and the server never changes. > > > > 3. I do the very same thing but this time select Block Migration. This time I do get an error: “Failed to live migrate instance to host ‘AUTO_SCHEDULE’”. And this time the Action Log has “Error” in the Message column. > > > > Same behavior with the CLI. For example, this CLI command below completes silently, yet the server for the instance never changes. > > > > openstack server migrate --live > > [Silent failure] > > openstack server show > > [Still running on original server] > > > > Note that I *can* successfully Migrate, both using the Horizon Dashboard and the CLI. What fails is Live Migration. I just have no idea why, and no error is displayed in the Action Log for the instance. > > > > For reference, the instance is an m1.small with 2GB of RAM, 1 VCPU, and a 20GB Cinder disk volume attached on /dev/vda. > > > > Any and all debugging ideas would be most welcome. Without logs I am simply guessing in the dark at this point. > > > > Thanks! Enjoy! > > > > John M. Linebarger, PhD, MBA > > Principal Member of Technical Staff > > Sandia National Laboratories > > (Office) 505-845-8282 From juliaashleykreger at gmail.com Wed Mar 31 18:15:59 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 31 Mar 2021 11:15:59 -0700 Subject: [ironic] PTG and topics oh my Message-ID: As everyone hopefully knows by now, the PTG[0] is coming up. We've had an etherpad[1] posted for about a month. During this time, the etherpad has collected 22 topics. I'm 99.95% sure there is some topic overlap, but what we need to make a schedule... or at least begin to form an agenda which requires us understanding what community members perceive as priorities or important topics. So if everyone over the next couple days could do the following two tasks, it would help tremendously for consensus building. 1) Add any missing topics you feel to be important 2) Add +1's to topics you feel are important for yourself OR the community. As always, pressing topics and topics which have the most votes will tend to have more time allocated. -Julia [0]: https://www.eventbrite.com/e/project-teams-gathering-april-2021-tickets-143360351671 [1]: https://etherpad.opendev.org/p/ironic-xena-ptg From juliaashleykreger at gmail.com Wed Mar 31 18:25:25 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 31 Mar 2021 11:25:25 -0700 Subject: [E] [ironic] How to move nodes from a 'clean failed' state into 'Available' In-Reply-To: References: Message-ID: Out of curiosity, is this a very new version of dnsmasq? or an older version? I ask because there have been some fixes and regressions related to dnsmasq updating its configuration and responding to machines appropriately. A version might be helpful, just to enable those of us who are curious to go double check things at a minimum. 
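One quick way to answer that question (the commands assume a RHEL/CentOS-based undercloud, and the container name is only an assumption; adjust it to whatever your deployment actually runs):

rpm -q dnsmasq
dnsmasq --version | head -n 1
# If dnsmasq only runs inside a container, ask that container instead, e.g.:
sudo podman exec <dnsmasq_container> dnsmasq --version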
On Wed, Mar 31, 2021 at 1:28 AM Igal Katzir wrote: > > Hello Forum, > Just for the record, the problem was resolved by restarting all the ironic containers, I believe that restarting the UC node entirely would have also fixed that. > So after the ironic containers started fresh, the PXE worked well, and after running 'openstack overcloud node introspect --all-manageable --provide' it shows: > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > | 588bc3f6-dc14-4a07-8e38-202540d046f8 | interop025 | None | power off | available | False | > | dceab84b-1d99-49b5-8f79-c589c0884269 | interop026 | None | power off | available | False | > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > I now ready for deployment of overcloud. > thanks, > Igal > > On Thu, Mar 25, 2021 at 12:48 AM Igal Katzir wrote: >> >> Thanks Jay, >> It gets into 'clean failed' state because it fails to boot into PXE mode. >> I don't understand why the DHCP does not respond to the clients request, it's like it remembers that the same client already received an IP in the past. >> Is there a way to clear the dnsmasq database of reservations? >> Igal >> >> On Wed, Mar 24, 2021 at 5:26 PM Jay Faulkner wrote: >>> >>> A node in CLEAN FAILED must be moved to MANAGEABLE state before it can be told to "provide" (which eventually puts it back in AVAILABLE). >>> >>> Try this: >>> `openstack baremetal node manage UUID`, then run the command with "provide" as you did before. >>> >>> The available states and their transitions are documented here: https://docs.openstack.org/ironic/latest/contributor/states.html >>> >>> I'll note that if cleaning failed, it's possible the node is misconfigured in such a way that will cause all deployments and cleanings to fail (e.g.; if you're using Ironic with Nova, and you attempt to provision a machine and it errors during deploy; Nova will by default attempt to clean that node, which may be why you see it end up in clean failed). So I strongly suggest you look at the last_error field on the node and attempt to determine why the failure happened before retrying. >>> >>> Good luck! >>> >>> -Jay Faulkner >>> >>> On Wed, Mar 24, 2021 at 8:20 AM Igal Katzir wrote: >>>> >>>> Hello Team, >>>> >>>> I had a situation where my undercloud-node had a problem with it’s disk and has disconnected from overcloud. >>>> I couldn’t restore the undercloud controller and ended up re-installing it (running 'openstack undercloud install’). 
>>>> The installation ended successfully but now I’m in a situation where Cleanup of the overcloud deployed nodes fails: >>>> >>>> (undercloud) [stack at interop010 ~]$ openstack baremetal node list >>>> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >>>> | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | >>>> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >>>> | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | power on | clean failed | True | >>>> | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | power on | clean failed | True | >>>> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >>>> >>>> I’ve tried to move node to available state but cannot: >>>> (undercloud) [stack at interop010 ~]$ openstack baremetal node provide 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 >>>> The requested action "provide" can not be performed on node "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". (HTTP 400) >>>> >>>> My question is: >>>> How do I make the nodes available again? >>>> as the deployment of overcloud fails with: >>>> ERROR due to "Message: No valid host was found. , Code: 500” >>>> >>>> Thanks, >>>> Igal >> >> >> >> -- >> Regards, >> Igal Katzir >> Cell +972-54-5597086 >> Interoperability Team >> INFINIDAT >> >> >> >> > > > -- > Regards, > Igal Katzir > Cell +972-54-5597086 > Interoperability Team > INFINIDAT > > > > From smooney at redhat.com Wed Mar 31 19:18:04 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 31 Mar 2021 20:18:04 +0100 Subject: [neutron] oslo.privsep migration in Neutron In-Reply-To: <586ee31b-d586-4053-fcfc-8d686c722a3c@nemebean.com> References: <20210331065356.mdldig54kpd3rj2t@p1.localdomain> <586ee31b-d586-4053-fcfc-8d686c722a3c@nemebean.com> Message-ID: <953567a5-0523-c909-5b33-7e9b57ba6e2e@redhat.com> On 31/03/2021 15:49, Ben Nemec wrote: > > > On 3/31/21 1:53 AM, Slawek Kaplonski wrote: >> Hi, >> >> On Tue, Mar 30, 2021 at 05:33:40PM +0200, Rodolfo Alonso Hernandez >> wrote: >>> Hello Neutrinos: >>> >>> During the last cycles we have been migrating the Neutron code from >>> oslo.rootwrap to oslo.privsep. Those efforts are aimed at reaching >>> the goal >>> defined in [1] and are tracked in [2]. >>> >>> At this point, starting Xena developing cycle, we can state that we >>> have >>> migrated all short lived commands from oslo.rootwrap to oslo.privsep >>> or to >>> a native implementation (that could also use oslo.privsep to elevate >>> the >>> permissions if needed). >> >> Thanks a lot Rodolfo for working on that. Great job! >> >>> >>> The problem are the daemons or services (long lived processes) that >>> Neutron >>> spawns using "ProcessManager"; this is why "ProcessManager.enable" >>> is the >>> only code calling "utils.execute" without "privsep_exec" parameter. >>> Those >>> process cannot be executed using oslo.privsep because the privsep root >>> daemon has a limited number of executing threads. im not sure i understand why this is a problem for you. you can define a context that is only used for those long running proces ot have them execute in  sperate deamon with its own threads. >>> The remaining processes >>> are [3]. 
>>> >>> Although we didn't reach the Completion Criteria defined in [1], >>> that is >>> remove the oslo.rootwrap dependency, I think we don't have an >>> alternative >>> to run those services and we should keep rootwrap for them. If there >>> are no >>> objections, once [3] is merged we can consider that Neutron (not other >>> Stadium projects) finished the efforts on [1]. >> >> Sounds good for me. >> >>> >>> Please, any feedback is always welcome. >> >> Maybe some oslo.privsep experts can take a look into that and help to >> solve that >> problem somehow. If not, then IMO we can live with it like it is now. > > One possibility is to start a separate privsep daemon for the > long-running services. I believe privsep was designed with that in > mind so you could have privileged calls running in a daemon with just > the necessary permissions for that call, not the permissions for every > privileged call in the service. yes you are meen to have a deam process per privesep context and multiple context per service so that you only have the permissions you need. so neutron should have mutiple privsep context as should nova and use the correct one for the calls it uses. the other anti patter to avoid is centralising all privsep cunciton in a set of common modules that are called into form differnt moduels. nova and neutorn  are unfrotunetly both guilty of that too but eventully i would hope that that will be fixed. > That said, I'm not sure how much, if at all, it has been used. > >> >>> >>> Regards. >>> >>> [1]https://review.opendev.org/c/openstack/governance/+/718177 >>> [2]https://storyboard.openstack.org/#!/story/2007686 >>> [3] >>> https://review.opendev.org/c/openstack/neutron/+/778444/2/etc/neutron/rootwrap.d/rootwrap.filters >>> >> > From ikatzir at infinidat.com Wed Mar 31 19:24:12 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Wed, 31 Mar 2021 22:24:12 +0300 Subject: [E] [ironic] How to move nodes from a 'clean failed' state into 'Available' In-Reply-To: References: Message-ID: Hi Julia, How can I easily tell the ironic version? This is an rhosp 16.1 installation so its pretty much new. Igal בתאריך יום ד׳, 31 במרץ 2021, 21:25, מאת Julia Kreger ‏< juliaashleykreger at gmail.com>: > Out of curiosity, is this a very new version of dnsmasq? or an older > version? I ask because there have been some fixes and regressions > related to dnsmasq updating its configuration and responding to > machines appropriately. A version might be helpful, just to enable > those of us who are curious to go double check things at a minimum. > > On Wed, Mar 31, 2021 at 1:28 AM Igal Katzir wrote: > > > > Hello Forum, > > Just for the record, the problem was resolved by restarting all the > ironic containers, I believe that restarting the UC node entirely would > have also fixed that. 
> > So after the ironic containers started fresh, the PXE worked well, and > after running 'openstack overcloud node introspect --all-manageable > --provide' it shows: > > > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > | UUID | Name | Instance UUID | > Power State | Provisioning State | Maintenance | > > > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > | 588bc3f6-dc14-4a07-8e38-202540d046f8 | interop025 | None | > power off | available | False | > > | dceab84b-1d99-49b5-8f79-c589c0884269 | interop026 | None | > power off | available | False | > > > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > > > > I now ready for deployment of overcloud. > > thanks, > > Igal > > > > On Thu, Mar 25, 2021 at 12:48 AM Igal Katzir > wrote: > >> > >> Thanks Jay, > >> It gets into 'clean failed' state because it fails to boot into PXE > mode. > >> I don't understand why the DHCP does not respond to the clients > request, it's like it remembers that the same client already received an IP > in the past. > >> Is there a way to clear the dnsmasq database of reservations? > >> Igal > >> > >> On Wed, Mar 24, 2021 at 5:26 PM Jay Faulkner < > jay.faulkner at verizonmedia.com> wrote: > >>> > >>> A node in CLEAN FAILED must be moved to MANAGEABLE state before it can > be told to "provide" (which eventually puts it back in AVAILABLE). > >>> > >>> Try this: > >>> `openstack baremetal node manage UUID`, then run the command with > "provide" as you did before. > >>> > >>> The available states and their transitions are documented here: > https://docs.openstack.org/ironic/latest/contributor/states.html > >>> > >>> I'll note that if cleaning failed, it's possible the node is > misconfigured in such a way that will cause all deployments and cleanings > to fail (e.g.; if you're using Ironic with Nova, and you attempt to > provision a machine and it errors during deploy; Nova will by default > attempt to clean that node, which may be why you see it end up in clean > failed). So I strongly suggest you look at the last_error field on the node > and attempt to determine why the failure happened before retrying. > >>> > >>> Good luck! > >>> > >>> -Jay Faulkner > >>> > >>> On Wed, Mar 24, 2021 at 8:20 AM Igal Katzir > wrote: > >>>> > >>>> Hello Team, > >>>> > >>>> I had a situation where my undercloud-node had a problem with it’s > disk and has disconnected from overcloud. > >>>> I couldn’t restore the undercloud controller and ended up > re-installing it (running 'openstack undercloud install’). 
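For readers following along, the pattern the linked documentation describes can be sketched roughly like this; the context name, config section and capability below are invented for illustration, and each such context gets its own privsep daemon holding only the capabilities it lists:

from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context

# A narrowly scoped context: the daemon started for it holds only
# CAP_DAC_READ_SEARCH instead of every capability the service ever needs.
file_reader_ctx = priv_context.PrivContext(
    'mypkg',
    cfg_section='mypkg_privileged_reader',
    pypath=__name__ + '.file_reader_ctx',
    capabilities=[caps.CAP_DAC_READ_SEARCH],
)

@file_reader_ctx.entrypoint
def privileged_read_file(path):
    # Runs inside the privsep daemon spawned for file_reader_ctx.
    with open(path, 'rb') as f:
        return f.read()

A service can define several such contexts, one per kind of privileged work, so that compromising one daemon does not hand over every capability at once.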
> >>>> The installation ended successfully but now I’m in a situation where > Cleanup of the overcloud deployed nodes fails: > >>>> > >>>> (undercloud) [stack at interop010 ~]$ openstack baremetal node list > >>>> > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > >>>> | UUID | Name | Instance > UUID | Power State | Provisioning State | Maintenance | > >>>> > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > >>>> | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | > power on | clean failed | True | > >>>> | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | > power on | clean failed | True | > >>>> > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ > >>>> > >>>> I’ve tried to move node to available state but cannot: > >>>> (undercloud) [stack at interop010 ~]$ openstack baremetal node provide > 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 > >>>> The requested action "provide" can not be performed on node > "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". > (HTTP 400) > >>>> > >>>> My question is: > >>>> How do I make the nodes available again? > >>>> as the deployment of overcloud fails with: > >>>> ERROR due to "Message: No valid host was found. , Code: 500” > >>>> > >>>> Thanks, > >>>> Igal > >> > >> > >> > >> -- > >> Regards, > >> Igal Katzir > >> Cell +972-54-5597086 > >> Interoperability Team > >> INFINIDAT > >> > >> > >> > >> > > > > > > -- > > Regards, > > Igal Katzir > > Cell +972-54-5597086 > > Interoperability Team > > INFINIDAT > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Mar 31 19:37:45 2021 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 31 Mar 2021 14:37:45 -0500 Subject: [neutron] oslo.privsep migration in Neutron In-Reply-To: <953567a5-0523-c909-5b33-7e9b57ba6e2e@redhat.com> References: <20210331065356.mdldig54kpd3rj2t@p1.localdomain> <586ee31b-d586-4053-fcfc-8d686c722a3c@nemebean.com> <953567a5-0523-c909-5b33-7e9b57ba6e2e@redhat.com> Message-ID: On 3/31/21 2:18 PM, Sean Mooney wrote: > the other anti patter to avoid is centralising all privsep cunciton in a > set of common modules that are called into form differnt moduels. > nova and neutorn  are unfrotunetly both guilty of that too but eventully > i would hope that that will be fixed. I thought this was intentional. It's even mentioned in the docs as a way to make it clear that privileged calls will run in a different context: https://docs.openstack.org/oslo.privsep/victoria/user/index.html#using-a-privileged-function From juliaashleykreger at gmail.com Wed Mar 31 19:49:38 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 31 Mar 2021 12:49:38 -0700 Subject: [E] [ironic] How to move nodes from a 'clean failed' state into 'Available' In-Reply-To: References: Message-ID: In that case, file a case with Red Hat support and provide them an sosreport. Basically, you shouldn't have to reboot or restart dnsmasq to get things to wake up. It is not about the version of ironic, but more about the version of dnsmasq, but if there is an issue, their support org needs that visibility so we can track it and get it remedied because it is not an upstream issue in that case, but likely a downstream issue. 
On Wed, Mar 31, 2021 at 12:24 PM Igal Katzir wrote: > > Hi Julia, > How can I easily tell the ironic version? > This is an rhosp 16.1 installation so its pretty much new. > Igal > > בתאריך יום ד׳, 31 במרץ 2021, 21:25, מאת Julia Kreger ‏: >> >> Out of curiosity, is this a very new version of dnsmasq? or an older >> version? I ask because there have been some fixes and regressions >> related to dnsmasq updating its configuration and responding to >> machines appropriately. A version might be helpful, just to enable >> those of us who are curious to go double check things at a minimum. >> >> On Wed, Mar 31, 2021 at 1:28 AM Igal Katzir wrote: >> > >> > Hello Forum, >> > Just for the record, the problem was resolved by restarting all the ironic containers, I believe that restarting the UC node entirely would have also fixed that. >> > So after the ironic containers started fresh, the PXE worked well, and after running 'openstack overcloud node introspect --all-manageable --provide' it shows: >> > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >> > | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | >> > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >> > | 588bc3f6-dc14-4a07-8e38-202540d046f8 | interop025 | None | power off | available | False | >> > | dceab84b-1d99-49b5-8f79-c589c0884269 | interop026 | None | power off | available | False | >> > +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >> > >> > I now ready for deployment of overcloud. >> > thanks, >> > Igal >> > >> > On Thu, Mar 25, 2021 at 12:48 AM Igal Katzir wrote: >> >> >> >> Thanks Jay, >> >> It gets into 'clean failed' state because it fails to boot into PXE mode. >> >> I don't understand why the DHCP does not respond to the clients request, it's like it remembers that the same client already received an IP in the past. >> >> Is there a way to clear the dnsmasq database of reservations? >> >> Igal >> >> >> >> On Wed, Mar 24, 2021 at 5:26 PM Jay Faulkner wrote: >> >>> >> >>> A node in CLEAN FAILED must be moved to MANAGEABLE state before it can be told to "provide" (which eventually puts it back in AVAILABLE). >> >>> >> >>> Try this: >> >>> `openstack baremetal node manage UUID`, then run the command with "provide" as you did before. >> >>> >> >>> The available states and their transitions are documented here: https://docs.openstack.org/ironic/latest/contributor/states.html >> >>> >> >>> I'll note that if cleaning failed, it's possible the node is misconfigured in such a way that will cause all deployments and cleanings to fail (e.g.; if you're using Ironic with Nova, and you attempt to provision a machine and it errors during deploy; Nova will by default attempt to clean that node, which may be why you see it end up in clean failed). So I strongly suggest you look at the last_error field on the node and attempt to determine why the failure happened before retrying. >> >>> >> >>> Good luck! >> >>> >> >>> -Jay Faulkner >> >>> >> >>> On Wed, Mar 24, 2021 at 8:20 AM Igal Katzir wrote: >> >>>> >> >>>> Hello Team, >> >>>> >> >>>> I had a situation where my undercloud-node had a problem with it’s disk and has disconnected from overcloud. >> >>>> I couldn’t restore the undercloud controller and ended up re-installing it (running 'openstack undercloud install’). 
>> >>>> The installation ended successfully but now I’m in a situation where Cleanup of the overcloud deployed nodes fails: >> >>>> >> >>>> (undercloud) [stack at interop010 ~]$ openstack baremetal node list >> >>>> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >> >>>> | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | >> >>>> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >> >>>> | 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 | interop025 | None | power on | clean failed | True | >> >>>> | 4b02703a-f765-4ebb-85ed-75e88b4cbea5 | interop026 | None | power on | clean failed | True | >> >>>> +--------------------------------------+------------+---------------+-------------+--------------------+-------------+ >> >>>> >> >>>> I’ve tried to move node to available state but cannot: >> >>>> (undercloud) [stack at interop010 ~]$ openstack baremetal node provide 97b9a603-f64f-47c1-9fb4-6c68a5b38ff6 >> >>>> The requested action "provide" can not be performed on node "97b9a603-f64f-47c1-9fb4-6c68a5b38ff6" while it is in state "clean failed". (HTTP 400) >> >>>> >> >>>> My question is: >> >>>> How do I make the nodes available again? >> >>>> as the deployment of overcloud fails with: >> >>>> ERROR due to "Message: No valid host was found. , Code: 500” >> >>>> >> >>>> Thanks, >> >>>> Igal >> >> >> >> >> >> >> >> -- >> >> Regards, >> >> Igal Katzir >> >> Cell +972-54-5597086 >> >> Interoperability Team >> >> INFINIDAT >> >> >> >> >> >> >> >> >> > >> > >> > -- >> > Regards, >> > Igal Katzir >> > Cell +972-54-5597086 >> > Interoperability Team >> > INFINIDAT >> > >> > >> > >> > From smooney at redhat.com Wed Mar 31 20:28:53 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 31 Mar 2021 21:28:53 +0100 Subject: [neutron] oslo.privsep migration in Neutron In-Reply-To: References: <20210331065356.mdldig54kpd3rj2t@p1.localdomain> <586ee31b-d586-4053-fcfc-8d686c722a3c@nemebean.com> <953567a5-0523-c909-5b33-7e9b57ba6e2e@redhat.com> Message-ID: <538046e5-f55b-2206-e7b1-7f4976164353@redhat.com> On 31/03/2021 20:37, Ben Nemec wrote: > > > On 3/31/21 2:18 PM, Sean Mooney wrote: >> the other anti patter to avoid is centralising all privsep cunciton >> in a set of common modules that are called into form differnt moduels. >> nova and neutorn  are unfrotunetly both guilty of that too but >> eventully i would hope that that will be fixed. > > I thought this was intentional. It's even mentioned in the docs as a > way to make it clear that privileged calls will run in a different > context: > https://docs.openstack.org/oslo.privsep/victoria/user/index.html#using-a-privileged-function yep we have talked about this before and said we shoudl fix that before. i dont have my old mailing list post but what we have found at least in nova centralising the calls encurages overly broad functions with far to much privaldge to be written. case in point novas mount functjion https://github.com/openstack/nova/blob/master/nova/privsep/fs.py#L30-L38 which take andy files system devnec enad mount point with any options you care to pass. our our chown , chmod and writefile commands https://github.com/openstack/nova/blob/master/nova/privsep/path.py#L28-L63 we even had to write a hacking check to stope people form importing modules form privesep using from nova.privsep import path since at the call path.writefile(...) would give no indication that its actuly privaldaged. 
We have a hacking check to enforce no aliasing of privsep imports in nova: https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L832-L858 Privsep contexts should be defined at the top of the project module namespace (e.g. nova), and ideally targeted privileged functions should be written in the module where they are used, using a context that provides only the permissions they need. Ideally you should have "privileged" in the name of the function too. os-vif does not quite get the naming right, but we do at least use a separate privsep context for the plugins vs the common code, and use inline privsep functions instead of centralising them: https://github.com/openstack/os-vif/blob/master/vif_plug_linux_bridge/privsep.py https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/privsep.py https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/linux_net.py#L78-L90 One of the important things to note, for example in the ovs plugin, is that we restrict the context to only be used by submodules of the plugin:

vif_plug = priv_context.PrivContext(
    "vif_plug_ovs",   # <----- this is what restricts which modules the decorator can be used in.
    cfg_section="vif_plug_ovs_privileged",
    pypath=__name__ + ".vif_plug",
    capabilities=[c.CAP_NET_ADMIN],
)

The enforcement is done by https://github.com/openstack/oslo.privsep/blob/83870bd2655f3250bb5d5aed7c9865ba0b5e4770/oslo_privsep/priv_context.py#L219-L221 It also only has CAP_NET_ADMIN, since that is the only capability the ovs plugin needs. If the linuxbridge plugin needed CAP_SYS_ADMIN for some reason, we could add it to its context without affecting the ovs one, although the correct thing to do would be to define a new context instead of extending the existing one. Nova's privsep context has far too many permissions and should be broken into at least 3 contexts (https://github.com/openstack/nova/blob/master/nova/privsep/__init__.py):

sys_admin_pctxt = priv_context.PrivContext(
    'nova',
    cfg_section='nova_sys_admin',
    pypath=__name__ + '.sys_admin_pctxt',
    capabilities=[capabilities.CAP_CHOWN,
                  capabilities.CAP_DAC_OVERRIDE,
                  capabilities.CAP_DAC_READ_SEARCH,
                  capabilities.CAP_FOWNER,
                  capabilities.CAP_NET_ADMIN,
                  capabilities.CAP_SYS_ADMIN],
)

should be

sys_admin_pctxt = priv_context.PrivContext(
    'nova',
    cfg_section='nova_sys_admin',
    pypath=__name__ + '.sys_admin_pctxt',
    capabilities=[capabilities.CAP_SYS_ADMIN],
)

file_admin_pctxt = priv_context.PrivContext(
    'nova',
    cfg_section='nova_file_admin',
    pypath=__name__ + '.file_admin_pctxt',
    capabilities=[capabilities.CAP_CHOWN,
                  capabilities.CAP_DAC_OVERRIDE,
                  capabilities.CAP_DAC_READ_SEARCH,
                  capabilities.CAP_FOWNER],
)

net_admin_pctxt = priv_context.PrivContext(
    'nova',
    cfg_section='nova_net_admin',
    pypath=__name__ + '.net_admin_pctxt',
    capabilities=[capabilities.CAP_NET_ADMIN],
)

Neutron is in slightly better shape since it has 3 contexts, but it is not really divided up correctly IMO: https://github.com/openstack/neutron/blob/master/neutron/privileged/__init__.py The default context still has far too many caps, as noted in the TODO. Privsep as used today in nova and neutron provides far less security than the library is capable of, as the current usage is very primitive.
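A rough sketch of such a targeted privileged function, with invented names and using pyroute2 only because that is what Neutron's privileged helpers are built on, might look like this:

from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context
from pyroute2 import IPRoute

# A context scoped to link management only; nothing else should borrow it.
link_admin_ctx = priv_context.PrivContext(
    'mypkg',
    cfg_section='mypkg_privileged_link_admin',
    pypath=__name__ + '.link_admin_ctx',
    capabilities=[caps.CAP_NET_ADMIN],
)

@link_admin_ctx.entrypoint
def privileged_set_link_mtu(ifname, mtu):
    # Narrow inputs (an interface name and an integer), a single capability,
    # and "privileged" in the name so call sites are obviously elevated.
    with IPRoute() as ipr:
        index = ipr.link_lookup(ifname=ifname)[0]
        ipr.link('set', index=index, mtu=int(mtu))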
Unfortunately I don't think https://docs.openstack.org/oslo.privsep/victoria/user/index.html actually models the best practices for defining contexts or functions, or for using them. It is based on what nova did before we had learned how to actually use the library. We should rewrite all of nova's privsep usage, but we have never had the time to do that. Neutron is in better shape, but it can still delete any file on disk or spawn any process with caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN, caps.CAP_DAC_OVERRIDE, caps.CAP_DAC_READ_SEARCH and caps.CAP_SYS_PTRACE https://github.com/openstack/neutron/blob/master/neutron/privileged/agent/linux/utils.py#L49-L62 That process would not be full root, but it would be close enough. Having a centralised privsep module and only one or a few contexts tends to make people lazy and not ask either "what is the smallest set of permissions I need" or "what is the smallest set of function inputs I need". So neutron and nova might have almost got rid of rootwrap, but they are not secure by any means in their current usage. I also don't know of any active exploits as a result of this; it's great that neutron is almost done moving, but I just wanted to highlight that once all the calls are moved, the real work of actually hardening neutron and using privsep properly still needs to be done. If you just use privsep blindly, it is really easy to end up with a less secure system than when you had rootwrap.
From iurygregory at gmail.com Wed Mar 31 21:42:27 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Wed, 31 Mar 2021 23:42:27 +0200 Subject: [ironic] PTL on vacation - Week of April 12th In-Reply-To: References: Message-ID: Hi Julia, I can run the weekly meeting without problems during this week. Enjoy the PTO =) On Wed, Mar 31, 2021 at 19:48, Julia Kreger < juliaashleykreger at gmail.com> wrote: > Greetings folks! > > I'm going to take a little time off before we get to the Project Teams > Gathering, so aside from the two calls I've committed myself to during > that week, I'm going to be sitting on a mountain enjoying a change of > scenery. > > If someone will volunteer to run the weekly meeting during the week of > the 12th, I will be greatly appreciative. I'm hoping to have a rough > PTG schedule up in advance of that meeting as well as the topic list > refined, so it will just be links in the weekly agenda. > > Thanks, > > -Julia > > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gmann at ghanshyammann.com Wed Mar 31 23:19:52 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 31 Mar 2021 18:19:52 -0500 Subject: [tc] Technical Committee weekly meeting In-Reply-To: <1787f0b2273.b33e94cc1241611.4906469719374016487@ghanshyammann.com> References: <1787f0b2273.b33e94cc1241611.4906469719374016487@ghanshyammann.com> Message-ID: <1788a960cd8.b74ae1bb1374090.5826600918675653034@ghanshyammann.com> ---- On Mon, 29 Mar 2021 12:32:19 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for April 1st at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, March 31st, at 2100 UTC.
> > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting Hello Everyone, Below is the agenda for tomorrow's TC meeting and I will be your chair. * Follow up on past action items * PTG ** https://etherpad.opendev.org/p/tc-xena-ptg * Gate performance and heavy job configs (dansmith) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * PTL assignment for Xena cycle leaderless projects (gmann) ** https://etherpad.opendev.org/p/xena-leaderless * Election for one Vacant TC seat (gmann) ** http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021334.html * Elect Vice Chair ** https://review.opendev.org/c/openstack/governance/+/783409 ** https://review.opendev.org/c/openstack/governance/+/783411 * Community newsletter: "OpenStack project news" snippets ** https://etherpad.opendev.org/p/newsletter-openstack-news * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open If you can’t attend, please put your name in the “Apologies for Absence” section. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann > > -gmann > >
From tonyliu0592 at hotmail.com Wed Mar 31 23:59:36 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Wed, 31 Mar 2021 23:59:36 +0000 Subject: launch VM on volume vs. image Message-ID: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate on the high-level workflow for those two cases? Thanks! Tony
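A likely explanation, assuming the common all-Ceph layout: when booting on an image, Nova's RBD image backend can create the root disk as a copy-on-write clone of the Glance image directly inside Ceph (or reuse a cached base image), which is quick. When booting on a volume, Cinder first has to build a bootable volume from that image, and that step is only a cheap RBD clone when Glance and Cinder share the same Ceph cluster and the image is in raw format; with a qcow2 image, or with separate clusters, Cinder downloads and converts the whole image into the volume, which is usually where the extra minutes go. The two paths can be compared with a sketch like the following, where the image, flavor and network names are placeholders:

# Boot directly on the image; Nova's RBD backend clones it into its ephemeral pool.
openstack server create --image <image> --flavor <flavor> --network <network> vm-on-image

# Boot on a volume; Cinder first turns the image into a bootable volume, then Nova attaches it.
openstack volume create --image <image> --size 20 --bootable boot-vol
openstack server create --volume boot-vol --flavor <flavor> --network <network> vm-on-volume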