From dms at danplanet.com Thu Apr 1 00:00:12 2021 From: dms at danplanet.com (Dan Smith) Date: Wed, 31 Mar 2021 17:00:12 -0700 Subject: [all] Gate resources and performance In-Reply-To: (Wesley Hayutin's message of "Wed, 31 Mar 2021 11:02:37 -0600") References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com> Message-ID: Hi Wes, > Just wanted to check back in on the resource consumption topic. > Looking at my measurements the TripleO group has made quite a bit of > progress keeping our enqued zuul time lower than our historical > average. Do you think we can measure where things stand now and have > some new numbers available at the PTG? Yeah, in the last few TC meetings I've been saying things like "let's not sample right now because we're in such a weird high-load situation with the release" and "...but we seem to be chewing through a lot of patches, so things seem better." I definitely think the changes made by tripleo and others are helping. Life definitely "feels" better lately. I'll try to circle back and generate a new set of numbers with my script, and also see if I can get updated numbers from Clark on the overall percentages. Thanks! --Dan From missile0407 at gmail.com Thu Apr 1 00:59:58 2021 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 1 Apr 2021 08:59:58 +0800 Subject: launch VM on volume vs. image In-Reply-To: References: Message-ID: Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu 於 2021年4月1日 週四 上午8:09寫道: > Hi, > > With Ceph as the backend storage, launching a VM on volume takes much > longer than launching on image. Why is that? > Could anyone elaborate the high level workflow for those two cases? > > > Thanks! > Tony > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Apr 1 01:15:49 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 31 Mar 2021 19:15:49 -0600 Subject: [all] Gate resources and performance In-Reply-To: References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com> Message-ID: On Wed, Mar 31, 2021 at 6:00 PM Dan Smith wrote: > Hi Wes, > > > Just wanted to check back in on the resource consumption topic. > > Looking at my measurements the TripleO group has made quite a bit of > > progress keeping our enqued zuul time lower than our historical > > average. Do you think we can measure where things stand now and have > > some new numbers available at the PTG? > > Yeah, in the last few TC meetings I've been saying things like "let's > not sample right now because we're in such a weird high-load situation > with the release" and "...but we seem to be chewing through a lot of > patches, so things seem better." I definitely think the changes made by > tripleo and others are helping. Life definitely "feels" better > lately. I'll try to circle back and generate a new set of numbers with > my script, and also see if I can get updated numbers from Clark on the > overall percentages. > > Thanks! > > --Dan > > Sounds good.. 
I'm keeping an eye in the meantime w/ http://dashboard-ci.tripleo.org/d/Z4vLSmOGk/cockpit?viewPanel=71&orgId=1 SELECT max("enqueued_time") FROM "zuul-queue-status" WHERE ("pipeline" = 'gate' AND "queue" = 'tripleo') and http://dashboard-ci.tripleo.org/d/Z4vLSmOGk/cockpit?viewPanel=398&orgId=1&from=now-6M&to=now SELECT max("enqueued_time") FROM "zuul-queue-status" WHERE ("pipeline" = 'gate') AND time >= 1601514835817ms GROUP BY time(10m) fill(0);SELECT mean("enqueued_time") FROM "zuul-queue-status" WHERE ("pipeline" = 'check') AND time >= 1601514835817ms GROUP BY time(10m) fill(0) 0/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Apr 1 01:32:29 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 31 Mar 2021 19:32:29 -0600 Subject: [tripleo][ci] jobs in retry_limit or skipped Message-ID: Greetings, Just FYI.. I believe we hit a bump in the road in upstream infra ( not sure yet ). It appears to be global and not isolated to tripleo or centos based jobs. I have a tripleo bug to track it. https://bugs.launchpad.net/tripleo/+bug/1922148 See #opendev for details, it looks like infra is very busy working and fixing the issues atm. http://eavesdrop.openstack.org/irclogs/%23opendev/%23opendev.2021-03-31.log.html#t2021-03-31T10:34:51 http://eavesdrop.openstack.org/irclogs/%23opendev/latest.log.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From dvd at redhat.com Thu Apr 1 02:04:00 2021 From: dvd at redhat.com (David Vallee Delisle) Date: Wed, 31 Mar 2021 22:04:00 -0400 Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core In-Reply-To: References: Message-ID: +1 On Wed, Mar 31, 2021 at 9:51 AM Alex Schultz wrote: > +1 > > On Wed, Mar 31, 2021 at 3:30 AM Takashi Kajinami > wrote: > >> Hello, >> >> >> I'd like to propose Alan Bishop (abishop) for the core team of >> puppet-cinder >> and puppet-glance. >> Alan has been actively involved in these 2 modules for a few years >> and has implemented some nice features like multiple backend support in >> glance, >> cinder s3 backup driver and etc, which expanded adoption of >> puppet-openstack. >> He has also provided good reviews on patches for these 2 repos based >> on his understanding about our code, puppet and serverspec. >> >> He is an active contributor to cinder and has deep knowledge about it. >> In addition He is also a core review in TripleO, which consumes our >> puppet modules, >> and mainly covers storage components like cinder and glance, so he is >> familiar >> with the way how these two components are deployed and configured. >> >> I believe adding him to our board helps us improve our review of these >> two modules. >> >> I'll wait for one week to hear any feedback from other core reviewers. >> >> Thank you, >> Takashi >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Thu Apr 1 02:18:42 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 1 Apr 2021 02:18:42 +0000 Subject: launch VM on volume vs. image In-Reply-To: References: , Message-ID: Thank you Eddie! It makes sense. Creating a snapshot is much faster than copying image to a volume. Tony ________________________________________ From: Eddie Yen Sent: March 31, 2021 05:59 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. 
image Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu > 於 2021年4月1日 週四 上午8:09寫道: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate the high level workflow for those two cases? Thanks! Tony From iwienand at redhat.com Thu Apr 1 03:17:11 2021 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 1 Apr 2021 14:17:11 +1100 Subject: Retiring the planet.openstack.org service Message-ID: Hello, We plan to retire the planet.openstack.org RSS aggregation service soon. The host is running an unsupported distribution, and we have not found any open source alternatives that seem to be currently maintained and deployable. On consideration of our limited infra resources, we feel it will be better to sunset this service at this time. It is likely that effort will be better spent getting information to more relevant channels in 2021, such as social media sites, etc. I have extracted an OPML file of the active blogs in [1]; most every feed reader can import this file. Thanks, -i [1] https://review.opendev.org/c/opendev/system-config/+/784191 From missile0407 at gmail.com Thu Apr 1 05:47:06 2021 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 1 Apr 2021 13:47:06 +0800 Subject: launch VM on volume vs. image In-Reply-To: References: Message-ID: BTW, If the source image is based on compression or thin provision type (like VDI, QCOW2, VMDK, etc.) It will take a long time to create no matter boot on image or volume. Nova will convert the image based on these type first during creation. Because Ceph RBD doesn't support. Make sure all the images you upload is based on RBD format (or RAW format in other word), unless the virtual size of image is small. . Tony Liu 於 2021年4月1日 週四 上午10:18寫道: > Thank you Eddie! It makes sense. Creating a snapshot is much faster > than copying image to a volume. > > Tony > ________________________________________ > From: Eddie Yen > Sent: March 31, 2021 05:59 PM > To: Tony Liu > Cc: openstack-discuss at lists.openstack.org > Subject: Re: launch VM on volume vs. image > > Hi Tony, > > In Ceph layer, IME, launching VM on image is creating a snapshot from > source image in Nova ephemeral pool. > If you check the RBD image created in Nova ephemeral pool, all images have > their own parents from glance images. > > For launching VM on volume, it will "copy" the image to volume pool first, > resize to specified disk size, then connect and boot. > Because it's not create a snapshot from image, so it will take much longer. > > Eddie. > > Tony Liu > 於 > 2021年4月1日 週四 上午8:09寫道: > Hi, > > With Ceph as the backend storage, launching a VM on volume takes much > longer than launching on image. Why is that? > Could anyone elaborate the high level workflow for those two cases? > > > Thanks! > Tony > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Thu Apr 1 05:58:45 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 1 Apr 2021 08:58:45 +0300 Subject: [all] Gate resources and performance In-Reply-To: References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com> Message-ID: On Wed, Mar 31, 2021 at 8:04 PM Wesley Hayutin wrote: > > > On Wed, Feb 10, 2021 at 1:05 PM Dan Smith wrote: > >> > Here's the timing I see locally: >> > Vanilla devstack: 775 >> > Client service alone: 529 >> > Parallel execution: 527 >> > Parallel client service: 465 >> > >> > Most of the difference between the last two is shorter async_wait >> > times because the deployment steps are taking less time. So not quite >> > as much as before, but still a decent increase in speed. >> >> Yeah, cool, I think you're right that we'll just serialize the >> calls. It may not be worth the complexity, but if we make the OaaS >> server able to do a few things in parallel, then we'll re-gain a little >> more perf because we'll go back to overlapping the *server* side of >> things. Creating flavors, volume types, networks and uploading the image >> to glance are all things that should be doable in parallel in the server >> projects. >> >> 465s for a devstack is awesome. Think of all the developer time in >> $local_fiat_currency we could have saved if we did this four years >> ago... :) >> >> --Dan >> >> > Hey folks, > Just wanted to check back in on the resource consumption topic. > Looking at my measurements the TripleO group has made quite a bit of > progress keeping our enqued zuul time lower than our historical average. > Do you think we can measure where things stand now and have some new > numbers available at the PTG? > > /me notes we had a blip on 3/25 but there was a one off issue w/ nodepool > in our gate. > > Marios Andreou has put a lot of time into this, and others as well. > Kudo's Marios! > Thanks all! > o/ thanks for the shout out ;) Big thanks to Sagi (sshnaidm), Chandan (chkumar), Wes (weshay), Alex (mwhahaha) and everyone else who helped us merge those things https://review.opendev.org/q/topic:tripleo-ci-reduce - things like tightening files/irrelevant_files matches, removal of older/non voting jobs, removal of upgrade master jobs and removal of layout overrides across tripleo repos (using the centralised tripleo-ci repo templates everywhere instead) to make maintenance easier so it is more likely that we will notice and fix new issues moving forward regards, marios -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Apr 1 06:52:08 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 1 Apr 2021 08:52:08 +0200 Subject: [neutron] Drivers team meeting 02.04.2021 cancelled Message-ID: <20210401065208.6l7c3g4fnweqsy4m@p1.localdomain> Hi, As tomorrow's Good Friday is public holiday in many countries, at least in Europe, and I will also be on PTO, let's cancel the drivers meeting. Have a great holidays and see You on the meeting next week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From balazs.gibizer at est.tech Thu Apr 1 07:26:54 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Thu, 01 Apr 2021 09:26:54 +0200 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> <78MQQQ.FMXMGIRLYEMQ@est.tech> Message-ID: On Wed, Mar 31, 2021 at 16:54, Herve Beraud wrote: > Hello Balazs, > > Now that the os-brick changes on nova are merged do you plan to > propose a RC2? > > https://review.opendev.org/c/openstack/nova/+/783674 Hi Herve, Yes we will propose an RC2 from nova to release the os-brick fix. I've now created the release patch[1] but we agreed with Sylvain that we are not rushing to actually make a release this week so that if anything else pops up then we can include those as well into the RC2. Cheers, gibi [1] https://review.opendev.org/c/openstack/releases/+/784201 > > Le lun. 29 mars 2021 à 17:43, Balazs Gibizer > a écrit : >> >> >> On Mon, Mar 29, 2021 at 16:05, Balazs Gibizer >> >> wrote: >> > >> > >> > On Mon, Mar 29, 2021 at 08:50, Brian Rosmaita >> > wrote: >> >> Hello Requirements Team, >> >> >> >> The Cinder team recently became aware of a potential data-loss >> bug >> >> [0] that has been fixed in os-brick master [1] and backported to >> >> os-brick stable/wallaby [2]. We've proposed a release of >> os-brick >> >> 4.4.0 from stable/wallaby [3] and are petitioning for an RFE to >> >> include 4.4.0 in the wallaby release. >> >> >> >> We have three jobs running tempest with os-brick source in master >> >> that have passed with [1]: os-brick-src-devstack-plugin-ceph >> [4], >> >> os-brick-src-tempest-lvm-lio-barbican [5],and >> >> os-brick-src-tempest-nfs [6]. The difference between os-brick >> >> master (at the time the tests were run) and stable/wallaby since >> >> the 4.3.0 tag is as follows: >> >> >> >> master: >> >> d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg >> change >> >> (Gorka Eguileor) >> >> 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to >> >> modifying offset" (Zuul) >> >> 28545c7 4 months ago RBD: catch read exceptions prior to >> modifying >> >> offset (Jon Bernard) >> >> 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" >> >> (Zuul) >> >> 7cfdb76 6 weeks ago Dropping explicit unicode literal >> >> (tushargite96) >> >> 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack >> Release >> >> Bot) >> >> ab57392 3 weeks ago Update master for stable/wallaby (OpenStack >> >> Release Bot) >> >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver >> >> connection information compatibility fix" (Zuul) >> >> >> >> stable/wallaby: >> >> f86944b 3 days ago Add release note prelude for os-brick 4.4.0 >> >> (Brian Rosmaita) >> >> c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg >> change >> >> (Gorka Eguileor) >> >> 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for >> stable/wallaby >> >> (OpenStack Release Bot) >> >> f3f93dc 3 weeks ago Update .gitreview for stable/wallaby >> >> (OpenStack Release Bot) >> >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver >> >> connection information compatibility fix" (Zuul) >> >> >> >> This gives us very high confidence that the results of the tests >> run >> >> against master also apply to stable/wallaby at f86944b. >> >> >> >> Thank you for considering this request. 
>> >> >> >> (I've included Nova here because the bug occurs when the >> >> configuration option that enables multipath connections on a >> >> compute is changed while volumes are attached, so if this RFE is >> >> approved, nova might want to raise the minimum version of >> os-brick >> >> in wallaby to 4.4.0.) >> >> >> > >> > Thanks for the heads up. After the new os-brick version is >> released I >> > will prepare a version bump patch in nova on master and >> > stable/wallaby. This also means that nova will release an RC2. >> >> I've proposed the nova patch on master to bump min os-brick to >> 4.3.1 in >> nova[1] >> >> [1] https://review.opendev.org/c/openstack/nova/+/783674 >> >> > >> > Cheers, >> > gibi >> > >> >> >> >> [0] https://launchpad.net/bugs/1921381 >> >> [1] https://review.opendev.org/c/openstack/os-brick/+/782992 >> >> [2] https://review.opendev.org/c/openstack/os-brick/+/783207 >> >> [3] https://review.opendev.org/c/openstack/releases/+/783641 >> >> [4] >> >> >> https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb >> >> [5] >> >> >> https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 >> >> [6] >> >> >> https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa >> > >> > >> > >> > >> >> >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From hberaud at redhat.com Thu Apr 1 07:30:22 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 09:30:22 +0200 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> <78MQQQ.FMXMGIRLYEMQ@est.tech> Message-ID: Make sense thank you Le jeu. 1 avr. 2021 à 09:27, Balazs Gibizer a écrit : > > > On Wed, Mar 31, 2021 at 16:54, Herve Beraud wrote: > > Hello Balazs, > > > > Now that the os-brick changes on nova are merged do you plan to > > propose a RC2? > > > > https://review.opendev.org/c/openstack/nova/+/783674 > > Hi Herve, > > Yes we will propose an RC2 from nova to release the os-brick fix. I've > now created the release patch[1] but we agreed with Sylvain that we are > not rushing to actually make a release this week so that if anything > else pops up then we can include those as well into the RC2. > > Cheers, > gibi > > [1] https://review.opendev.org/c/openstack/releases/+/784201 > > > > > Le lun. 
29 mars 2021 à 17:43, Balazs Gibizer > > a écrit : > >> > >> > >> On Mon, Mar 29, 2021 at 16:05, Balazs Gibizer > >> > >> wrote: > >> > > >> > > >> > On Mon, Mar 29, 2021 at 08:50, Brian Rosmaita > >> > wrote: > >> >> Hello Requirements Team, > >> >> > >> >> The Cinder team recently became aware of a potential data-loss > >> bug > >> >> [0] that has been fixed in os-brick master [1] and backported to > >> >> os-brick stable/wallaby [2]. We've proposed a release of > >> os-brick > >> >> 4.4.0 from stable/wallaby [3] and are petitioning for an RFE to > >> >> include 4.4.0 in the wallaby release. > >> >> > >> >> We have three jobs running tempest with os-brick source in master > >> >> that have passed with [1]: os-brick-src-devstack-plugin-ceph > >> [4], > >> >> os-brick-src-tempest-lvm-lio-barbican [5],and > >> >> os-brick-src-tempest-nfs [6]. The difference between os-brick > >> >> master (at the time the tests were run) and stable/wallaby since > >> >> the 4.3.0 tag is as follows: > >> >> > >> >> master: > >> >> d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg > >> change > >> >> (Gorka Eguileor) > >> >> 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to > >> >> modifying offset" (Zuul) > >> >> 28545c7 4 months ago RBD: catch read exceptions prior to > >> modifying > >> >> offset (Jon Bernard) > >> >> 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" > >> >> (Zuul) > >> >> 7cfdb76 6 weeks ago Dropping explicit unicode literal > >> >> (tushargite96) > >> >> 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack > >> Release > >> >> Bot) > >> >> ab57392 3 weeks ago Update master for stable/wallaby (OpenStack > >> >> Release Bot) > >> >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > >> >> connection information compatibility fix" (Zuul) > >> >> > >> >> stable/wallaby: > >> >> f86944b 3 days ago Add release note prelude for os-brick 4.4.0 > >> >> (Brian Rosmaita) > >> >> c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg > >> change > >> >> (Gorka Eguileor) > >> >> 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for > >> stable/wallaby > >> >> (OpenStack Release Bot) > >> >> f3f93dc 3 weeks ago Update .gitreview for stable/wallaby > >> >> (OpenStack Release Bot) > >> >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver > >> >> connection information compatibility fix" (Zuul) > >> >> > >> >> This gives us very high confidence that the results of the tests > >> run > >> >> against master also apply to stable/wallaby at f86944b. > >> >> > >> >> Thank you for considering this request. > >> >> > >> >> (I've included Nova here because the bug occurs when the > >> >> configuration option that enables multipath connections on a > >> >> compute is changed while volumes are attached, so if this RFE is > >> >> approved, nova might want to raise the minimum version of > >> os-brick > >> >> in wallaby to 4.4.0.) > >> >> > >> > > >> > Thanks for the heads up. After the new os-brick version is > >> released I > >> > will prepare a version bump patch in nova on master and > >> > stable/wallaby. This also means that nova will release an RC2. 
> >> > >> I've proposed the nova patch on master to bump min os-brick to > >> 4.3.1 in > >> nova[1] > >> > >> [1] https://review.opendev.org/c/openstack/nova/+/783674 > >> > >> > > >> > Cheers, > >> > gibi > >> > > >> >> > >> >> [0] https://launchpad.net/bugs/1921381 > >> >> [1] https://review.opendev.org/c/openstack/os-brick/+/782992 > >> >> [2] https://review.opendev.org/c/openstack/os-brick/+/783207 > >> >> [3] https://review.opendev.org/c/openstack/releases/+/783641 > >> >> [4] > >> >> > >> > https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb > >> >> [5] > >> >> > >> > https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 > >> >> [6] > >> >> > >> > https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa > >> > > >> > > >> > > >> > > >> > >> > >> > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar at redhat.com Thu Apr 1 08:23:46 2021 From: chkumar at redhat.com (Chandan Kumar) Date: Thu, 1 Apr 2021 13:53:46 +0530 Subject: [tripleo][ci] jobs in retry_limit or skipped In-Reply-To: References: Message-ID: On Thu, Apr 1, 2021 at 7:02 AM Wesley Hayutin wrote: > > Greetings, > > Just FYI.. I believe we hit a bump in the road in upstream infra ( not sure yet ). It appears to be global and not isolated to tripleo or centos based jobs. > > I have a tripleo bug to track it. > https://bugs.launchpad.net/tripleo/+bug/1922148 > > See #opendev for details, it looks like infra is very busy working and fixing the issues atm. 
> > http://eavesdrop.openstack.org/irclogs/%23opendev/%23opendev.2021-03-31.log.html#t2021-03-31T10:34:51 > http://eavesdrop.openstack.org/irclogs/%23opendev/latest.log.html > Zuul got restarted, jobs have started working fine now. if there is no job running against the patches, please recheck your patches slowly as it might flood the gates. Thanks, Chandan Kumar From oliver.wenz at dhbw-mannheim.de Thu Apr 1 09:38:48 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Thu, 1 Apr 2021 11:38:48 +0200 Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <5b1439a3-06f5-633c-cce3-08ae62a8ddc3@dhbw-mannheim.de> > So according to the issue, you get 503 while trying to reach > 10.0.3.212:6002/os-objects, which is swift_account_port. > Are there any logs specificly for swift-account? > > Also I guess some adjustments are required for swift as well for this > mechanism to work. > > Eventually I believe the original issue you saw might be related to this > doc: > https://docs.openstack.org/keystone/latest/admin/manage-services.html#configuring-service-tokens Hi Dmitriy, I tried setting 'service_token_roles_required = false' in glance-api.conf but the error is still there. I also checked the account server and I'm seeing lots of 404's: Apr 01 09:31:33 bc1bl12 account-server[694822]: 10.0.3.212 - - [01/Apr/2021:09:31:33 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" "tx3794c20a6ef5476f9fcf6-00606592f5" "proxy-server 13882" 0.0007 "-" 694822 - Apr 01 09:32:03 bc1bl12 account-server[694816]: 10.0.3.212 - - [01/Apr/2021:09:32:03 +0000] "HEAD /os-objects/22/.misplaced_objects" 404 - "HEAD http://localhost/v1/.misplaced_objects?format=json" "tx5491e866bc6447d8917f9-0060659313" "proxy-server 13882" 0.0008 "-" 694816 - Apr 01 09:32:03 bc1bl12 account-server[694814]: 10.0.3.212 - - [01/Apr/2021:09:32:03 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" "txc98a949e82c940d19656b-0060659313" "proxy-server 13882" 0.0007 "-" 694814 - Apr 01 09:32:33 bc1bl12 account-server[694817]: 10.0.3.212 - - [01/Apr/2021:09:32:33 +0000] "HEAD /os-objects/22/.misplaced_objects" 404 - "HEAD http://localhost/v1/.misplaced_objects?format=json" "txd9ec46414b8e4a98a2b94-0060659331" "proxy-server 13882" 0.0008 "-" 694817 - Apr 01 09:32:33 bc1bl12 account-server[694817]: 10.0.3.212 - - [01/Apr/2021:09:32:33 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" "tx972d968e71e94eb683558-0060659331" "proxy-server 13882" 0.0011 "-" 694817 - Apr 01 09:32:43 bc1bl12 account-server[694823]: 10.0.3.212 - - [01/Apr/2021:09:32:43 +0000] "HEAD /os-objects/220/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx4d02362089c4497982940-006065933b" "proxy-server 14171" 0.0008 "-" 694823 - Apr 01 09:33:03 bc1bl12 account-server[694822]: 10.0.3.212 - - [01/Apr/2021:09:33:03 +0000] "HEAD /os-objects/22/.misplaced_objects" 404 - "HEAD http://localhost/v1/.misplaced_objects?format=json" "tx440963ca3e7948ff872ab-006065934f" "proxy-server 13882" 0.0007 "-" 694822 - Apr 01 09:33:03 bc1bl12 account-server[694823]: 10.0.3.212 - - [01/Apr/2021:09:33:03 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" 
"tx547157ac468e476395a0e-006065934f" "proxy-server 13882" 0.0006 "-" 694823 - Apr 01 09:33:33 bc1bl12 account-server[694814]: 10.0.3.212 - - [01/Apr/2021:09:33:33 +0000] "HEAD /os-objects/22/.misplaced_objects" 404 - "HEAD http://localhost/v1/.misplaced_objects?format=json" "txac5c0207e90a4606a7758-006065936d" "proxy-server 13882" 0.0007 "-" 694814 - Apr 01 09:33:33 bc1bl12 account-server[694820]: 10.0.3.212 - - [01/Apr/2021:09:33:33 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" "tx0d3afe2f421647f2a117e-006065936d" "proxy-server 13882" 0.0006 "-" 694820 - Kind regards, Oliver From vuk.gojnic at gmail.com Thu Apr 1 10:11:05 2021 From: vuk.gojnic at gmail.com (Vuk Gojnic) Date: Thu, 1 Apr 2021 12:11:05 +0200 Subject: [ironic] IPA image does not want to boot with UEFI Message-ID: Hello everybody, I am using Ironic standalone to provision the HPE Gen10+ node via iLO driver. Ironic version is 16.0.1. Server is configured with UEFI boot mode. Everything on Ironic side works fine. It creates ISO image, powers the server on and configures it to boot from it. Here is the what /var/log/ironic/ironic-conductor.log says: 2021-03-31 17:46:25.541 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "cleaning" from state "manageable"; target provision state is "available" 2021-03-31 17:46:32.066 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power off' is completed in 4 seconds. 2021-03-31 17:46:32.088 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power off by power off. 2021-03-31 17:46:34.510 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 pending boot mode is uefi. 2021-03-31 17:46:37.248 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Set the node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 to boot from URL https://10.23.137.234/tmp-images/ilo/boot-ed25569f-c107-4fe0-95cd-74fcad9ab3f0.iso?filename=tmpqze8ogiw.iso successfully. 2021-03-31 17:46:48.367 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power on' is completed in 8 seconds. 2021-03-31 17:46:48.388 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power on by rebooting. 2021-03-31 17:46:48.404 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "clean wait" from state "cleaning"; target provision state is "available" The Grub2 starts and after I select the option “boot_partition", it starts booting and immediately freezes showing just black screen with static red underscore character. I have tried with pre-built IPA images (see below) as well as with custom IPA images made with Ubuntu 18.04 and 20.04 (built using ironic-python-agent-builder) but it is all the same. Does somebody have idea what is the problem with IPA and UEFI in this particular scenario? 
Output of “openstack baremetal node show” command: allocation_uuid: null automated_clean: null bios_interface: no-bios boot_interface: ilo-uefi-https chassis_uuid: null clean_step: {} conductor: 10.23.137.234 conductor_group: '' console_enabled: false console_interface: no-console created_at: '2021-03-21T13:54:25+00:00' deploy_interface: direct deploy_step: {} description: null driver: ilo5 driver_info: ilo_address: 10.23.137.137 ilo_bootloader: https://ironic-images/Images/esp.img ilo_deploy_kernel: https://ironic-images/Images/ipa-centos8-stable-victoria.kernel ilo_deploy_ramdisk: https://ironic-images/Images/ipa-centos8-stable-victoria.initramfs ilo_password: '******' ilo_username: Administrator snmp_auth_priv_password: '******' snmp_auth_prot_password: '******' snmp_auth_user: iloinspect driver_internal_info: agent_continue_if_ata_erase_failed: false agent_enable_ata_secure_erase: true agent_erase_devices_iterations: 1 agent_erase_devices_zeroize: true agent_erase_skip_read_only: false agent_secret_token: '******' agent_secret_token_pregenerated: true clean_steps: null disk_erasure_concurrency: 1 last_power_state_change: '2021-03-31T17:46:37.894667' extra: {} fault: clean failure inspect_interface: ilo inspection_finished_at: '2021-03-21T13:57:33+00:00' inspection_started_at: null instance_info: deploy_boot_mode: uefi instance_uuid: null last_error: null lessee: null maintenance: true maintenance_reason: management_interface: ilo5 name: null network_data: {} network_interface: noop owner: null power_interface: ilo power_state: power on properties: cpu_arch: x86 cpus: 64 local_gb: 2979 memory_mb: 262144 protected: false protected_reason: null provision_state: clean wait provision_updated_at: '2021-03-31T17:46:48+00:00' raid_config: {} raid_interface: no-raid rescue_interface: no-rescue reservation: null resource_class: null retired: false retired_reason: null storage_interface: noop target_power_state: null target_provision_state: available target_raid_config: {} traits: [] updated_at: '2021-03-31T17:46:48+00:00' uuid: ed25569f-c107-4fe0-95cd-74fcad9ab3f0 vendor_interface: no-vendor Many thanks! Vuk Gojnic Deutsche Telekom Technik GmbH Services & Plattforms (T-SP) Tribe Data Center Infrastructure (T-DCI) Super Squad Cloud Platforms Lifecycle (SSQ-CP) Vuk Gojnic Kubernetes Engine Squad Lead -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Apr 1 11:24:10 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 13:24:10 +0200 Subject: [oslo][release] devstack-plugin-(kafka|amqp1) retirement Message-ID: Hello Osloers, Our devstack plugins (kafka and amqp1) didn't show a great amount of activity since ussuri, does it still make sense to maintain them? The latest available SHAs for the both projects comes from Victoria (merged in this period). Can we retire them or simply retire them from the coordinated releases? We (the release team) would appreciate some feedback about this point. Let's open the debat. 
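For context, a job consumes one of these plugins through devstack's standard enable_plugin hook, e.g. in local.conf (branch argument optional; shown here only as a usage sketch):

  [[local|localrc]]
  enable_plugin devstack-plugin-kafka https://opendev.org/openstack/devstack-plugin-kafka

so a full retirement would affect whichever jobs still set such a line.
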
-- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From destienne.maxime at gmail.com Thu Apr 1 12:44:21 2021 From: destienne.maxime at gmail.com (Maxime d'Estienne) Date: Thu, 1 Apr 2021 14:44:21 +0200 Subject: [neutron][nova] Port binding fails when creating an instance Message-ID: Hello, I spent a lot of time troubleshooting my issue, which I described here : https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding To summarize, when I want to create an instance, binding fails on compute node, the dhcp agent seems to give an ip to the VM but I have an error. I don't know where to dig, besides what I have done. Thanks a lot for your help ! -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Thu Apr 1 13:34:03 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 1 Apr 2021 13:34:03 +0000 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Message-ID: <91f14b7ab30747fcb4e32c40c4559bd2@ncwmexgp009.CORP.CHARTERCOM.com> I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. This crash dump appears on the controllers that stopped in sympathy: https://paste.ubuntu.com/p/ZDgFgKtQTB/ This appears in the RMQ log: https://paste.ubuntu.com/p/5D2Qjv3H8c/ -----Original Message----- From: Braden, Albert Sent: Wednesday, March 31, 2021 8:31 AM To: openstack-discuss at lists.openstack.org Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Centos7: {rabbit,"RabbitMQ","3.7.24"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, Centos8: {rabbit,"RabbitMQ","3.7.28"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. 
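One avenue worth checking here — a sketch only, assuming kolla's RabbitMQ runs with pause_minority partition handling, which should be confirmed — is whether the old control-0 disc node was ever forgotten by the cluster. With four nodes registered but only three running, stopping one more leaves two out of four, which is no longer a strict majority, so the survivors would pause themselves:

  # inside the rabbitmq container on a surviving controller
  rabbitmqctl cluster_status                    # look for the stale rabbit@chrnc-void-testupgrade-control-0 disc node
  rabbitmqctl environment | grep -i partition   # confirm cluster_partition_handling
  rabbitmqctl forget_cluster_node rabbit@chrnc-void-testupgrade-control-0
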
The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: https://paste.ubuntu.com/p/h9HWdfwmrK/ and the crash dump that appears on control2: crash dump log: https://paste.ubuntu.com/p/MpZ8SwTJ2T/ First 1500 lines of the dump: https://paste.ubuntu.com/p/xkCyp2B8j8/ If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. -----Original Message----- From: Mark Goddard Sent: Wednesday, March 31, 2021 4:14 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > … > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > But my hypervisors are down: > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > 172.16.2.31 compute0 > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > In the RMQ logs I see this every 10 seconds: > > > > 172.16.1.132 control2 > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > client unexpectedly closed TCP connection > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? Hi Albert, Could you share the versions of RabbitMQ and erlang in both versions of the container? When initially testing this setup, I think we had 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on sufficiently to become incompatible? Mark > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From stephenfin at redhat.com Thu Apr 1 13:49:23 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 01 Apr 2021 14:49:23 +0100 Subject: How to customize the xml used in libvirt from GUI/opestack command line? In-Reply-To: References: Message-ID: <8318224c1ad723fc092e694a8959acb715ecb0cc.camel@redhat.com> On Tue, 2021-03-30 at 18:30 +0800, Evan Zhao wrote: > Hi there, > > I googled this question and found two major answers: > 1. ssh to the compute node and use `virsh dumpxml` and `virsh > undefine/define`, etc. > 2. edit nova/virt/libvrit/config.py directly. 
> > However, it's trivial to ssh to each node and do the modification, and > I prefer not to touch the nova source code, is there any better ways > to achieve this? > > I expect to edit the namespace of a certain element and append an > additional element to the xml file. > > Any information will be appreciated. We purposefully don't allow this as it's not feasible to support. If you're using a vendor to provide your OpenStack packages, there's a good chance they won't allow you to do this either. Any modifications to the libvirt XML won't be persisted when you use any operation that results in the XML being rebuilt (cold migration, shelving, hard reboot, ...), while any chances to 'config.py' mean you have a fork on your hands that you're going to have to maintain for the life of the deployment. Neither are great situations to be in. You haven't described the _specific_ problem you're trying to resolve here. There's a possibility that nova may already have a feature to solve this problem. If not, there's a chance that your problem is a problem that other users are facing and therefore could warrant a new feature. If you raise your specific feature request here or IRC (#openstack-nova), we'll be more than happy to provide guidance. Cheers, Stephen From kgiusti at gmail.com Thu Apr 1 13:55:42 2021 From: kgiusti at gmail.com (Ken Giusti) Date: Thu, 1 Apr 2021 09:55:42 -0400 Subject: [oslo][release] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: Message-ID: Hi Herve, On Thu, Apr 1, 2021 at 7:25 AM Herve Beraud wrote: > Hello Osloers, > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > activity since ussuri, does it still make sense to maintain them? > > The latest available SHAs for the both projects comes from Victoria > (merged in this period). > > Can we retire them or simply retire them from the coordinated releases? > > We (the release team) would appreciate some feedback about this point. > > Let's open the debat. > The only consumer of these plugins that I'm aware of is oslo.messaging [0]. They are needed in order to run devstack-tempest testing against the non-rabbitmq backends. Perhaps they should be integrated into the oslo.messaging project itself, if possible? [0] https://codesearch.opendev.org/?q=devstack-plugin-amqp1&i=nope&files=&excludeFiles=&repos= > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Arkady.Kanevsky at dell.com Thu Apr 1 14:15:21 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 1 Apr 2021 14:15:21 +0000 Subject: [Interop][Refstack] this Friday meeting Message-ID: Team, This Friday is Good Friday and some people have a day off. Should we cancel this week meeting? Please, respond so we can see if we will have quorum. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Apr 1 14:19:48 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 1 Apr 2021 07:19:48 -0700 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: Greetings, Two questions: 1) Are the ESP image contents signed, or are they built using one of the grub commands? 2) Is the machine set to enforce secure boot at this time? On Thu, Apr 1, 2021 at 3:14 AM Vuk Gojnic wrote: > > Hello everybody, > > > > I am using Ironic standalone to provision the HPE Gen10+ node via iLO driver. Ironic version is 16.0.1. Server is configured with UEFI boot mode. > > > > Everything on Ironic side works fine. It creates ISO image, powers the server on and configures it to boot from it. > > > > Here is the what /var/log/ironic/ironic-conductor.log says: > > > > 2021-03-31 17:46:25.541 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "cleaning" from state "manageable"; target provision state is "available" > > 2021-03-31 17:46:32.066 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power off' is completed in 4 seconds. > > 2021-03-31 17:46:32.088 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power off by power off. > > 2021-03-31 17:46:34.510 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 pending boot mode is uefi. > > 2021-03-31 17:46:37.248 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Set the node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 to boot from URL https://10.23.137.234/tmp-images/ilo/boot-ed25569f-c107-4fe0-95cd-74fcad9ab3f0.iso?filename=tmpqze8ogiw.iso successfully. > > 2021-03-31 17:46:48.367 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power on' is completed in 8 seconds. > > 2021-03-31 17:46:48.388 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power on by rebooting. 
> > 2021-03-31 17:46:48.404 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "clean wait" from state "cleaning"; target provision state is "available" > > > > The Grub2 starts and after I select the option “boot_partition", it starts booting and immediately freezes showing just black screen with static red underscore character. > > > > I have tried with pre-built IPA images (see below) as well as with custom IPA images made with Ubuntu 18.04 and 20.04 (built using ironic-python-agent-builder) but it is all the same. > > > > Does somebody have idea what is the problem with IPA and UEFI in this particular scenario? > > > > Output of “openstack baremetal node show” command: > > > > allocation_uuid: null > > automated_clean: null > > bios_interface: no-bios > > boot_interface: ilo-uefi-https > > chassis_uuid: null > > clean_step: {} > > conductor: 10.23.137.234 > > conductor_group: '' > > console_enabled: false > > console_interface: no-console > > created_at: '2021-03-21T13:54:25+00:00' > > deploy_interface: direct > > deploy_step: {} > > description: null > > driver: ilo5 > > driver_info: > > ilo_address: 10.23.137.137 > > ilo_bootloader: https://ironic-images/Images/esp.img > > ilo_deploy_kernel: https://ironic-images/Images/ipa-centos8-stable-victoria.kernel > > ilo_deploy_ramdisk: https://ironic-images/Images/ipa-centos8-stable-victoria.initramfs > > ilo_password: '******' > > ilo_username: Administrator > > snmp_auth_priv_password: '******' > > snmp_auth_prot_password: '******' > > snmp_auth_user: iloinspect > > driver_internal_info: > > agent_continue_if_ata_erase_failed: false > > agent_enable_ata_secure_erase: true > > agent_erase_devices_iterations: 1 > > agent_erase_devices_zeroize: true > > agent_erase_skip_read_only: false > > agent_secret_token: '******' > > agent_secret_token_pregenerated: true > > clean_steps: null > > disk_erasure_concurrency: 1 > > last_power_state_change: '2021-03-31T17:46:37.894667' > > extra: {} > > fault: clean failure > > inspect_interface: ilo > > inspection_finished_at: '2021-03-21T13:57:33+00:00' > > inspection_started_at: null > > instance_info: > > deploy_boot_mode: uefi > > instance_uuid: null > > last_error: null > > lessee: null > > maintenance: true > > maintenance_reason: > > management_interface: ilo5 > > name: null > > network_data: {} > > network_interface: noop > > owner: null > > power_interface: ilo > > power_state: power on > > properties: > > cpu_arch: x86 > > cpus: 64 > > local_gb: 2979 > > memory_mb: 262144 > > protected: false > > protected_reason: null > > provision_state: clean wait > > provision_updated_at: '2021-03-31T17:46:48+00:00' > > raid_config: {} > > raid_interface: no-raid > > rescue_interface: no-rescue > > reservation: null > > resource_class: null > > retired: false > > retired_reason: null > > storage_interface: noop > > target_power_state: null > > target_provision_state: available > > target_raid_config: {} > > traits: [] > > updated_at: '2021-03-31T17:46:48+00:00' > > uuid: ed25569f-c107-4fe0-95cd-74fcad9ab3f0 > > vendor_interface: no-vendor > > > > > > Many thanks! 
> > > > Vuk Gojnic > > > > Deutsche Telekom Technik GmbH > > Services & Plattforms (T-SP) > > Tribe Data Center Infrastructure (T-DCI) > > Super Squad Cloud Platforms Lifecycle (SSQ-CP) > > > > Vuk Gojnic > > Kubernetes Engine Squad Lead From gmann at ghanshyammann.com Thu Apr 1 14:24:14 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 01 Apr 2021 09:24:14 -0500 Subject: [Interop][Refstack] this Friday meeting In-Reply-To: References: Message-ID: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- > > Team, > This Friday is Good Friday and some people have a day off. > Should we cancel this week meeting? > Please, respond so we can see if we will have quorum. Thanks Arkady, I will be off from work and would not be able to join. -gmann > Thanks, > Arkady > > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 > > From openstack at nemebean.com Thu Apr 1 15:02:43 2021 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 1 Apr 2021 10:02:43 -0500 Subject: [oslo][release] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: Message-ID: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> On 4/1/21 6:24 AM, Herve Beraud wrote: > Hello Osloers, > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > activity since ussuri, does it still make sense to maintain them? > > The latest available SHAs for the both projects comes from Victoria > (merged in this period). > > Can we retire them or simply retire them from the coordinated releases? These have never been released and are no longer branched. What is their involvement in the coordinated release at this point? > > We (the release team) would appreciate some feedback about this point. > > Let's open the debat. > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From jimmy at openstack.org Thu Apr 1 15:04:03 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 1 Apr 2021 10:04:03 -0500 Subject: [Interop][Refstack] this Friday meeting In-Reply-To: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> Message-ID: <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> I forgot this is a holiday. Same on my side. 
Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > From fungi at yuggoth.org Thu Apr 1 15:04:53 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Apr 2021 15:04:53 +0000 Subject: [tripleo][ci][infra] jobs in retry_limit or skipped In-Reply-To: References: Message-ID: <20210401150453.pkdhllydqltpgnhr@yuggoth.org> On 2021-04-01 13:53:46 +0530 (+0530), Chandan Kumar wrote: > On Thu, Apr 1, 2021 at 7:02 AM Wesley Hayutin wrote: > > > > Greetings, > > > > Just FYI.. I believe we hit a bump in the road in upstream infra ( not sure yet ). It appears to be global and not isolated to tripleo or centos based jobs. > > > > I have a tripleo bug to track it. > > https://bugs.launchpad.net/tripleo/+bug/1922148 > > > > See #opendev for details, it looks like infra is very busy working and fixing the issues atm. > > > > http://eavesdrop.openstack.org/irclogs/%23opendev/%23opendev.2021-03-31.log.html#t2021-03-31T10:34:51 > > http://eavesdrop.openstack.org/irclogs/%23opendev/latest.log.html > > > > Zuul got restarted, jobs have started working fine now. > if there is no job running against the patches, please recheck your > patches slowly as it might flood the gates. It's a complex situation with a few problems intermingled. First, the tripleo-ansible-centos-8-molecule-tripleo-modules job seemed to have some bug of its own causing frequent disconnects of the job node leading to retries. Also some recent change in Zuul seems to have introduced a semi-slow memory leak which, when we run into memory pressure on the scheduler, causes Zookeeper disconnects which trigger mass build retries. Further, because the source of the memory leak has been really tough to nail down, live debugging directly in the running process has been applied, and this slows the scheduler by orders of magnitude when engaged, triggering similar Zookeeper disconnects as well. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From hberaud at redhat.com Thu Apr 1 15:28:25 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 17:28:25 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> Message-ID: Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a écrit : > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > Hello Osloers, > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > activity since ussuri, does it still make sense to maintain them? > > > > The latest available SHAs for the both projects comes from Victoria > > (merged in this period). > > > > Can we retire them or simply retire them from the coordinated releases? > > These have never been released and are no longer branched. What is their > involvement in the coordinated release at this point? 
> Yes these deliverables are tagless so no released at all, however, they are coordinated so they are branched during each series. Though, as said Ben, those deliverables haven't been branched during the previous series (victoria), we suppose that they have been simply forgotten inadvertently, I proposed a patch to fix that point [1]. However we don't have so many things to branch for the current series [2], no new commits have been merged during the last 7 months (Victoria at this period), so, the question is, do we have reasons to keep them under the coordinated releases umbrella if nothing new happens in this area. I proposed a patch for Wallaby too, but I'm not convinced that's the right solution. Maybe Ken is right and maybe that's time to merge these plugins with oslo.messaging, however, I don't know if it's feasible from a devstack point of view. Adding the QA team to this thread topic to discuss that last point with them. Thanks for your replies. [1] https://review.opendev.org/c/openstack/releases/+/784371 [2] https://review.opendev.org/c/openstack/releases/+/784376 > > > > We (the release team) would appreciate some feedback about this point. > > > > Let's open the debat. > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Thu Apr 1 15:34:19 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 1 Apr 2021 15:34:19 +0000 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Message-ID: Sorry that was a typo. 
Stopping RMQ during the removal of the *second* controller is what causes the problem. Is there a way to tell Centos 8 Train to use RMQ 3.7.24 instead of 3.7.28? -----Original Message----- From: Braden, Albert Sent: Thursday, April 1, 2021 9:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. This crash dump appears on the controllers that stopped in sympathy: https://paste.ubuntu.com/p/ZDgFgKtQTB/ This appears in the RMQ log: https://paste.ubuntu.com/p/5D2Qjv3H8c/ -----Original Message----- From: Braden, Albert Sent: Wednesday, March 31, 2021 8:31 AM To: openstack-discuss at lists.openstack.org Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Centos7: {rabbit,"RabbitMQ","3.7.24"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, Centos8: {rabbit,"RabbitMQ","3.7.28"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: https://paste.ubuntu.com/p/h9HWdfwmrK/ and the crash dump that appears on control2: crash dump log: https://paste.ubuntu.com/p/MpZ8SwTJ2T/ First 1500 lines of the dump: https://paste.ubuntu.com/p/xkCyp2B8j8/ If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. -----Original Message----- From: Mark Goddard Sent: Wednesday, March 31, 2021 4:14 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > … > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > But my hypervisors are down: > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > 172.16.2.31 compute0 > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > In the RMQ logs I see this every 10 seconds: > > > > 172.16.1.132 control2 > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > client unexpectedly closed TCP connection > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? Hi Albert, Could you share the versions of RabbitMQ and erlang in both versions of the container? When initially testing this setup, I think we had 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on sufficiently to become incompatible? Mark > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From ltoscano at redhat.com Thu Apr 1 15:42:26 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 01 Apr 2021 17:42:26 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> Message-ID: <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: > Le jeu. 1 avr. 
2021 à 17:02, Ben Nemec a écrit : > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > > Hello Osloers, > > > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > > activity since ussuri, does it still make sense to maintain them? > > > > > > The latest available SHAs for the both projects comes from Victoria > > > (merged in this period). > > > > > > Can we retire them or simply retire them from the coordinated releases? > > > > These have never been released and are no longer branched. What is their > > involvement in the coordinated release at this point? > > Yes these deliverables are tagless so no released at all, however, they are > coordinated so they are branched during each series. > > Though, as said Ben, those deliverables haven't been branched during the > previous series (victoria), we suppose that they have been simply forgotten > inadvertently, > > I proposed a patch to fix that point [1]. Other devstack plugins are branchless (devstack-plugin-ceph, devstack-plugin- nfs), couldn't those be branchless too? -- Luigi From hberaud at redhat.com Thu Apr 1 15:44:21 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 17:44:21 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> Message-ID: >From my point of view I would argue that yes, however, I don't have the big picture. Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano a écrit : > On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: > > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a écrit > : > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > > > Hello Osloers, > > > > > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > > > activity since ussuri, does it still make sense to maintain them? > > > > > > > > The latest available SHAs for the both projects comes from Victoria > > > > (merged in this period). > > > > > > > > Can we retire them or simply retire them from the coordinated > releases? > > > > > > These have never been released and are no longer branched. What is > their > > > involvement in the coordinated release at this point? > > > > Yes these deliverables are tagless so no released at all, however, they > are > > coordinated so they are branched during each series. > > > > Though, as said Ben, those deliverables haven't been branched during the > > previous series (victoria), we suppose that they have been simply > forgotten > > inadvertently, > > > > I proposed a patch to fix that point [1]. > > Other devstack plugins are branchless (devstack-plugin-ceph, > devstack-plugin- > nfs), couldn't those be branchless too? 
> > > -- > Luigi > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Apr 1 16:05:29 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 18:05:29 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> Message-ID: Well, as suggested Luigi let's drop these deliverables (within Wallaby and for the next series). https://review.opendev.org/c/openstack/releases/+/784376 I kept the Victoria branching but that will be the last one. https://review.opendev.org/c/openstack/releases/+/784371 Let me know what you think Le jeu. 1 avr. 2021 à 17:44, Herve Beraud a écrit : > From my point of view I would argue that yes, however, I don't have the > big picture. > > Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano a écrit : > >> On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: >> > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a >> écrit : >> > > On 4/1/21 6:24 AM, Herve Beraud wrote: >> > > > Hello Osloers, >> > > > >> > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of >> > > > activity since ussuri, does it still make sense to maintain them? >> > > > >> > > > The latest available SHAs for the both projects comes from Victoria >> > > > (merged in this period). >> > > > >> > > > Can we retire them or simply retire them from the coordinated >> releases? >> > > >> > > These have never been released and are no longer branched. What is >> their >> > > involvement in the coordinated release at this point? >> > >> > Yes these deliverables are tagless so no released at all, however, they >> are >> > coordinated so they are branched during each series. >> > >> > Though, as said Ben, those deliverables haven't been branched during the >> > previous series (victoria), we suppose that they have been simply >> forgotten >> > inadvertently, >> > >> > I proposed a patch to fix that point [1]. >> >> Other devstack plugins are branchless (devstack-plugin-ceph, >> devstack-plugin- >> nfs), couldn't those be branchless too? 
>> >> >> -- >> Luigi >> >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From Vuk.Gojnic at telekom.de Thu Apr 1 10:02:19 2021 From: Vuk.Gojnic at telekom.de (Vuk.Gojnic at telekom.de) Date: Thu, 1 Apr 2021 10:02:19 +0000 Subject: [ironic] IPA does not want boot with UEFI Message-ID: Hello everybody, I am using Ironic standalone to provision the HPE Gen10+ node via iLO driver. Ironic version is 16.0.1. Server is configured with UEFI boot mode. Everything on Ironic side works fine. It creates ISO image, powers the server on and configures it to boot from it. Here is the what /var/log/ironic/ironic-conductor.log says: 2021-03-31 17:46:25.541 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "cleaning" from state "manageable"; target provision state is "available" 2021-03-31 17:46:32.066 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power off' is completed in 4 seconds. 2021-03-31 17:46:32.088 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power off by power off. 2021-03-31 17:46:34.510 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 pending boot mode is uefi. 
2021-03-31 17:46:37.248 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Set the node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 to boot from URL https://10.23.137.234/tmp-images/ilo/boot-ed25569f-c107-4fe0-95cd-74fcad9ab3f0.iso?filename=tmpqze8ogiw.iso successfully. 2021-03-31 17:46:48.367 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power on' is completed in 8 seconds. 2021-03-31 17:46:48.388 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power on by rebooting. 2021-03-31 17:46:48.404 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "clean wait" from state "cleaning"; target provision state is "available" The Grub2 starts and after I select the option "boot_partition", it starts booting and immediately freezes with following screen (just red static underscore): [cid:image002.jpg at 01D726EE.A920A5C0] I have tried with pre-built IPA images (see below) as well as with custom IPA images made with Ubuntu 18.04 and 20.04 (built using ironic-python-agent-builder) but it is all the same. Does somebody have idea what is the problem with IPA and UEFI in this particular scenario? Output of "openstack baremetal node show" command: allocation_uuid: null automated_clean: null bios_interface: no-bios boot_interface: ilo-uefi-https chassis_uuid: null clean_step: {} conductor: 10.23.137.234 conductor_group: '' console_enabled: false console_interface: no-console created_at: '2021-03-21T13:54:25+00:00' deploy_interface: direct deploy_step: {} description: null driver: ilo5 driver_info: ilo_address: 10.23.137.137 ilo_bootloader: https://ironic-images/Images/esp.img ilo_deploy_kernel: https://ironic-images/Images/ipa-centos8-stable-victoria.kernel ilo_deploy_ramdisk: https://ironic-images/Images/ipa-centos8-stable-victoria.initramfs ilo_password: '******' ilo_username: Administrator snmp_auth_priv_password: '******' snmp_auth_prot_password: '******' snmp_auth_user: iloinspect driver_internal_info: agent_continue_if_ata_erase_failed: false agent_enable_ata_secure_erase: true agent_erase_devices_iterations: 1 agent_erase_devices_zeroize: true agent_erase_skip_read_only: false agent_secret_token: '******' agent_secret_token_pregenerated: true clean_steps: null disk_erasure_concurrency: 1 last_power_state_change: '2021-03-31T17:46:37.894667' extra: {} fault: clean failure inspect_interface: ilo inspection_finished_at: '2021-03-21T13:57:33+00:00' inspection_started_at: null instance_info: deploy_boot_mode: uefi instance_uuid: null last_error: null lessee: null maintenance: true maintenance_reason: management_interface: ilo5 name: null network_data: {} network_interface: noop owner: null power_interface: ilo power_state: power on properties: cpu_arch: x86 cpus: 64 local_gb: 2979 memory_mb: 262144 protected: false protected_reason: null provision_state: clean wait provision_updated_at: '2021-03-31T17:46:48+00:00' raid_config: {} raid_interface: no-raid rescue_interface: no-rescue reservation: null resource_class: null retired: false retired_reason: null storage_interface: noop target_power_state: null target_provision_state: available target_raid_config: {} traits: [] updated_at: '2021-03-31T17:46:48+00:00' uuid: 
ed25569f-c107-4fe0-95cd-74fcad9ab3f0 vendor_interface: no-vendor Many thanks! Vuk Gojnic Deutsche Telekom Technik GmbH Services & Plattforms (T-SP) Tribe Data Center Infrastructure (T-DCI) Super Squad Cloud Platforms Lifecycle (SSQ-CP) Vuk Gojnic Kubernetes Engine Squad Lead -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 21619 bytes Desc: image002.jpg URL: From gmann at ghanshyammann.com Thu Apr 1 16:23:58 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 01 Apr 2021 11:23:58 -0500 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> Message-ID: <1788e3fa253.bba36eee1422244.3611802948159890528@ghanshyammann.com> ---- On Thu, 01 Apr 2021 10:28:25 -0500 Herve Beraud wrote ---- > > > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a écrit : > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > Hello Osloers, > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > activity since ussuri, does it still make sense to maintain them? > > > > The latest available SHAs for the both projects comes from Victoria > > (merged in this period). > > > > Can we retire them or simply retire them from the coordinated releases? > > These have never been released and are no longer branched. What is their > involvement in the coordinated release at this point? > > Yes these deliverables are tagless so no released at all, however, they are coordinated so they are branched during each series. > Though, as said Ben, those deliverables haven't been branched during the previous series (victoria), we suppose that they have been simply forgotten inadvertently, > I proposed a patch to fix that point [1]. > However we don't have so many things to branch for the current series [2], no new commits have been merged during the last 7 months (Victoria at this period), so, the question is, do we have reasons to keep them under the coordinated releases umbrella if nothing new happens in this area. > I proposed a patch for Wallaby too, but I'm not convinced that's the right solution. > Maybe Ken is right and maybe that's time to merge these plugins with oslo.messaging, however, I don't know if it's feasible from a devstack point of view. I do not think there is any difference in devstack plugin's location. Those can be part of the project repo or a separate repo. Most of the devstack plugins are part of the project repo. A few devstack plugins in the QA project are in separate repos because they are not related to a specific project. So I would say move them to the related project repo, like oslo.messaging, or retire them if no one needs them, which can be decided by the Oslo team, I think. -gmann > Adding the QA team to this thread topic to discuss that last point with them. > Thanks for your replies. > > [1] https://review.opendev.org/c/openstack/releases/+/784371 [2] https://review.opendev.org/c/openstack/releases/+/784376 > > > > > We (the release team) would appreciate some feedback about this point. > > > > Let's open the debat.
> > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From gmann at ghanshyammann.com Thu Apr 1 16:27:09 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 01 Apr 2021 11:27:09 -0500 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> Message-ID: <1788e428ec5.ce2ae7971422373.5079022999856477663@ghanshyammann.com> ---- On Thu, 01 Apr 2021 11:05:29 -0500 Herve Beraud wrote ---- > Well, as suggested Luigi let's drop these deliverables (within Wallaby and for the next series). > > https://review.opendev.org/c/openstack/releases/+/784376 Having them branchless or branched is completely depends on devstack plugins maintainer and the nature of the setting they do. If they are installing/updating the setting for branched service then yes it makes sense to be branched as devstack is branched. If they are very general setting like ceph or so then branchless also work. >From devstack or QA point of view, both ways are fine. -gmann > I kept the Victoria branching but that will be the last one. > https://review.opendev.org/c/openstack/releases/+/784371 > Let me know what you think > > Le jeu. 1 avr. 2021 à 17:44, Herve Beraud a écrit : > From my point of view I would argue that yes, however, I don't have the big picture. > > Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano a écrit : > On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: > > Le jeu. 1 avr. 
2021 à 17:02, Ben Nemec a écrit : > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > > > Hello Osloers, > > > > > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > > > activity since ussuri, does it still make sense to maintain them? > > > > > > > > The latest available SHAs for the both projects comes from Victoria > > > > (merged in this period). > > > > > > > > Can we retire them or simply retire them from the coordinated releases? > > > > > > These have never been released and are no longer branched. What is their > > > involvement in the coordinated release at this point? > > > > Yes these deliverables are tagless so no released at all, however, they are > > coordinated so they are branched during each series. > > > > Though, as said Ben, those deliverables haven't been branched during the > > previous series (victoria), we suppose that they have been simply forgotten > > inadvertently, > > > > I proposed a patch to fix that point [1]. > > Other devstack plugins are branchless (devstack-plugin-ceph, devstack-plugin- > nfs), couldn't those be branchless too? > > > -- > Luigi > > > > > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > > > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From hberaud at redhat.com Thu Apr 1 16:36:38 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 18:36:38 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: <1788e428ec5.ce2ae7971422373.5079022999856477663@ghanshyammann.com> References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> <1788e428ec5.ce2ae7971422373.5079022999856477663@ghanshyammann.com> Message-ID: Le jeu. 1 avr. 
2021 à 18:27, Ghanshyam Mann a écrit : > ---- On Thu, 01 Apr 2021 11:05:29 -0500 Herve Beraud > wrote ---- > > Well, as suggested Luigi let's drop these deliverables (within Wallaby > and for the next series). > > > > https://review.opendev.org/c/openstack/releases/+/784376 > > Having them branchless or branched is completely depends on devstack > plugins maintainer and the nature > of the setting they do. If they are installing/updating the setting for > branched service then yes it makes sense to > be branched as devstack is branched. If they are very general setting like > ceph or so then branchless also > work. > > From devstack or QA point of view, both ways are fine. > Thanks Ghanshyam. I've no idea if they need specific settings but given the activity of these projects since two series they don't seem to be in that plugin zone. I personally think that "branchless" could fit well to them. Let's wait for PTL/maintainers reviews. > -gmann > > > I kept the Victoria branching but that will be the last one. > > https://review.opendev.org/c/openstack/releases/+/784371 > > Let me know what you think > > > > Le jeu. 1 avr. 2021 à 17:44, Herve Beraud a écrit > : > > From my point of view I would argue that yes, however, I don't have the > big picture. > > > > Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano a > écrit : > > On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: > > > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a > écrit : > > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > > > > Hello Osloers, > > > > > > > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount > of > > > > > activity since ussuri, does it still make sense to maintain them? > > > > > > > > > > The latest available SHAs for the both projects comes from > Victoria > > > > > (merged in this period). > > > > > > > > > > Can we retire them or simply retire them from the coordinated > releases? > > > > > > > > These have never been released and are no longer branched. What is > their > > > > involvement in the coordinated release at this point? > > > > > > Yes these deliverables are tagless so no released at all, however, > they are > > > coordinated so they are branched during each series. > > > > > > Though, as said Ben, those deliverables haven't been branched during > the > > > previous series (victoria), we suppose that they have been simply > forgotten > > > inadvertently, > > > > > > I proposed a patch to fix that point [1]. > > > > Other devstack plugins are branchless (devstack-plugin-ceph, > devstack-plugin- > > nfs), couldn't those be branchless too? 
> > > > > > -- > > Luigi > > > > > > > > > > -- > > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > github.com/4383/https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > > > -- > > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > github.com/4383/https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Apr 1 17:15:30 2021 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 1 Apr 2021 12:15:30 -0500 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> <1788e428ec5.ce2ae7971422373.5079022999856477663@ghanshyammann.com> Message-ID: On 4/1/21 11:36 AM, Herve Beraud wrote: > > > Le jeu. 1 avr. 
2021 à 18:27, Ghanshyam Mann > a écrit : > >  ---- On Thu, 01 Apr 2021 11:05:29 -0500 Herve Beraud > > wrote ---- >  > Well, as suggested Luigi let's drop these deliverables (within > Wallaby and for the next series). >  > >  > https://review.opendev.org/c/openstack/releases/+/784376 > > Having them branchless or branched is completely depends on devstack > plugins maintainer and the nature > of the setting they do. If they are installing/updating the setting > for branched service then yes it makes sense to > be branched as devstack is branched. If they are very general > setting like ceph or so then branchless also > work. > > From devstack or QA point of view, both ways are fine. > > > Thanks Ghanshyam. > > I've no idea if they need specific settings but given the activity of > these projects since two series they don't seem to be in that plugin zone. > > I personally think that "branchless" could fit well to them. Let's wait > for PTL/maintainers reviews. I replied on the reviews, but I think branchless is what we intended for the Oslo plugins. We just missed the step of removing the deliverable file. I can't comment on the containers one because that was never ours (AFAIK). > > > -gmann > >  > I kept the Victoria branching but that will be the last one. >  > https://review.opendev.org/c/openstack/releases/+/784371 >  > Let me know what you think >  > >  > Le jeu. 1 avr. 2021 à 17:44, Herve Beraud > a écrit : >  > From my point of view I would argue that yes, however, I don't > have the big picture. >  > >  > Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano > a écrit : >  > On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: >  > > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec > a écrit : >  > > > On 4/1/21 6:24 AM, Herve Beraud wrote: >  > > > > Hello Osloers, >  > > > > >  > > > > Our devstack plugins (kafka and amqp1) didn't show a great > amount of >  > > > > activity since ussuri, does it still make sense to > maintain them? >  > > > > >  > > > > The latest available SHAs for the both projects comes from > Victoria >  > > > > (merged in this period). >  > > > > >  > > > > Can we retire them or simply retire them from the > coordinated releases? >  > > > >  > > > These have never been released and are no longer branched. > What is their >  > > > involvement in the coordinated release at this point? >  > > >  > > Yes these deliverables are tagless so no released at all, > however, they are >  > > coordinated so they are branched during each series. >  > > >  > > Though, as said Ben, those deliverables haven't been branched > during the >  > > previous series (victoria), we suppose that they have been > simply forgotten >  > > inadvertently, >  > > >  > > I proposed a patch to fix that point [1]. >  > >  > Other devstack plugins are branchless (devstack-plugin-ceph, > devstack-plugin- >  > nfs), couldn't those be branchless too? 
>  > >  > >  > -- >  > Luigi >  > >  > >  > >  > >  > -- >  > Hervé BeraudSenior Software Engineer at Red Hatirc: > hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > >  > -----BEGIN PGP SIGNATURE----- >  > >  > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >  > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >  > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >  > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >  > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >  > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >  > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >  > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >  > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >  > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >  > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >  > v6rDpkeNksZ9fFSyoY2o >  > =ECSj >  > -----END PGP SIGNATURE----- >  > >  > >  > >  > -- >  > Hervé BeraudSenior Software Engineer at Red Hatirc: > hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > >  > -----BEGIN PGP SIGNATURE----- >  > >  > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >  > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >  > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >  > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >  > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >  > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >  > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >  > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >  > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >  > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >  > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >  > v6rDpkeNksZ9fFSyoY2o >  > =ECSj >  > -----END PGP SIGNATURE----- >  > >  > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From dms at danplanet.com Thu Apr 1 17:46:56 2021 From: dms at danplanet.com (Dan Smith) Date: Thu, 01 Apr 2021 10:46:56 -0700 Subject: [all] Gate resources and performance In-Reply-To: (Dan Smith's message of "Wed, 31 Mar 2021 17:00:12 -0700") References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com> Message-ID: > I'll try to circle back and generate a new set of numbers with > my script, and also see if I can get updated numbers from Clark on the > overall percentages. 
Okay, I re-ran the numbers this morning and got updated 30-day stats from Clark. Here's what I've got (delta from the last report in parens):

 Project      % of total   Node Hours   Nodes
 ----------------------------------------------
 1. Neutron       23%       34h (-4)    30 (-2)
 2. TripleO       18%       17h (-14)   14 (-6)
 3. Nova           7%       22h (+1)    25 (-0)
 4. Kolla          6%       10h (-2)    18 (-0)
 5. OSA            6%       19h (-3)    16 (-1)

Definitely a lot of improvement from tripleo, so thanks for that! Neutron rose to the top and is still very hefty. I think Nova's 1-hr rise is probably just noise given the node count didn't change. I think we're still waiting on zuulv3 conversion of the grenade multinode job so we can drop the base grenade job, which will make things go down. I've also got a proposal to make devstack parallel mode be the default, but we're waiting until after devstack cuts wallaby to proceed with that. Hopefully that will result in some across-the-board reduction. Anyway, definitely moving in the right direction on all fronts, so thanks a lot to everyone who has made efforts in this area. I think once things really kick back up around/after PTG we should measure again and see if the "quality of life" is reasonable, and if not, revisit the numbers in terms of who to lean on to reduce further. --Dan From DHilsbos at performair.com Thu Apr 1 17:47:19 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Thu, 1 Apr 2021 17:47:19 +0000 Subject: launch VM on volume vs. image In-Reply-To: References: Message-ID: <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local> Tony / Eddie; I think this is partially dependent on the version of OpenStack running. In our Victoria cloud, a volume created from an image is also done as a snapshot by Ceph, and is completed in seconds. Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Eddie Yen [mailto:missile0407 at gmail.com] Sent: Wednesday, March 31, 2021 6:00 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu 於 2021年4月1日 週四 上午8:09寫道: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate the high level workflow for those two cases? Thanks! Tony From vhariria at redhat.com Thu Apr 1 18:05:42 2021 From: vhariria at redhat.com (Vida Haririan) Date: Thu, 1 Apr 2021 14:05:42 -0400 Subject: [Interop][Refstack] this Friday meeting In-Reply-To: <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> Message-ID: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks, Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: > I forgot this is a holiday. Same on my side.
> > Thanks, > Jimmy > > > > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann > wrote: > > > >  > > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady < > Arkady.Kanevsky at dell.com> wrote ---- > >> > >> Team, > >> This Friday is Good Friday and some people have a day off. > >> Should we cancel this week meeting? > >> Please, respond so we can see if we will have quorum. > > > > Thanks Arkady, > > > > I will be off from work and would not be able to join. > > > > -gmann > > > >> Thanks, > >> Arkady > >> > >> Arkady Kanevsky, Ph.D. > >> SP Chief Technologist & DE > >> Dell Technologies office of CTO > >> Dell Inc. One Dell Way, MS PS2-91 > >> Round Rock, TX 78682, USA > >> Phone: 512 7204955 > >> > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Thu Apr 1 18:11:56 2021 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Thu, 1 Apr 2021 18:11:56 +0000 (UTC) Subject: [Interop][Refstack] this Friday meeting In-Reply-To: References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> Message-ID: <1443143388.3156237.1617300716714@mail.yahoo.com> Looks like we can skip this Friday call and sure Arkady - lets cancel it. If you have something urgent we can talk offline - Thanks Prakash On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan wrote: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks,Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay.faulkner at verizonmedia.com Thu Apr 1 18:22:50 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Thu, 1 Apr 2021 11:22:50 -0700 Subject: [E] [ironic] Review Jams In-Reply-To: References: Message-ID: > The tl;dr is we will use meetpad[1] and meet on Mondays at 2 PM UTC and Tuesdays at 6 PM UTC. Because downstream commitments are usually in US-local time, and DST exists, we've decided to move back the Tuesday review jam to 5 PM UTC to keep the time the same after DST adjustment. If you have any questions, please ask here or in #openstack-ironic. Thanks, Jay Faulkner On Tue, Feb 9, 2021 at 12:54 PM Zachary Buhman < zachary.buhman at verizonmedia.com> wrote: > I thought the 09 Feb 2021 review jam was highly valuable. > > Without the discussions we had, I think the "Secure RBAC" patch set would > be unapproachable for me. For example, having knowledge of the (new) > oslo-policy features that the patches make use of seems to be a requirement > for deeply understanding the changes. As a direct result of the review jam > [0], I feel that I have enough understanding and comfortability to make > valuable review feedback on these patches. 
> > [0] and also having read/reviewed the secure-rbac spec previously, to be > fair > > On Fri, Feb 5, 2021 at 7:10 AM Julia Kreger > wrote: > >> In the Ironic team's recent mid-cycle call, we discussed the need to >> return to occasionally having review jams in order to help streamline >> the review process. In other words, get eyes on a change in parallel >> and be able to discuss the change. The goal is to help get people on >> the same page in terms of what and why. Be on hand to answer questions >> or back-fill context. This is to hopefully avoid the more iterative >> back and forth nature of code review, which can draw out a long chain >> of patches. As always, the goal is not perfection, but forward >> movement especially for complex changes. >> >> We've established two time windows that will hopefully not to be too >> hard for some contributors to make it to. It doesn't need to be >> everyone, but it would help for at least some people whom actively >> review or want to actively participate in reviewing, or whom are even >> interested in a feature to join us for our meeting. >> >> I've added an entry on to our wiki page to cover this, with the >> current agenda and anticipated review jam topic schedule. The tl;dr is >> we will use meetpad[1] and meet on Mondays at 2 PM UTC and Tuesdays at >> 6 PM UTC. The hope is to to enable some overlap of reviewers. If >> people are interested in other times, please bring this up in the >> weekly meeting or on the mailing list. >> >> I'm not sending out calendar invites for this. Yet. :) >> >> See everyone next week! >> >> -Julia >> >> [0]: >> https://urldefense.proofpoint.com/v2/url?u=https-3A__wiki.openstack.org_wiki_Meetings_Ironic-23Review-5FJams&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=OsbscIvhVDRWHpDZtO7nXdqGCfPHirpVEemMwL8l5tw&m=S4p8gD_wQlpR_rvzdqGkdq574-DkUsgBRet9-k3RpVg&s=gVApbMsmNPVlfYreqkQe4yKFxC66U6D8nFc_TwjW-FE&e= >> [1]: >> https://urldefense.proofpoint.com/v2/url?u=https-3A__meetpad.opendev.org_ironic&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=OsbscIvhVDRWHpDZtO7nXdqGCfPHirpVEemMwL8l5tw&m=S4p8gD_wQlpR_rvzdqGkdq574-DkUsgBRet9-k3RpVg&s=iHBy7h99FQZ6Xb_fN2Hv3HZXIANl6BzR867jblUJvsk&e= >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Apr 1 18:42:43 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 1 Apr 2021 11:42:43 -0700 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: Adding the list back and trimming the message. Replies in-band. Well, that is good that the server is not signed, nor other esp images are not working. On Thu, Apr 1, 2021 at 11:20 AM Vuk Gojnic wrote: > > Hey Julia, > > Thanks for asking. I have tried with several ESP image options with same effect (one taken from Ubuntu Live ISO that boots on that node, another downloaded and third made with grub tools). None of them was signed. Interesting. At least it is consistent! Have you tried to pull down the iso image and take it apart to verify it is UEFI bootable against a VM or another physical machine? I'm wondering if you need both uefi parameters set. You definitely don't have properties['capabilities']['boot_mode'] set which is used or... maybe a better word to use is drawn in for asserting defaults, but you do have the deploy_boot_mode setting set. I guess a quick manual sanity check of the actual resulting iso image is going to be critical. 
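Something along these lines is usually enough for that sanity check - treat the paths and names below as placeholders, since they vary per distro and deployment:

# Boot the generated ISO against UEFI firmware in a throwaway VM. If OVMF
# drops to its shell instead of loading the bootloader, the ESP embedded
# in the ISO is the likely culprit. The OVMF firmware path varies by distro.
qemu-system-x86_64 -m 2048 -bios /usr/share/OVMF/OVMF.fd -cdrom boot.iso

# Also worth confirming the node advertises UEFI via the boot_mode
# capability, since that is what gets drawn in for asserting defaults:
openstack baremetal node set <node-uuid> --property capabilities='boot_mode:uefi'

If the ISO boots fine under OVMF, the problem is more likely on the hardware/firmware side than in the image generation.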
Debug logging may also be useful, and I'm only thinking that because there is no logging from the generation of the image. > > The server is not in UEFI secure boot mode. Interesting, sure sounds like it is based on your original message. :( > Btw. I will be on holidays for next week so I might not be able to follow up on this discussion before Apr 12th. No worries, just ping us on irc.freenode.net in #openstack-ironic if a reply on the mailing list doesn't grab our attention. > > Bests, > Vuk > > On Thu, Apr 1, 2021 at 4:20 PM Julia Kreger wrote: >> Greetings, >> Two questions: >> 1) Are the ESP image contents signed, or are they built using one of >> the grub commands? >> 2) Is the machine set to enforce secure boot at this time? >> [trim] From rafal at pregusia.pl Thu Apr 1 19:05:32 2021 From: rafal at pregusia.pl (pregusia) Date: Thu, 1 Apr 2021 21:05:32 +0200 Subject: [keystone]improvments in mapping models/support for JWT tokens Message-ID: Hello members! Please direct your attention to two keystone modifications: (1) an extension to the mapping engine to support multiple projects and assigning projects by id, and (2) an extension to the authorization mechanisms to add JWT token support.
Ad (1): Currently the mapping engine can map multiple projects for a user (as stated in https://docs.openstack.org/keystone/pike/advanced-topics/federation/mapping_combinations.html), but this support lacks (a) dynamic mapping of projects from the assertion (for example, project ids carried in the assertion) and (b) the ability to map a list from the assertion (if the assertion contains "some_field": [ id1, id2, ..., idN ], then using this field by substitution - e.g. "{1}" - won't expand it into the full projects structure the mapping needs). In the patch I extend the mapping schema with a new field "projects_spec" of type string. This field contains a string formatted like project1ID:role1:role2:..:roleN,project2ID:role1:...:roleN,... and it is mapped into the proper structure { "projects": [ { "id": .., "roles": [ ... ] }, ... ] }. This allows the identity provider to supply permission-like information about which projects the user should have access to. The implementation is not ideal, and some work should still be done to make it more user-friendly (for example, configuration options for auto-creating projects by ID, and options for allowing the user to log in when a project with that id does not exist, etc). This is only a proof of concept and a question of whether this direction is appropriate for keystone development.
Ad (2): This patch adds a new authorization protocol - jwt_token. It allows accessing the endpoint '/v3/OS-FEDERATION/identity_providers/{IDP_NAME}/protocols/jwt_token/auth' (and obtaining a keystone auth token) using a JWT token in the Authorization header field. The header value is checked for proper formatting and the JWT token is extracted from it. The token is then validated against the public key (only the RS256/RS512 algorithms are supported), the issuer, the expiration date and other claims. If this succeeds, the payload from the token is supplied to the mapping engine, where a new user (and, per (1), its project permissions, for example) is created and given access to the proper projects/roles. Happy reviewing! pregusia -------------- next part -------------- A non-text attachment was scrubbed...
Name: patch-victoria.patch Type: text/x-patch Size: 13182 bytes Desc: not available URL: From fungi at yuggoth.org Thu Apr 1 19:22:04 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Apr 2021 19:22:04 +0000 Subject: [keystone]improvments in mapping models/support for JWT tokens In-Reply-To: References: Message-ID: <20210401192203.jn2heaicdlwojc7i@yuggoth.org> On 2021-04-01 21:05:32 +0200 (+0200), pregusia wrote: > Please direct your attention to some keystone modyfications - to be more > precisly two of them: >  (1) extension to mapping engine in order to support multiple projects and > assigning project by id >  (2) extension to authorization mechanisms - add JWT token support [...] This is pretty exciting stuff. But please be aware that for an OpenStack project to merge patches they'll need to be proposed into the code review system (Gerrit) by someone, preferably by the author of the patches, which is the easiest place to discuss them as well. Also we need some way to confirm that the author of the patches has agreed to the Individual Contributor License Agreement (essentially asserting that the patches they propose are their own work or that they have permission from the author or are proposing patches consisting of existing code distributed under a license compatible with the Apache License version 2.0), and the usual way to agree to the ICLA is when creating your account in Gerrit. Please see the OpenStack Contributor Guide for a general introduction to our code proposal and review workflow: https://docs.openstack.org/contributors/ And feel free to ask questions on this mailing list if you have any. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From skaplons at redhat.com Thu Apr 1 19:35:55 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 01 Apr 2021 21:35:55 +0200 Subject: [neutron][nova] Port binding fails when creating an instance In-Reply-To: References: Message-ID: <3930281.aCZO8KT43X@p1> Hi, Dnia czwartek, 1 kwietnia 2021 14:44:21 CEST Maxime d'Estienne pisze: > Hello, > > I spent a lot of time troubleshooting my issue, which I described here : > https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding > > To summarize, when I want to create an instance, binding fails on compute > node, the dhcp agent seems to give an ip to the VM but I have an error. What do You mean exactly? Failed binding of the port in Neutron? In such case nova will not boot vm so it can't get IP from DHCP. > > I don't know where to dig, besides what I have done. Please enable debug logs in neutron-server and look in its logs for the reason why it failed to bind port on specific host. Usually reason is dead L2 agent on host or mismatch in the agent's bridge mappings configuration in the agent. > > Thanks a lot for your help ! -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From janders at redhat.com Thu Apr 1 20:53:33 2021 From: janders at redhat.com (Jacob Anders) Date: Fri, 2 Apr 2021 06:53:33 +1000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic Message-ID: Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: https://storyboard.openstack.org/#!/story/2008791 What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpenick at gmail.com Thu Apr 1 21:30:28 2021 From: jpenick at gmail.com (James Penick) Date: Thu, 1 Apr 2021 14:30:28 -0700 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: I completely support this. However are you considering other options, such as pour-over coffee machines? Not every deployer is able to consume espresso! On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: > Hi There, > > I was discussing this RFE with Julia and we decided it would be great to > get some feedback on it from the wider community, ideally by the end of > today. Apologies for the short notice. Here's the story: > > https://storyboard.openstack.org/#!/story/2008791 > > What are your thoughts on this? Please comment in the Story or just reply > to thread. > > Thank you, > Jacob > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Thu Apr 1 21:40:32 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Thu, 1 Apr 2021 23:40:32 +0200 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: Thanks for raising this Jacob! This will probably require a spec (since we will have multiple scenarios). I think we need a generic coffee driver, with support for different management-coffee-interfaces (expresso, latte, etc) and deploy-coffee-interfaces (mug, bottle). Em qui., 1 de abr. de 2021 às 23:32, James Penick escreveu: > I completely support this. However are you considering other options, such > as pour-over coffee machines? Not every deployer is able to consume > espresso! > > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: > >> Hi There, >> >> I was discussing this RFE with Julia and we decided it would be great to >> get some feedback on it from the wider community, ideally by the end of >> today. Apologies for the short notice. Here's the story: >> >> https://storyboard.openstack.org/#!/story/2008791 >> >> What are your thoughts on this? Please comment in the Story or just reply >> to thread. >> >> Thank you, >> Jacob >> > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Apr 1 21:45:09 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 1 Apr 2021 14:45:09 -0700 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: We shouldn't forget the classic drip makers that Operators tend to have in their facilities. Granted, I think they will need to send them all through cleaning once the pandemic is over. 
On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote: > > I completely support this. However are you considering other options, such as pour-over coffee machines? Not every deployer is able to consume espresso! > > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: >> >> Hi There, >> >> I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: >> >> https://storyboard.openstack.org/#!/story/2008791 >> >> What are your thoughts on this? Please comment in the Story or just reply to thread. >> >> Thank you, >> Jacob From jay.faulkner at verizonmedia.com Thu Apr 1 21:58:54 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Thu, 1 Apr 2021 14:58:54 -0700 Subject: [E] Re: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: OpenStack is about producing large amounts of homogenous instances. This sort of espresso-machine pandering will just lead us down a path of supporting all kinds of lattes, mochas, and macchiatos -- not even to get started on the flavor syrups and steamed milk. We have to focus on managing large pots of coffee, so that people can drink and know the next cup they get will be the exact same. We're building a homogenous coffee environment, we can't be supporting every milk, style, and flavor syrup in the world. - Jay Faulkner P.S. Don't even try to sneak any of those dark roast Starbucks beans past the refcoffee tests. That kind of bitterness exceeds our API spec. On Thu, Apr 1, 2021 at 2:50 PM Julia Kreger wrote: > We shouldn't forget the classic drip makers that Operators tend to > have in their facilities. > > Granted, I think they will need to send them all through cleaning once > the pandemic is over. > > On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote: > > > > I completely support this. However are you considering other options, > such as pour-over coffee machines? Not every deployer is able to consume > espresso! > > > > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: > >> > >> Hi There, > >> > >> I was discussing this RFE with Julia and we decided it would be great > to get some feedback on it from the wider community, ideally by the end of > today. Apologies for the short notice. Here's the story: > >> > >> > https://urldefense.proofpoint.com/v2/url?u=https-3A__storyboard.openstack.org_-23-21_story_2008791&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=PCP6by2W8jflO0mp6YeO5RmQQmP7uaT6aHvdRZ3C2V0&s=AbjZNK66m1pUKose0j2gZNtDgFUf5LYyS9Qz8JAfqr4&e= > >> > >> What are your thoughts on this? Please comment in the Story or just > reply to thread. > >> > >> Thank you, > >> Jacob > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpenick at gmail.com Thu Apr 1 22:28:20 2021 From: jpenick at gmail.com (James Penick) Date: Thu, 1 Apr 2021 15:28:20 -0700 Subject: [E] Re: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: Perhaps we should break this down into a subset of projects, one for grinding beans, one for extracting coffee, and maybe a microservice for handling milk or dairy based alternatives? I think we can also all agree that cold brew drinks are *completely* out of scope and should be implemented in another service. 
On Thu, Apr 1, 2021 at 2:59 PM Jay Faulkner wrote: > OpenStack is about producing large amounts of homogenous instances. This > sort of espresso-machine pandering will just lead us down a path of > supporting all kinds of lattes, mochas, and macchiatos -- not even to get > started on the flavor syrups and steamed milk. > > We have to focus on managing large pots of coffee, so that people can > drink and know the next cup they get will be the exact same. We're building > a homogenous coffee environment, we can't be supporting every milk, style, > and flavor syrup in the world. > > - > Jay Faulkner > > P.S. Don't even try to sneak any of those dark roast Starbucks beans past > the refcoffee tests. That kind of bitterness exceeds our API spec. > > On Thu, Apr 1, 2021 at 2:50 PM Julia Kreger > wrote: > >> We shouldn't forget the classic drip makers that Operators tend to >> have in their facilities. >> >> Granted, I think they will need to send them all through cleaning once >> the pandemic is over. >> >> On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote: >> > >> > I completely support this. However are you considering other options, >> such as pour-over coffee machines? Not every deployer is able to consume >> espresso! >> > >> > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: >> >> >> >> Hi There, >> >> >> >> I was discussing this RFE with Julia and we decided it would be great >> to get some feedback on it from the wider community, ideally by the end of >> today. Apologies for the short notice. Here's the story: >> >> >> >> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__storyboard.openstack.org_-23-21_story_2008791&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=PCP6by2W8jflO0mp6YeO5RmQQmP7uaT6aHvdRZ3C2V0&s=AbjZNK66m1pUKose0j2gZNtDgFUf5LYyS9Qz8JAfqr4&e= >> >> >> >> What are your thoughts on this? Please comment in the Story or just >> reply to thread. >> >> >> >> Thank you, >> >> Jacob >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Thu Apr 1 22:33:43 2021 From: amy at demarco.com (Amy Marrich) Date: Thu, 1 Apr 2021 17:33:43 -0500 Subject: [E] Re: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: Can we expand the spec for Cappuccinos and Lattes? Oh and maybe Pan Au Chocolate? Amy (spotz) On Thu, Apr 1, 2021 at 5:31 PM James Penick wrote: > Perhaps we should break this down into a subset of projects, one for > grinding beans, one for extracting coffee, and maybe a microservice for > handling milk or dairy based alternatives? > > I think we can also all agree that cold brew drinks are *completely* out > of scope and should be implemented in another service. > > On Thu, Apr 1, 2021 at 2:59 PM Jay Faulkner > wrote: > >> OpenStack is about producing large amounts of homogenous instances. This >> sort of espresso-machine pandering will just lead us down a path of >> supporting all kinds of lattes, mochas, and macchiatos -- not even to get >> started on the flavor syrups and steamed milk. >> >> We have to focus on managing large pots of coffee, so that people can >> drink and know the next cup they get will be the exact same. We're building >> a homogenous coffee environment, we can't be supporting every milk, style, >> and flavor syrup in the world. >> >> - >> Jay Faulkner >> >> P.S. Don't even try to sneak any of those dark roast Starbucks beans past >> the refcoffee tests. That kind of bitterness exceeds our API spec. 
>> >> On Thu, Apr 1, 2021 at 2:50 PM Julia Kreger >> wrote: >> >>> We shouldn't forget the classic drip makers that Operators tend to >>> have in their facilities. >>> >>> Granted, I think they will need to send them all through cleaning once >>> the pandemic is over. >>> >>> On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote: >>> > >>> > I completely support this. However are you considering other options, >>> such as pour-over coffee machines? Not every deployer is able to consume >>> espresso! >>> > >>> > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders >>> wrote: >>> >> >>> >> Hi There, >>> >> >>> >> I was discussing this RFE with Julia and we decided it would be great >>> to get some feedback on it from the wider community, ideally by the end of >>> today. Apologies for the short notice. Here's the story: >>> >> >>> >> >>> https://urldefense.proofpoint.com/v2/url?u=https-3A__storyboard.openstack.org_-23-21_story_2008791&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=PCP6by2W8jflO0mp6YeO5RmQQmP7uaT6aHvdRZ3C2V0&s=AbjZNK66m1pUKose0j2gZNtDgFUf5LYyS9Qz8JAfqr4&e= >>> >> >>> >> What are your thoughts on this? Please comment in the Story or just >>> reply to thread. >>> >> >>> >> Thank you, >>> >> Jacob >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Apr 1 22:43:31 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Apr 2021 22:43:31 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: <20210401224331.losxlxqer6rrgmmj@yuggoth.org> On 2021-04-01 23:40:32 +0200 (+0200), Iury Gregory wrote: > Thanks for raising this Jacob! > This will probably require a spec (since we will have multiple scenarios). > I think we need a generic coffee driver, with support for different > management-coffee-interfaces (expresso, latte, etc) and > deploy-coffee-interfaces (mug, bottle). [...] This is already heading toward an inevitable interoperability nightmare; we should already be planning for the gimme_a_coffee_already porcelain API. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From janders at redhat.com Thu Apr 1 22:53:39 2021 From: janders at redhat.com (Jacob Anders) Date: Fri, 2 Apr 2021 08:53:39 +1000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: <20210401224331.losxlxqer6rrgmmj@yuggoth.org> References: <20210401224331.losxlxqer6rrgmmj@yuggoth.org> Message-ID: Thank you for your invaluable insights! >From my side, given I've been doing a fair bit of clean_step related work lately, I'm happy to take on the de-scaling challenge which might be a little trickier than usual as it needs to run periodically even if there is a long-living instance provisioned, otherwise the hardware will inevitably end up in rescue mode. I'm also happy to look into HPC (High Performance Coffee) use cases. CCing Stig as he might have some insights there as well. On Fri, Apr 2, 2021 at 8:49 AM Jeremy Stanley wrote: > On 2021-04-01 23:40:32 +0200 (+0200), Iury Gregory wrote: > > Thanks for raising this Jacob! > > This will probably require a spec (since we will have multiple > scenarios). > > I think we need a generic coffee driver, with support for different > > management-coffee-interfaces (expresso, latte, etc) and > > deploy-coffee-interfaces (mug, bottle). > [...] 
> > This is already heading toward an inevitable interoperability > nightmare; we should already be planning for the > gimme_a_coffee_already porcelain API. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Fri Apr 2 00:19:20 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 2 Apr 2021 00:19:20 +0000 Subject: launch VM on volume vs. image In-Reply-To: <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local> References: , <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local> Message-ID: Hi Dominic, What's your image format? Thanks! Tony ________________________________________ From: DHilsbos at performair.com Sent: April 1, 2021 10:47 AM To: tonyliu0592 at hotmail.com Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com Subject: RE: launch VM on volume vs. image Tony / Eddie; I think this is partially dependent on the version of OpenStack running. In our Victoria cloud, a volume created from an image is also done as a snapshot by Ceph, and is completed in seconds. Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Eddie Yen [mailto:missile0407 at gmail.com] Sent: Wednesday, March 31, 2021 6:00 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu 於 2021年4月1日 週四 上午8:09寫道: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate the high level workflow for those two cases? Thanks! Tony From tonyliu0592 at hotmail.com Fri Apr 2 00:24:04 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 2 Apr 2021 00:24:04 +0000 Subject: launch VM on volume vs. image In-Reply-To: References: , Message-ID: I have a 300GB QCOW image (800GB raw space). If launch VM on volume, Cinder will need to convert it first, and that requires at least 300GB free disk space on controller. If launch VM on image, it takes forever, I didn't look into where it's stuck. Is there any easier way to launch VM from such image? Ceph is the storage backend. Thanks! Tony ________________________________________ From: Eddie Yen Sent: March 31, 2021 10:47 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image BTW, If the source image is based on compression or thin provision type (like VDI, QCOW2, VMDK, etc.) It will take a long time to create no matter boot on image or volume. Nova will convert the image based on these type first during creation. Because Ceph RBD doesn't support. Make sure all the images you upload is based on RBD format (or RAW format in other word), unless the virtual size of image is small. . Tony Liu > 於 2021年4月1日 週四 上午10:18寫道: Thank you Eddie! It makes sense. Creating a snapshot is much faster than copying image to a volume. 
Tony ________________________________________ From: Eddie Yen > Sent: March 31, 2021 05:59 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu >> 於 2021年4月1日 週四 上午8:09寫道: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate the high level workflow for those two cases? Thanks! Tony From gouthampravi at gmail.com Fri Apr 2 00:32:12 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 1 Apr 2021 17:32:12 -0700 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: On Thu, Apr 1, 2021 at 1:59 PM Jacob Anders wrote: > > Hi There, > > I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: > > https://storyboard.openstack.org/#!/story/2008791 > > What are your thoughts on this? Please comment in the Story or just reply to thread. Very neat. I'm hoping we can start thinking about storage sooner than later. We know everyone wants their raw materials in an infinite conveyor - and they're thinking tape machine, but let's start crawling before we're walking here. Ground coffee can't also be ephemeral for archival and regulatory purposes. Persistent storage matters. > > Thank you, > Jacob From cpiercey at icloud.com Fri Apr 2 01:08:35 2021 From: cpiercey at icloud.com (CHARLES PIERCEY) Date: Thu, 1 Apr 2021 18:08:35 -0700 Subject: [E] Re: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: <31E452C3-82CE-4F67-B4A3-EE5F16A1BBB0@icloud.com> Everyone should just accept double expresso macchiatos as the standard for superior caffeination. On Apr 1, 2021, at 3:05 PM, Jay Faulkner wrote:  OpenStack is about producing large amounts of homogenous instances. This sort of espresso-machine pandering will just lead us down a path of supporting all kinds of lattes, mochas, and macchiatos -- not even to get started on the flavor syrups and steamed milk. We have to focus on managing large pots of coffee, so that people can drink and know the next cup they get will be the exact same. We're building a homogenous coffee environment, we can't be supporting every milk, style, and flavor syrup in the world. - Jay Faulkner P.S. Don't even try to sneak any of those dark roast Starbucks beans past the refcoffee tests. That kind of bitterness exceeds our API spec. On Thu, Apr 1, 2021 at 2:50 PM Julia Kreger wrote: > We shouldn't forget the classic drip makers that Operators tend to > have in their facilities. > > Granted, I think they will need to send them all through cleaning once > the pandemic is over. > > On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote: > > > > I completely support this. However are you considering other options, such as pour-over coffee machines? Not every deployer is able to consume espresso! 
> > > > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: > >> > >> Hi There, > >> > >> I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: > >> > >> https://urldefense.proofpoint.com/v2/url?u=https-3A__storyboard.openstack.org_-23-21_story_2008791&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=PCP6by2W8jflO0mp6YeO5RmQQmP7uaT6aHvdRZ3C2V0&s=AbjZNK66m1pUKose0j2gZNtDgFUf5LYyS9Qz8JAfqr4&e= > >> > >> What are your thoughts on this? Please comment in the Story or just reply to thread. > >> > >> Thank you, > >> Jacob > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Fri Apr 2 02:10:14 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 2 Apr 2021 02:10:14 +0000 Subject: launch VM on volume vs. image In-Reply-To: References: , , Message-ID: I uploaded 800GB RAW image. Then launching VM on either image or volume is in 10 seconds. Tony ________________________________________ From: Tony Liu Sent: April 1, 2021 05:24 PM To: Eddie Yen Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image I have a 300GB QCOW image (800GB raw space). If launch VM on volume, Cinder will need to convert it first, and that requires at least 300GB free disk space on controller. If launch VM on image, it takes forever, I didn't look into where it's stuck. Is there any easier way to launch VM from such image? Ceph is the storage backend. Thanks! Tony ________________________________________ From: Eddie Yen Sent: March 31, 2021 10:47 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image BTW, If the source image is based on compression or thin provision type (like VDI, QCOW2, VMDK, etc.) It will take a long time to create no matter boot on image or volume. Nova will convert the image based on these type first during creation. Because Ceph RBD doesn't support. Make sure all the images you upload is based on RBD format (or RAW format in other word), unless the virtual size of image is small. . Tony Liu > 於 2021年4月1日 週四 上午10:18寫道: Thank you Eddie! It makes sense. Creating a snapshot is much faster than copying image to a volume. Tony ________________________________________ From: Eddie Yen > Sent: March 31, 2021 05:59 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu >> 於 2021年4月1日 週四 上午8:09寫道: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate the high level workflow for those two cases? Thanks! 
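For anyone else hitting this: the usual workaround is to convert the image to raw once, up front, and upload that to Glance, so the RBD-backed services can clone it instead of converting it on every boot. A rough sketch - the file and image names below are placeholders:

# one-time conversion on a machine with enough scratch space
qemu-img convert -f qcow2 -O raw big-image.qcow2 big-image.raw
openstack image create --disk-format raw --container-format bare \
  --file big-image.raw big-image-raw

# in glance-api.conf, COW cloning by the Cinder/Nova RBD drivers also
# needs (if not already set):
# [DEFAULT]
# show_image_direct_url = True

The trade-off is that the raw image is the full virtual size during upload, but after that both boot-from-image and boot-from-volume become near-instant Ceph clones.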
Tony From hberaud at redhat.com Fri Apr 2 05:31:17 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 2 Apr 2021 07:31:17 +0200 Subject: [release] Release countdown for week R-1 Apr 05 - Apr 09 Message-ID: Development Focus ----------------- We are on the final mile of the Wallaby development cycle! Remember that the Wallaby final release will include the latest release candidate (for cycle-with-rc deliverables) or the latest intermediary release (for cycle-with-intermediary deliverables) available. April 8 is the deadline for final Wallaby release candidates as well as any last cycle-with-intermediary deliverables. We will then enter a quiet period until we tag the final release on 14 April, 2021. Teams should be prioritizing fixing release-critical bugs, before that deadline. Otherwise it's time to start planning the Xena development cycle, including discussing Forum and PTG sessions content, in preparation of PTG on the week of April 19. Actions ------- Watch for any translation patches coming through on the stable/wallaby branch and merge them quickly. If you discover a release-critical issue, please make sure to fix it on the master branch first, then backport the bugfix to the stable/wallaby branch before triggering a new release. Please drop by #openstack-release with any questions or concerns about the upcoming release! Upcoming Deadlines & Dates -------------------------- Final Wallaby release: 14 April, 2021 Xena virtual PTG: 19 - 23 April, 2021 Thanks for your attention -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Fri Apr 2 05:49:01 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 2 Apr 2021 14:49:01 +0900 Subject: [infra][puppet] No verified score posted by zuul Message-ID: Hi I have asked for some help in #openstack-infra but didn't get any solution so far Many people might be now on holidays (enjoy holidays!), so let me send this email so that people involved can find this mail after getting back. A few days ago Wallaby release of openstack puppet modules were created, and the release bot submitted release patches. However for some patches zuul doesn't return any CI result(it doesn't put verified score)[1]. I posted +2+A on [2] but it is not merged, because it is not verified by zuul. I tried "recheck" but it didn't solve the problem. [1] https://review.opendev.org/c/openstack/puppet-aodh/+/784213/1 [2] https://review.opendev.org/c/openstack/puppet-cloudkitty/+/784230 Currently we don't have any job triggered for the change with .gitreview and I guess that is why we don't get verified. 
Actually I see that the same patch for puppet-oslo got verified +1, because tripleo job was unexpectedly triggered for the change in .gitreview [2] https://review.opendev.org/c/openstack/puppet-oslo/+/784302 The easiest solution would be to manually squash these two patches into one. However I remember that we did get verified when we created the last Victoria release, and I suspect some change in infra side which resulted in this situation. So it would be nice if I can ask some insights from infra team about this situation. Thank you, Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Fri Apr 2 07:49:02 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 2 Apr 2021 16:49:02 +0900 Subject: [infra][puppet] No verified score posted by zuul In-Reply-To: References: Message-ID: Please ignore my previous email. It turned out the following change disabled all jobs against .gitreview and that is the cause why zuul no longer posts result... https://github.com/openstack/puppet-openstack-integration/commit/1914b7ed1e499d13af4952992d0bf1728ca4db8e I'll fix the gate asap. On Fri, Apr 2, 2021 at 2:49 PM Takashi Kajinami wrote: > Hi > > > I have asked for some help in #openstack-infra but didn't get any solution > so far > Many people might be now on holidays (enjoy holidays!), so let me send > this email > so that people involved can find this mail after getting back. > > A few days ago Wallaby release of openstack puppet modules were created, > and the release bot submitted release patches. > > However for some patches zuul doesn't return any CI result(it doesn't put > verified score)[1]. I posted +2+A on [2] but it is not merged, because > it is not verified by zuul. I tried "recheck" but it didn't solve the > problem. > [1] https://review.opendev.org/c/openstack/puppet-aodh/+/784213/1 > [2] https://review.opendev.org/c/openstack/puppet-cloudkitty/+/784230 > > Currently we don't have any job triggered for the change with .gitreview > and > I guess that is why we don't get verified. > Actually I see that the same patch for puppet-oslo got verified +1, because > tripleo job was unexpectedly triggered for the change in .gitreview > [2] https://review.opendev.org/c/openstack/puppet-oslo/+/784302 > > The easiest solution would be to manually squash these two patches into > one. > However I remember that we did get verified when we created the last > Victoria release, > and I suspect some change in infra side which resulted in this situation. > So it would be nice if I can ask some insights from infra team about this > situation. > > Thank you, > Takashi Kajinami > -- ---------- Takashi Kajinami Principal Software Maintenance Engineer Customer Experience and Engagement Red Hat email: tkajinam at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Apr 2 09:51:07 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 2 Apr 2021 11:51:07 +0200 Subject: [docs][release] Creating Xena's landing pages Message-ID: Hello Docs team, This is a friendly reminder from the release team, I think that it should be safe for you to apply your process to create the new release series landing pages for docs.openstack.org. All stable branches are now created. If you want you can do the work before the final release date to avoid having to synchronize with the release team on that day. Let us know if you have any questions. 
Cheers -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Apr 2 09:54:19 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 2 Apr 2021 11:54:19 +0200 Subject: PTO Monday April 5 Message-ID: Hello Monday is a public holiday in France. I'll be back Tuesday. Have a nice weekend! -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Apr 2 09:55:30 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 2 Apr 2021 11:55:30 +0200 Subject: [oslo] Canceled meeting - PTO Monday April 5 In-Reply-To: References: Message-ID: Le ven. 2 avr. 2021 à 11:54, Herve Beraud a écrit : > Hello > > Monday is a public holiday in France. I'll be back Tuesday. > > Have a nice weekend! 
> > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Fri Apr 2 12:34:07 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Fri, 2 Apr 2021 12:34:07 +0000 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Message-ID: <0c28086a5f8a45ca9aab8568d1c0de7e@ncwmexgp009.CORP.CHARTERCOM.com> I opened a bug for this issue: https://bugs.launchpad.net/kolla-ansible/+bug/1922269 -----Original Message----- From: Braden, Albert Sent: Thursday, April 1, 2021 11:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Sorry that was a typo. Stopping RMQ during the removal of the *second* controller is what causes the problem. Is there a way to tell Centos 8 Train to use RMQ 3.7.24 instead of 3.7.28? -----Original Message----- From: Braden, Albert Sent: Thursday, April 1, 2021 9:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. 
This crash dump appears on the controllers that stopped in sympathy: https://paste.ubuntu.com/p/ZDgFgKtQTB/ This appears in the RMQ log: https://paste.ubuntu.com/p/5D2Qjv3H8c/ -----Original Message----- From: Braden, Albert Sent: Wednesday, March 31, 2021 8:31 AM To: openstack-discuss at lists.openstack.org Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Centos7: {rabbit,"RabbitMQ","3.7.24"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, Centos8: {rabbit,"RabbitMQ","3.7.28"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: https://paste.ubuntu.com/p/h9HWdfwmrK/ and the crash dump that appears on control2: crash dump log: https://paste.ubuntu.com/p/MpZ8SwTJ2T/ First 1500 lines of the dump: https://paste.ubuntu.com/p/xkCyp2B8j8/ If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. -----Original Message----- From: Mark Goddard Sent: Wednesday, March 31, 2021 4:14 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. 
When I shut down services on control1, the ansible playbook finishes successfully: > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > … > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > But my hypervisors are down: > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > 172.16.2.31 compute0 > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > In the RMQ logs I see this every 10 seconds: > > > > 172.16.1.132 control2 > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > client unexpectedly closed TCP connection > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? Hi Albert, Could you share the versions of RabbitMQ and erlang in both versions of the container? When initially testing this setup, I think we had 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on sufficiently to become incompatible? Mark > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. 
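For anyone chasing the same symptom, the RabbitMQ and Erlang versions quoted above can be collected straight from the running kolla containers on each controller. A minimal sketch, assuming docker as the container runtime and the default container name "rabbitmq"; the hostnames are examples, so adjust them to your inventory:

    for host in control0-replace control1 control2; do
        ssh $host 'docker exec rabbitmq rabbitmqctl status | grep -E "RabbitMQ|Erlang"'
    done

Comparing the Erlang/OTP build as well as the RabbitMQ version is worthwhile here: mixed Erlang releases across cluster nodes are another common reason nodes refuse to rejoin after a restart, independent of the 3.7.24 vs 3.7.28 difference.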
From Arkady.Kanevsky at dell.com Fri Apr 2 12:45:39 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 2 Apr 2021 12:45:39 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Apr 2 12:47:39 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 2 Apr 2021 12:47:39 +0000 Subject: [Interop][Refstack] No meeting this Friday meeting Message-ID: Team, No meeting this Friday. Happy Holiday! Will update Etherpad. Cheers, Arkady From: prakash RAMCHANDRAN Sent: Thursday, April 1, 2021 1:12 PM To: Jimmy McArthur; Vida Haririan Cc: Ghanshyam Mann; Kanevsky, Arkady; openstack-discuss; Martin Kopec; Goutham Pacha Ravi Subject: Re: [Interop][Refstack] this Friday meeting [EXTERNAL EMAIL] Looks like we can skip this Friday call and sure Arkady - lets cancel it. If you have something urgent we can talk offline - Thanks Prakash On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan > wrote: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks, Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur > wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann > wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady > wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Fri Apr 2 13:05:30 2021 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 2 Apr 2021 13:05:30 +0000 (UTC) Subject: [Interop][Refstack] No meeting this Friday meeting In-Reply-To: References: Message-ID: <1429753648.209610.1617368730565@mail.yahoo.com> Thanks Arkad. A warm springtime and happy Easter to all - Cheers Prakash On Friday, April 2, 2021, 05:47:48 AM PDT, Kanevsky, Arkady wrote: Team, No meeting this Friday. Happy Holiday! Will update Etherpad.   Cheers, Arkady     From: prakash RAMCHANDRAN Sent: Thursday, April 1, 2021 1:12 PM To: Jimmy McArthur; Vida Haririan Cc: Ghanshyam Mann; Kanevsky, Arkady; openstack-discuss; Martin Kopec; Goutham Pacha Ravi Subject: Re: [Interop][Refstack] this Friday meeting   [EXTERNAL EMAIL] Looks like we can skip this Friday call and sure Arkady - lets cancel it. 
If you have something urgent we can talk offline - Thanks Prakash   On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan wrote:     Hi Arkady, Friday is a company holiday and I will be ooo.   Thanks, Vida   On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Fri Apr 2 13:08:01 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Fri, 2 Apr 2021 13:08:01 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? From: Kanevsky, Arkady Sent: Friday, April 2, 2021 8:46 AM To: Jacob Anders ; openstack-discuss Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Apr 2 13:18:50 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 2 Apr 2021 13:18:50 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: Dell Customer Communication - Confidential Expresso server is still needed and must be used at least once a day. From: Braden, Albert Sent: Friday, April 2, 2021 8:08 AM To: openstack-discuss Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] April fool joke?! 
Does this mean that I wasted my time last night designing a rack-mount espresso server? From: Kanevsky, Arkady > Sent: Friday, April 2, 2021 8:46 AM To: Jacob Anders >; openstack-discuss > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Fri Apr 2 15:23:14 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 2 Apr 2021 15:23:14 +0000 Subject: launch VM on volume vs. image In-Reply-To: References: , <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local> Message-ID: <0670B960225633449A24709C291A52524FBBF96C@COM01.performair.local> Tony; In accordance with the second "Important" note listed here: https://docs.ceph.com/en/nautilus/rbd/rbd-openstack/, we use all RAW images. Thank you, Dominic L. Hilsbos, MBA Director ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Tony Liu [mailto:tonyliu0592 at hotmail.com] Sent: Thursday, April 1, 2021 5:19 PM To: Dominic Hilsbos Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com Subject: Re: launch VM on volume vs. image Hi Dominic, What's your image format? Thanks! Tony ________________________________________ From: DHilsbos at performair.com Sent: April 1, 2021 10:47 AM To: tonyliu0592 at hotmail.com Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com Subject: RE: launch VM on volume vs. image Tony / Eddie; I think this is partially dependent on the version of OpenStack running. In our Victoria cloud, a volume created from an image is also done as a snapshot by Ceph, and is completed in seconds. Thank you, Dominic L. Hilsbos, MBA Director ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Eddie Yen [mailto:missile0407 at gmail.com] Sent: Wednesday, March 31, 2021 6:00 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. 
If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu 於 2021年4月1日 週四 上午8:09寫道: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate the high level workflow for those two cases? Thanks! Tony From DHilsbos at performair.com Fri Apr 2 15:26:05 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 2 Apr 2021 15:26:05 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> I agree that an espresso server is necessary, but do we really want it to be rack mounted? What if it starts to leak? Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] Sent: Friday, April 2, 2021 6:19 AM To: Braden, Albert; openstack-discuss Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso machines in Ironic Dell Customer Communication - Confidential Expresso server is still needed and must be used at least once a day. From: Braden, Albert Sent: Friday, April 2, 2021 8:08 AM To: openstack-discuss Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? From: Kanevsky, Arkady > Sent: Friday, April 2, 2021 8:46 AM To: Jacob Anders >; openstack-discuss > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: MailScanner has detected a possible fraud attempt from "urldefense.com" claiming to be https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aadewojo at gmail.com Fri Apr 2 15:32:42 2021 From: aadewojo at gmail.com (Adekunbi Adewojo) Date: Fri, 2 Apr 2021 16:32:42 +0100 Subject: all Octavia LoadBalancer Message-ID: Hi there, I recently deployed a load balancer on our openstack private cloud. I used this manual - https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html to create the load balancer. However, after creating and trying to access it, it returns an error message saying "No server is available to handle this request". Also on the dashboard, "Operating status" shows offline but "provisioning status" shows active. I have two web applications as members of the load balancer and I can individually access those web applications. Could someone please point me in the right direction? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Fri Apr 2 15:48:12 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Fri, 2 Apr 2021 15:48:12 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> Message-ID: In the event of a leak, smart hands will respond with empty cups. Next step is designing the "series of tubes" that will deliver the product to consumers. 507eba0cecad04cd7400002b (900×633) (insider.com) From: DHilsbos at performair.com Sent: Friday, April 2, 2021 11:26 AM To: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. I agree that an espresso server is necessary, but do we really want it to be rack mounted? What if it starts to leak? Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] Sent: Friday, April 2, 2021 6:19 AM To: Braden, Albert; openstack-discuss Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso machines in Ironic Dell Customer Communication - Confidential Expresso server is still needed and must be used at least once a day. From: Braden, Albert > Sent: Friday, April 2, 2021 8:08 AM To: openstack-discuss Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? From: Kanevsky, Arkady > Sent: Friday, April 2, 2021 8:46 AM To: Jacob Anders >; openstack-discuss > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. 
Here's the story: MailScanner has detected a possible fraud attempt from "urldefense.com" claiming to be https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Fri Apr 2 16:09:40 2021 From: mthode at mthode.org (Matthew Thode) Date: Fri, 2 Apr 2021 11:09:40 -0500 Subject: [Requirements][all] Requirements branched, freeze lifted Message-ID: <20210402160940.naqcizgxc7psbkwt@mthode.org> The requirements freeze is now lifted. If your project has not branched please be aware that master moves on (to xena). -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From juliaashleykreger at gmail.com Sat Apr 3 04:04:12 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 2 Apr 2021 21:04:12 -0700 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> Message-ID: I do believe any smart hands called to the rackmount espresso server would be greatly appreciative of this important functionality... even should a leak have developed in the system. On Fri, Apr 2, 2021 at 8:50 AM Braden, Albert wrote: > > In the event of a leak, smart hands will respond with empty cups. > > > > Next step is designing the “series of tubes” that will deliver the product to consumers. > > > > 507eba0cecad04cd7400002b (900×633) (insider.com) > > > > From: DHilsbos at performair.com > Sent: Friday, April 2, 2021 11:26 AM > To: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > I agree that an espresso server is necessary, but do we really want it to be rack mounted? What if it starts to leak? > > > > Dominic L. Hilsbos, MBA > > Director – Information Technology > > Perform Air International Inc. 
> > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] > Sent: Friday, April 2, 2021 6:19 AM > To: Braden, Albert; openstack-discuss > Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > Dell Customer Communication - Confidential > > > > Expresso server is still needed and must be used at least once a day. > > > > From: Braden, Albert > Sent: Friday, April 2, 2021 8:08 AM > To: openstack-discuss > Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > [EXTERNAL EMAIL] > > April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? > > > > From: Kanevsky, Arkady > Sent: Friday, April 2, 2021 8:46 AM > To: Jacob Anders ; openstack-discuss > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > Dell Customer Communication - Confidential > > > > I love this April fool joke > > > > From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM > To: openstack-discuss > Subject: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > [EXTERNAL EMAIL] > > Hi There, > > > > I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: > > > > MailScanner has detected a possible fraud attempt from "urldefense.com" claiming to be https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] > > > > What are your thoughts on this? Please comment in the Story or just reply to thread. > > > > Thank you, > > Jacob > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From satish.txt at gmail.com Sat Apr 3 05:12:00 2021 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 3 Apr 2021 01:12:00 -0400 Subject: ML2/OVN DVR question Message-ID: Folks, I have deployed openstack using ML2/OVN on 1 controller and 2 compute nodes so far everything is working fine, when i configured router it by default used L3HA and i can see active-backup router on both compute nodes. 
currently my all SNAT traffic going out using compute-1 I don't want bottleneck in network so i am looking for DVR deployment and after reading i found tenant VLAN doesn't support DVR https://bugzilla.redhat.com/show_bug.cgi?id=1704596 After doing more research i found that if i set manually external_mac using the following command then my vm using local compute node to send traffic in/out just like DVR instead of centralized design. root at os-infra-1-neutron-ovn-northd-container-24eea9c2:~# ovn-nbctl find NAT type=dnat_and_snat _uuid : 99bdd866-01ed-425d-853b-9362ae8572c9 external_ids : {"neutron:fip_external_mac"="fa:16:3e:2d:7e:fa", "neutron:fip_id"="025a912a-c0ee-4f36-98ad-8992bd825cfc", "neutron:fip_network_id"="9cccf39d-edba-4069-91ef-5f30afbb6604", "neutron:fip_port_id"="70ad361a-b42e-403b-a5c1-4ee39ddf5e31", "neutron:revision_number"="6", "neutron:router_name"=neutron-8af10b06-c8de-4166-9ab1-ca2f775b08a8} external_ip : "10.40.255.10" external_mac : [] logical_ip : "172.168.0.164" logical_port : "70ad361a-b42e-403b-a5c1-4ee39ddf5e31" options : {} type : dnat_and_snat _uuid : c438e7be-5ff4-472e-b053-8d6ed74cd4dc external_ids : {"neutron:fip_external_mac"="fa:16:3e:f5:9f:f0", "neutron:fip_id"="31e8cb44-0acd-453b-a4e6-39f6ab3a6da4", "neutron:fip_network_id"="9cccf39d-edba-4069-91ef-5f30afbb6604", "neutron:fip_port_id"="44a677c5-86ff-4b6b-a046-54e79f79c4cd", "neutron:revision_number"="2", "neutron:router_name"=neutron-8af10b06-c8de-4166-9ab1-ca2f775b08a8} external_ip : "10.40.255.5" external_mac : [] logical_ip : "172.168.0.67" logical_port : "44a677c5-86ff-4b6b-a046-54e79f79c4cd" options : {} type : dnat_and_snat This is how i set external mac from fip_external_mac"="fa:16:3e:2d:7e:fa" in above command. ovn-nbctl set NAT 99bdd866-01ed-425d-853b-9362ae8572c9 external_mac="fa\:16\:3e\:2d\:7e\:fa" How do i make this behavior default for every single VM, i don't want to do this manually to set the external mac address of each FIP? From missile0407 at gmail.com Sat Apr 3 12:24:54 2021 From: missile0407 at gmail.com (Eddie Yen) Date: Sat, 3 Apr 2021 20:24:54 +0800 Subject: launch VM on volume vs. image In-Reply-To: <0670B960225633449A24709C291A52524FBBF96C@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local> <0670B960225633449A24709C291A52524FBBF96C@COM01.performair.local> Message-ID: According to Tony's info, it will take a very long time to create because it needs to convert first. Like what Dominic said, not only wasting time but also wasting the compute node disk space in every creation. Still suggest converting to RAW format first, only a long time to upload the Ceph for the first time. 於 2021年4月2日 週五 下午11:23寫道: > Tony; > > In accordance with the second "Important" note listed here: > https://docs.ceph.com/en/nautilus/rbd/rbd-openstack/, we use all RAW > images. > > Thank you, > > Dominic L. Hilsbos, MBA > Director ? Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -----Original Message----- > From: Tony Liu [mailto:tonyliu0592 at hotmail.com] > Sent: Thursday, April 1, 2021 5:19 PM > To: Dominic Hilsbos > Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com > Subject: Re: launch VM on volume vs. image > > Hi Dominic, > > What's your image format? > > > Thanks! 
> Tony > ________________________________________ > From: DHilsbos at performair.com > Sent: April 1, 2021 10:47 AM > To: tonyliu0592 at hotmail.com > Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com > Subject: RE: launch VM on volume vs. image > > Tony / Eddie; > > I think this is partially dependent on the version of OpenStack running. > In our Victoria cloud, a volume created from an image is also done as a > snapshot by Ceph, and is completed in seconds. > > Thank you, > > Dominic L. Hilsbos, MBA > Director ? Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > From: Eddie Yen [mailto:missile0407 at gmail.com] > Sent: Wednesday, March 31, 2021 6:00 PM > To: Tony Liu > Cc: openstack-discuss at lists.openstack.org > Subject: Re: launch VM on volume vs. image > > Hi Tony, > > In Ceph layer, IME, launching VM on image is creating a snapshot from > source image in Nova ephemeral pool. > If you check the RBD image created in Nova ephemeral pool, all images have > their own parents from glance images. > > For launching VM on volume, it will "copy" the image to volume pool first, > resize to specified disk size, then connect and boot. > Because it's not create a snapshot from image, so it will take much longer. > > Eddie. > > Tony Liu 於 2021年4月1日 週四 上午8:09寫道: > Hi, > > With Ceph as the backend storage, launching a VM on volume takes much > longer than launching on image. Why is that? > Could anyone elaborate the high level workflow for those two cases? > > > Thanks! > Tony > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Sat Apr 3 12:28:43 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 3 Apr 2021 14:28:43 +0200 Subject: [TripleO][ussuri] undercloud install # fails on heat launch Message-ID: Hi all, I am trying to understand why undercloud install does not pass the heat step. I feel that it is related to undercloud.conf file, something wrong there? Or some special char which python do not want to understand? How to enable heat-launcher more verbose output? as last line is starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! when I launch it manuallly it is running for long time... ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!! openstack undercloud install last lines: http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/ undercloud.conf: https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/ -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sat Apr 3 13:48:50 2021 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 3 Apr 2021 09:48:50 -0400 Subject: ML2/OVN DVR question In-Reply-To: References: Message-ID: Update: This is what I am experiencing: https://bugzilla.redhat.com/show_bug.cgi?id=1700043 I have deployed openstack using openstack-ansible on ubuntu 20.04 (openvswitch version 2.13.1), Is there anything i need to do? or this is real BUG? On Sat, Apr 3, 2021 at 1:12 AM Satish Patel wrote: > > Folks, > > I have deployed openstack using ML2/OVN on 1 controller and 2 compute > nodes so far everything is working fine, when i configured router it > by default used L3HA and i can see active-backup router on both > compute nodes. 
currently my all SNAT traffic going out using compute-1 > > I don't want bottleneck in network so i am looking for DVR deployment > and after reading i found tenant VLAN doesn't support DVR > https://bugzilla.redhat.com/show_bug.cgi?id=1704596 > > After doing more research i found that if i set manually external_mac > using the following command then my vm using local compute node to > send traffic in/out just like DVR instead of centralized design. > > > root at os-infra-1-neutron-ovn-northd-container-24eea9c2:~# ovn-nbctl > find NAT type=dnat_and_snat > _uuid : 99bdd866-01ed-425d-853b-9362ae8572c9 > external_ids : {"neutron:fip_external_mac"="fa:16:3e:2d:7e:fa", > "neutron:fip_id"="025a912a-c0ee-4f36-98ad-8992bd825cfc", > "neutron:fip_network_id"="9cccf39d-edba-4069-91ef-5f30afbb6604", > "neutron:fip_port_id"="70ad361a-b42e-403b-a5c1-4ee39ddf5e31", > "neutron:revision_number"="6", > "neutron:router_name"=neutron-8af10b06-c8de-4166-9ab1-ca2f775b08a8} > external_ip : "10.40.255.10" > external_mac : [] > logical_ip : "172.168.0.164" > logical_port : "70ad361a-b42e-403b-a5c1-4ee39ddf5e31" > options : {} > type : dnat_and_snat > > _uuid : c438e7be-5ff4-472e-b053-8d6ed74cd4dc > external_ids : {"neutron:fip_external_mac"="fa:16:3e:f5:9f:f0", > "neutron:fip_id"="31e8cb44-0acd-453b-a4e6-39f6ab3a6da4", > "neutron:fip_network_id"="9cccf39d-edba-4069-91ef-5f30afbb6604", > "neutron:fip_port_id"="44a677c5-86ff-4b6b-a046-54e79f79c4cd", > "neutron:revision_number"="2", > "neutron:router_name"=neutron-8af10b06-c8de-4166-9ab1-ca2f775b08a8} > external_ip : "10.40.255.5" > external_mac : [] > logical_ip : "172.168.0.67" > logical_port : "44a677c5-86ff-4b6b-a046-54e79f79c4cd" > options : {} > type : dnat_and_snat > > > This is how i set external mac from > fip_external_mac"="fa:16:3e:2d:7e:fa" in above command. > > ovn-nbctl set NAT 99bdd866-01ed-425d-853b-9362ae8572c9 > external_mac="fa\:16\:3e\:2d\:7e\:fa" > > How do i make this behavior default for every single VM, i don't want > to do this manually to set the external mac address of each FIP? From tolga at etom.cloud Sat Apr 3 18:17:03 2021 From: tolga at etom.cloud (Tolga Kaprol) Date: Sat, 3 Apr 2021 21:17:03 +0300 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> Message-ID: <70d42f83-b247-6842-edca-926ba9870086@etom.cloud> We are looking forward for a CoffeeScript SDK too. On 3.04.2021 07:04, Julia Kreger wrote: > I do believe any smart hands called to the rackmount espresso server > would be greatly appreciative of this important functionality... even > should a leak have developed in the system. > > On Fri, Apr 2, 2021 at 8:50 AM Braden, Albert > wrote: >> In the event of a leak, smart hands will respond with empty cups. >> >> >> >> Next step is designing the “series of tubes” that will deliver the product to consumers. >> >> >> >> 507eba0cecad04cd7400002b (900×633) (insider.com) >> >> >> >> From: DHilsbos at performair.com >> Sent: Friday, April 2, 2021 11:26 AM >> To: openstack-discuss at lists.openstack.org >> Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. >> >> I agree that an espresso server is necessary, but do we really want it to be rack mounted? What if it starts to leak? >> >> >> >> Dominic L. 
Hilsbos, MBA >> >> Director – Information Technology >> >> Perform Air International Inc. >> >> DHilsbos at PerformAir.com >> >> www.PerformAir.com >> >> >> >> From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] >> Sent: Friday, April 2, 2021 6:19 AM >> To: Braden, Albert; openstack-discuss >> Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> Dell Customer Communication - Confidential >> >> >> >> Expresso server is still needed and must be used at least once a day. >> >> >> >> From: Braden, Albert >> Sent: Friday, April 2, 2021 8:08 AM >> To: openstack-discuss >> Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> [EXTERNAL EMAIL] >> >> April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? >> >> >> >> From: Kanevsky, Arkady >> Sent: Friday, April 2, 2021 8:46 AM >> To: Jacob Anders ; openstack-discuss >> Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. >> >> Dell Customer Communication - Confidential >> >> >> >> I love this April fool joke >> >> >> >> From: Jacob Anders >> Sent: Thursday, April 1, 2021 3:54 PM >> To: openstack-discuss >> Subject: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> [EXTERNAL EMAIL] >> >> Hi There, >> >> >> >> I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: >> >> >> >> MailScanner has detected a possible fraud attempt from "urldefense.com" claiming to be https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] >> >> >> >> What are your thoughts on this? Please comment in the Story or just reply to thread. >> >> >> >> Thank you, >> >> Jacob >> >> The contents of this e-mail message and >> any attachments are intended solely for the >> addressee(s) and may contain confidential >> and/or legally privileged information. If you >> are not the intended recipient of this message >> or if this message has been addressed to you >> in error, please immediately alert the sender >> by reply e-mail and then delete this message >> and any attachments. If you are not the >> intended recipient, you are notified that >> any use, dissemination, distribution, copying, >> or storage of this message or any attachment >> is strictly prohibited. >> >> The contents of this e-mail message and >> any attachments are intended solely for the >> addressee(s) and may contain confidential >> and/or legally privileged information. If you >> are not the intended recipient of this message >> or if this message has been addressed to you >> in error, please immediately alert the sender >> by reply e-mail and then delete this message >> and any attachments. If you are not the >> intended recipient, you are notified that >> any use, dissemination, distribution, copying, >> or storage of this message or any attachment >> is strictly prohibited. 
From donny at fortnebula.com Sun Apr 4 13:57:44 2021 From: donny at fortnebula.com (Donny Davis) Date: Sun, 4 Apr 2021 09:57:44 -0400 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: <70d42f83-b247-6842-edca-926ba9870086@etom.cloud> References: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> <70d42f83-b247-6842-edca-926ba9870086@etom.cloud> Message-ID: What of the front end, what kind of look should we be shooting for in the Bean modal? What kind of point-and-drink experience is expected? We can't expect everyone to handle their caffeine needs based solely on an API. On Sat, Apr 3, 2021 at 2:22 PM Tolga Kaprol wrote: > We are looking forward for a CoffeeScript SDK too. > > On 3.04.2021 07:04, Julia Kreger wrote: > > I do believe any smart hands called to the rackmount espresso server > > would be greatly appreciative of this important functionality... even > > should a leak have developed in the system. > > > > On Fri, Apr 2, 2021 at 8:50 AM Braden, Albert > > wrote: > >> In the event of a leak, smart hands will respond with empty cups. > >> > >> > >> > >> Next step is designing the “series of tubes” that will deliver the > product to consumers. > >> > >> > >> > >> 507eba0cecad04cd7400002b (900×633) (insider.com) > >> > >> > >> > >> From: DHilsbos at performair.com > >> Sent: Friday, April 2, 2021 11:26 AM > >> To: openstack-discuss at lists.openstack.org > >> Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso > machines in Ironic > >> > >> > >> > >> CAUTION: The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > >> > >> I agree that an espresso server is necessary, but do we really want it > to be rack mounted? What if it starts to leak? > >> > >> > >> > >> Dominic L. Hilsbos, MBA > >> > >> Director – Information Technology > >> > >> Perform Air International Inc. > >> > >> DHilsbos at PerformAir.com > >> > >> www.PerformAir.com > >> > >> > >> > >> From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] > >> Sent: Friday, April 2, 2021 6:19 AM > >> To: Braden, Albert; openstack-discuss > >> Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso > machines in Ironic > >> > >> > >> > >> Dell Customer Communication - Confidential > >> > >> > >> > >> Expresso server is still needed and must be used at least once a day. > >> > >> > >> > >> From: Braden, Albert > >> Sent: Friday, April 2, 2021 8:08 AM > >> To: openstack-discuss > >> Subject: RE: [Ironic][RFE] Enable support for espresso machines in > Ironic > >> > >> > >> > >> [EXTERNAL EMAIL] > >> > >> April fool joke?! Does this mean that I wasted my time last night > designing a rack-mount espresso server? > >> > >> > >> > >> From: Kanevsky, Arkady > >> Sent: Friday, April 2, 2021 8:46 AM > >> To: Jacob Anders ; openstack-discuss < > openstack-discuss at lists.openstack.org> > >> Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso > machines in Ironic > >> > >> > >> > >> CAUTION: The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. 
> >> > >> Dell Customer Communication - Confidential > >> > >> > >> > >> I love this April fool joke > >> > >> > >> > >> From: Jacob Anders > >> Sent: Thursday, April 1, 2021 3:54 PM > >> To: openstack-discuss > >> Subject: [Ironic][RFE] Enable support for espresso machines in Ironic > >> > >> > >> > >> [EXTERNAL EMAIL] > >> > >> Hi There, > >> > >> > >> > >> I was discussing this RFE with Julia and we decided it would be great > to get some feedback on it from the wider community, ideally by the end of > today. Apologies for the short notice. Here's the story: > >> > >> > >> > >> MailScanner has detected a possible fraud attempt from "urldefense.com" > claiming to be https://storyboard.openstack.org/#!/story/2008791 [ > storyboard.openstack.org] > >> > >> > >> > >> What are your thoughts on this? Please comment in the Story or just > reply to thread. > >> > >> > >> > >> Thank you, > >> > >> Jacob > >> > >> The contents of this e-mail message and > >> any attachments are intended solely for the > >> addressee(s) and may contain confidential > >> and/or legally privileged information. If you > >> are not the intended recipient of this message > >> or if this message has been addressed to you > >> in error, please immediately alert the sender > >> by reply e-mail and then delete this message > >> and any attachments. If you are not the > >> intended recipient, you are notified that > >> any use, dissemination, distribution, copying, > >> or storage of this message or any attachment > >> is strictly prohibited. > >> > >> The contents of this e-mail message and > >> any attachments are intended solely for the > >> addressee(s) and may contain confidential > >> and/or legally privileged information. If you > >> are not the intended recipient of this message > >> or if this message has been addressed to you > >> in error, please immediately alert the sender > >> by reply e-mail and then delete this message > >> and any attachments. If you are not the > >> intended recipient, you are notified that > >> any use, dissemination, distribution, copying, > >> or storage of this message or any attachment > >> is strictly prohibited. > > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From 379035389 at qq.com Sat Apr 3 05:42:24 2021 From: 379035389 at qq.com (=?gb18030?B?s6/R9M60wdI=?=) Date: Sat, 3 Apr 2021 13:42:24 +0800 Subject: can't build a instance successfully Message-ID: Folks,   I am deploying OpenStack manually and have completed minimal development of the Ussuri. My controller node can find my compute node and confirm there are compute hosts in the database with the instruction:   ” openstack compute service list --service nova-compute”   But when I want to create an instance on the compute node, the status of the compute node just remains “build”. And I try to look for faults from the “/var/log/nova/nova-compute.log” of the compute node:   2021-04-03 00:59:43.379 1432 INFO os_vif [req-2ece8c1c-a96f-4d91-b704-5598c1166016 98049570d7a54e26b8af4eaec9e2eca2 8342df14fa614ad79a08e68f097e4487 - default default] Successfully unplugged vif VIFBridge(active=True,address=fa:16:3e:55:a5:65,bridge_name='brq3169e77c-99',has_traffic_filtering=True,id=28248ef5-6ad6-44bf-b2ce-3fa7ac2371ef,network=Network(3169e77c-9945-454f-9562-6e9a55e1adce),plugin='linux_bridge',port_profile= -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 5C60710B at 0FA9A32F.40006860.png.jpg Type: image/jpeg Size: 38601 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 915DCF8E at 1A5B8D1B.40006860.png.jpg Type: image/jpeg Size: 37368 bytes Desc: not available URL: From berndbausch at mailbox.org Sun Apr 4 14:44:23 2021 From: berndbausch at mailbox.org (Bernd Bausch) Date: Sun, 4 Apr 2021 23:44:23 +0900 Subject: [Neutron] How to provide internet access to tier 2 instance Message-ID: <7aa3968f-dceb-43af-2548-e8ed0f7ac9b1@mailbox.org> I have a pretty standard single-server Victoria Devstack, where I created this network topology: public private backend | | | | /-------\ |-- I1 |- I2 |--|Router1|--| | | \-------/ | | | | /-------\ | | |--|Router2|--| | | \-------/ | | | | I1 and I2 are instances. My question: Is it possible to give I2 access to the external world to install software and download files? I don't need access **to** I2 **from** the external world. My unsuccessful attempt: After adding a static default route via Router1 to Router2, I can ping the internet from Router2's namespace, but not from I2. My guess is that Router1 ignores traffic from networks that are not attached to it. I don't have enough experience to understand the netfilter rules in Router1's namespace, and in any case, rather than tweaking them I need a supported method to give I2 internet access, or the confirmation that it is not possible. Thanks much for any insights and suggestions. Bernd From luke.camilleri at zylacomputing.com Sun Apr 4 18:43:20 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Sun, 4 Apr 2021 20:43:20 +0200 Subject: [victoria][neutron][horizon]l3-agent+port-forwarding Message-ID: Hello everyone, I have enable the L3 extension for port-forwarding and can succesfully port-forward traffic after assigning an additional floating IP to the project. I would like to know if it is possible to enable the corresponding horizon functionality for this extension (port-forwarding) please? Regards From masayuki.igawa at gmail.com Sun Apr 4 23:16:12 2021 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Mon, 05 Apr 2021 08:16:12 +0900 Subject: [qa][hacking] Proposing new core reviewers In-Reply-To: References: Message-ID: Hi, On Wed, Mar 31, 2021, at 05:47, Martin Kopec wrote: > Hi all, > > I'd like to propose Sorin Sbarnea (IRC: zbr) and Radosław Piliszek > (IRC: yoctozepto) to hacking > core. They both are doing a great upstream work among multiple > different projects and > volunteered to help us with maintenance of hacking project as well. > > You can vote/feedback in this email thread. If no objection by 6th of > April, we will add them > to the list. > +1 ! -- Masayuki > Regards, > -- > Martin Kopec > > > From berndbausch at gmail.com Mon Apr 5 00:48:05 2021 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 5 Apr 2021 09:48:05 +0900 Subject: can't build a instance successfully In-Reply-To: References: Message-ID: This is where you should start your troubleshooting: On 4/3/2021 2:42 PM, 朝阳未烈 wrote: > AMQP server on controller:5672 is unreachable Something prevents your compute node from reaching the message queue server on the controller. It could be a network problem, routing problem on the compute node or the controller, message queue server might be down, firewall suddenly blocking port 5672, ... the possibilities are endless. 
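A minimal first-pass check along those lines, run from the compute node; the hostname "controller" and port 5672 come from the error message, so substitute whatever transport_url in nova.conf actually points at:

    grep transport_url /etc/nova/nova.conf
    ping -c 3 controller
    nc -zv controller 5672

and on the controller itself:

    ss -tlnp | grep 5672
    rabbitmqctl status

If the TCP check fails, look at name resolution, routing, and firewall rules (iptables/firewalld) on both ends; if it succeeds, verify the credentials and vhost in transport_url against what the broker knows (rabbitmqctl list_users, rabbitmqctl list_vhosts).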
From thierry at openstack.org Mon Apr 5 08:55:36 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 5 Apr 2021 10:55:36 +0200 Subject: [largescale-sig] Next meeting: April 7, 15utc Message-ID: Hi everyone, Our next Large Scale SIG meeting will be this Wednesday in #openstack-meeting-3 on IRC, at 15UTC. You can doublecheck how it translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210407T15 A number of topics have already been added to the agenda, including discussing our next video meetings and PTG participation. Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From ykarel at redhat.com Mon Apr 5 12:18:37 2021 From: ykarel at redhat.com (Yatin Karel) Date: Mon, 5 Apr 2021 17:48:37 +0530 Subject: [TripleO][ussuri] undercloud install # fails on heat launch In-Reply-To: References: Message-ID: Hi Ruslanas, Looks like the issue is with the version of python3-six and python3-urllib3 installed in your system, i have seen this issue in the past with this mismatch. Can you check versions of both on your system and from which repo those are installed(dnf list installed python3-six python3-urllib3). Seems python3-six is not updated from Ussuri repo, if that's true try again after updated python3-six(dnf update python3-six). Thanks and regards Yatin Karel On Sat, Apr 3, 2021 at 6:00 PM Ruslanas Gžibovskis wrote: > > Hi all, > > I am trying to understand why undercloud install does not pass the heat step. > I feel that it is related to undercloud.conf file, something wrong there? Or some special char which python do not want to understand? > How to enable heat-launcher more verbose output? as last line is starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! when I launch it manuallly it is running for long time... ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!! > > openstack undercloud install last lines: http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/ > undercloud.conf: https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/ > > -- > Ruslanas Gžibovskis > +370 6030 7030 From ruslanas at lpic.lt Mon Apr 5 12:50:01 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 5 Apr 2021 15:50:01 +0300 Subject: [TripleO][ussuri] undercloud install # fails on heat launch In-Reply-To: References: Message-ID: I had: [stack at undercloud ~]$ dnf list installed | egrep 'six|urllib3' python3-six.noarch 1.11.0-8.el8 @anaconda python3-urllib3.noarch 1.25.7-2.el8 @centos-openstack-ussuri [stack at undercloud ~]$ I belie they were latest last week, as I had dnf update in post install script and later fresh install last week. BUT maybe dnf update is commented out at some point... I will check, now I set it to update, as there were some updates (including six package and urllib3). Will check if this solves my issues. Yes, I have faced these issues previously, when part of packages got installed from epel... Now have only ussuri ( by the way it is now updating from ussuri repo) Might be issue, that it did not override it during installation of tipleo package... I will check it after undercloud deployment works. On Mon, 5 Apr 2021 at 15:19, Yatin Karel wrote: > Hi Ruslanas, > > Looks like the issue is with the version of python3-six and > python3-urllib3 installed in your system, i have seen this issue in > the past with this mismatch. 
> Can you check versions of both on your system and from which repo > those are installed(dnf list installed python3-six python3-urllib3). > Seems python3-six is not updated from Ussuri repo, if that's true try > again after updated python3-six(dnf update python3-six). > > > Thanks and regards > Yatin Karel > > > On Sat, Apr 3, 2021 at 6:00 PM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > I am trying to understand why undercloud install does not pass the heat > step. > > I feel that it is related to undercloud.conf file, something wrong > there? Or some special char which python do not want to understand? > > How to enable heat-launcher more verbose output? as last line is > starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! > when I launch it manuallly it is running for long time... > ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!! > > > > openstack undercloud install last lines: > http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/ > > undercloud.conf: > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > > heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/ > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Apr 5 12:56:52 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 5 Apr 2021 15:56:52 +0300 Subject: [TripleO][ussuri] undercloud install # fails on heat launch In-Reply-To: References: Message-ID: Yes, it bypasses this step now, Thank you Yatin! now it is: python3-six.noarch 1.14.0-2.el8 @centos-openstack-ussuri python3-urllib-gssapi.noarch 1.0.1-10.el8 @centos-openstack-ussuri python3-urllib3.noarch 1.25.7-3.el8 @centos-openstack-ussuri [stack at undercloud ~]$ On Mon, 5 Apr 2021 at 15:50, Ruslanas Gžibovskis wrote: > I had: > [stack at undercloud ~]$ dnf list installed | egrep 'six|urllib3' > python3-six.noarch 1.11.0-8.el8 > @anaconda > python3-urllib3.noarch 1.25.7-2.el8 > @centos-openstack-ussuri > [stack at undercloud ~]$ > > I belie they were latest last week, as I had dnf update in post install > script and later fresh install last week. BUT maybe dnf update is commented > out at some point... I will check, now I set it to update, as there were > some updates (including six package and urllib3). > Will check if this solves my issues. > > Yes, I have faced these issues previously, when part of packages got > installed from epel... Now have only ussuri ( by the way it is now updating > from ussuri repo) Might be issue, that it did not override it during > installation of tipleo package... > > I will check it after undercloud deployment works. > > > > On Mon, 5 Apr 2021 at 15:19, Yatin Karel wrote: > >> Hi Ruslanas, >> >> Looks like the issue is with the version of python3-six and >> python3-urllib3 installed in your system, i have seen this issue in >> the past with this mismatch. >> Can you check versions of both on your system and from which repo >> those are installed(dnf list installed python3-six python3-urllib3). >> Seems python3-six is not updated from Ussuri repo, if that's true try >> again after updated python3-six(dnf update python3-six). >> >> >> Thanks and regards >> Yatin Karel >> >> >> On Sat, Apr 3, 2021 at 6:00 PM Ruslanas Gžibovskis >> wrote: >> > >> > Hi all, >> > >> > I am trying to understand why undercloud install does not pass the heat >> step. 
>> > I feel that it is related to undercloud.conf file, something wrong >> there? Or some special char which python do not want to understand? >> > How to enable heat-launcher more verbose output? as last line is >> starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! >> when I launch it manuallly it is running for long time... >> ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!! >> > >> > openstack undercloud install last lines: >> http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/ >> > undercloud.conf: >> https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >> > heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/ >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> >> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Mon Apr 5 13:27:23 2021 From: ykarel at redhat.com (Yatin Karel) Date: Mon, 5 Apr 2021 18:57:23 +0530 Subject: [TripleO][ussuri] undercloud install # fails on heat launch In-Reply-To: References: Message-ID: Hi, On Mon, Apr 5, 2021 at 6:27 PM Ruslanas Gžibovskis wrote: > > Yes, it bypasses this step now, Thank you Yatin! > Good to know. > now it is: > python3-six.noarch 1.14.0-2.el8 @centos-openstack-ussuri > python3-urllib-gssapi.noarch 1.0.1-10.el8 @centos-openstack-ussuri > python3-urllib3.noarch 1.25.7-3.el8 @centos-openstack-ussuri > [stack at undercloud ~]$ > > On Mon, 5 Apr 2021 at 15:50, Ruslanas Gžibovskis wrote: >> >> I had: >> [stack at undercloud ~]$ dnf list installed | egrep 'six|urllib3' >> python3-six.noarch 1.11.0-8.el8 @anaconda >> python3-urllib3.noarch 1.25.7-2.el8 @centos-openstack-ussuri >> [stack at undercloud ~]$ >> Yes, this combination will not work and need python3-six updated. >> I belie they were latest last week, as I had dnf update in post install script and later fresh install last week. BUT maybe dnf update is commented out at some point... I will check, now I set it to update, as there were some updates (including six package and urllib3). >> Will check if this solves my issues. >> Yes dnf update is recommended to avoid such issues. >> Yes, I have faced these issues previously, when part of packages got installed from epel... Now have only ussuri ( by the way it is now updating from ussuri repo) Might be issue, that it did not override it during installation of tipleo package... >> Yes mixing EPEL with OpenStack repos can cause issues as can bring untested updates so should be avoided. >> I will check it after undercloud deployment works. >> >> >> >> On Mon, 5 Apr 2021 at 15:19, Yatin Karel wrote: >>> >>> Hi Ruslanas, >>> >>> Looks like the issue is with the version of python3-six and >>> python3-urllib3 installed in your system, i have seen this issue in >>> the past with this mismatch. >>> Can you check versions of both on your system and from which repo >>> those are installed(dnf list installed python3-six python3-urllib3). >>> Seems python3-six is not updated from Ussuri repo, if that's true try >>> again after updated python3-six(dnf update python3-six). >>> >>> >>> Thanks and regards >>> Yatin Karel >>> >>> >>> On Sat, Apr 3, 2021 at 6:00 PM Ruslanas Gžibovskis wrote: >>> > >>> > Hi all, >>> > >>> > I am trying to understand why undercloud install does not pass the heat step. >>> > I feel that it is related to undercloud.conf file, something wrong there? 
Or some special char which python do not want to understand? >>> > How to enable heat-launcher more verbose output? as last line is starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! when I launch it manuallly it is running for long time... ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!! >>> > >>> > openstack undercloud install last lines: http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/ >>> > undercloud.conf: https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >>> > heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/ >>> > >>> > -- >>> > Ruslanas Gžibovskis >>> > +370 6030 7030 >>> >> >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 Thanks and Regards Yatin Karel From whayutin at redhat.com Mon Apr 5 13:28:20 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 5 Apr 2021 07:28:20 -0600 Subject: [tripleo] centos-stream-8 and container-tools 3.0 Message-ID: FYI.. The container-tools 3.0 module has been published by the centos team. We're seeing it land in: http://dashboard-ci.tripleo.org/d/jwDYSidGz/rpm-dependency-pipeline?viewPanel=22&orgId=1 We will be moving ALL the upstream centos-stream-8 jobs to use container-tools 3.0 now. https://review.opendev.org/c/openstack/tripleo-quickstart/+/784770 https://review.opendev.org/c/openstack/tripleo-quickstart/+/784768 0/ happy monday -------------- next part -------------- An HTML attachment was scrubbed... URL: From aadewojo at gmail.com Mon Apr 5 14:12:22 2021 From: aadewojo at gmail.com (Adekunbi Adewojo) Date: Mon, 5 Apr 2021 15:12:22 +0100 Subject: [all] Octavia LoadBalancer Error Message-ID: Hi there, I recently deployed a load balancer on our openstack private cloud. I used this manual - https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html to create the load balancer. However, after creating and trying to access it, it returns an error message saying "No server is available to handle this request". Also on the dashboard, "Operating status" shows offline but "provisioning status" shows active. I have two web applications as members of the load balancer and I can individually access those web applications. Could someone please point me in the right direction? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Mon Apr 5 14:32:48 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Mon, 5 Apr 2021 14:32:48 +0000 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Message-ID: <362166bb4d484088b1232725a1ecf0a1@ncwmexgp009.CORP.CHARTERCOM.com> It looks like the problem may be caused by incompatible versions of RMQ. How can I work around that? -----Original Message----- From: Braden, Albert Sent: Friday, April 2, 2021 8:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller I opened a bug for this issue: https://bugs.launchpad.net/kolla-ansible/+bug/1922269 -----Original Message----- From: Braden, Albert Sent: Thursday, April 1, 2021 11:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Sorry that was a typo. Stopping RMQ during the removal of the *second* controller is what causes the problem. Is there a way to tell Centos 8 Train to use RMQ 3.7.24 instead of 3.7.28? 
-----Original Message----- From: Braden, Albert Sent: Thursday, April 1, 2021 9:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. This crash dump appears on the controllers that stopped in sympathy: https://paste.ubuntu.com/p/ZDgFgKtQTB/ This appears in the RMQ log: https://paste.ubuntu.com/p/5D2Qjv3H8c/ -----Original Message----- From: Braden, Albert Sent: Wednesday, March 31, 2021 8:31 AM To: openstack-discuss at lists.openstack.org Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Centos7: {rabbit,"RabbitMQ","3.7.24"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, Centos8: {rabbit,"RabbitMQ","3.7.28"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: https://paste.ubuntu.com/p/h9HWdfwmrK/ and the crash dump that appears on control2: crash dump log: https://paste.ubuntu.com/p/MpZ8SwTJ2T/ First 1500 lines of the dump: https://paste.ubuntu.com/p/xkCyp2B8j8/ If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. -----Original Message----- From: Mark Goddard Sent: Wednesday, March 31, 2021 4:14 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > … > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > But my hypervisors are down: > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > 172.16.2.31 compute0 > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > In the RMQ logs I see this every 10 seconds: > > > > 172.16.1.132 control2 > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > client unexpectedly closed TCP connection > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? Hi Albert, Could you share the versions of RabbitMQ and erlang in both versions of the container? When initially testing this setup, I think we had 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on sufficiently to become incompatible? Mark > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From ruslanas at lpic.lt Mon Apr 5 19:42:53 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 5 Apr 2021 22:42:53 +0300 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud Message-ID: Hi all, While deploying undercloud, always fails on puppet-container-neutron configuration, it fails with missing ml2 ovs_driver plugin... 
downloading them using: openstack tripleo container image prepare default --output-env-file containers-prepare-parameters.yaml grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log http://paste.openstack.org/show/804180/ builddir/install-undercloud.log ( contains info about container-puppet-neutron ) http://paste.openstack.org/show/804181/ undercloud.conf: https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf dnf list installed http://paste.openstack.org/show/804182/ -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Apr 5 21:38:17 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 05 Apr 2021 16:38:17 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 8th at 1500 UTC. Message-ID: <178a3f8d599.cf94285387564.6978079671458448803@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for April 8th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, April 7th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From fungi at yuggoth.org Mon Apr 5 22:31:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 5 Apr 2021 22:31:17 +0000 Subject: [all][elections][tc] TC Vacancy Special Election Nominations end this week Message-ID: <20210405223116.yw7mqfahtsqps6sp@yuggoth.org> Just a reminder, nominations for one vacant OpenStack TC (Technical Committee) position are only open for three more days, until Apr 08, 2021 23:45 UTC. All nominations must be submitted as a text file to the openstack/election repository as explained at https://governance.openstack.org/election/#how-to-submit-a-candidacy Please make sure to follow the candidacy file naming convention: candidates/xena// (for example, "candidates/xena/TC/stacker at example.org"). The name of the file should match an email address for your current OpenStack Foundation Individual Membership. Take this opportunity to ensure that your OSF member profile contains current information: https://www.openstack.org/profile/ Any OpenStack Foundation Individual Member can propose their candidacy for the vacant seat on the Technical Committee. This TC vacancy special election will be held from Apr 8, 2021 23:45 UTC through to Apr 15, 2021 23:45 UTC. The electorate for the TC election are the OpenStack Foundation Individual Members who have a code contribution to one of the official teams over the Victoria to Wallaby timeframe, Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC, as well as any Extra ATCs who are acknowledged by the TC. Note that the contribution qualifying period for this special election is being kept the same as what would have been used for the original TC election. The four already elected TC members for this term are listed as candidates in the special election, but will not appear on any resulting poll as they have already been officially elected. Only new candidates in addition to the four elected TC members for this term will appear on a subsequent poll for the TC vacancy special election. 
Please find below the timeline:

nomination starts @ Mar 25, 2021 23:45 UTC
nomination ends @ Apr 08, 2021 23:45 UTC
elections start @ Apr 08, 2021 23:45 UTC
elections end @ Apr 15, 2021 23:45 UTC

Shortly after election officials approve candidates, they will be listed on the https://governance.openstack.org/election/ page.

If you have any questions please be sure to either ask them on the mailing list or to the elections officials: https://governance.openstack.org/election/#election-officials

--
Jeremy Stanley on behalf of the OpenStack Technical Elections Officials
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:

From gmann at ghanshyammann.com Tue Apr 6 01:00:24 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 05 Apr 2021 20:00:24 -0500
Subject: [qa][heat][stable] grenade jobs with tempest plugins on stable/train broken
Message-ID: <178a4b1e326.db78f8f289143.8139427571865552389@ghanshyammann.com>

Hello Everyone,

I capped stable/stein to use Tempest 26.0.0, which means grenade jobs that run tests from tempest plugins started using Tempest 26.0.0. But the constraints used in the Tempest virtual env are mismatched between when the Tempest virtual env is created and when the tests are run from the grenade or grenade plugin scripts. Because two different sets of constraints are used, tox recreates the Tempest virtual env, which removes all the already-installed tempest plugins and their deps, and the smoke tests then fail to run.

This constraints mismatch issue occurred in stable/train and I standardized these for devstack-based jobs - https://review.opendev.org/q/topic:%2522standardize-tempest-tox-constraints%2522+status:merged

But this issue is occurring for grenade jobs that do not run the tests via the run-tempest role (the run-tempest role takes care of the constraints). Rabi observed this in the heat grenade jobs today. I have reported this as a bug in LP[1] and am standardizing it from the master branch so that this kind of issue does not occur again when any stable branch starts using a non-master Tempest.

Please don't recheck if your grenade job is failing with the same issue and wait for the updates on this ML thread.

[1] https://bugs.launchpad.net/grenade/+bug/1922597

-gmann

From skaplons at redhat.com Tue Apr 6 06:28:15 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Tue, 06 Apr 2021 08:28:15 +0200
Subject: [victoria][neutron][horizon]l3-agent+port-forwarding
In-Reply-To:
References:
Message-ID: <2626442.APKxhzko2K@p1>

Hi,

Dnia niedziela, 4 kwietnia 2021 20:43:20 CEST Luke Camilleri pisze:
> Hello everyone, I have enable the L3 extension for port-forwarding and
> can succesfully port-forward traffic after assigning an additional
> floating IP to the project.
>
> I would like to know if it is possible to enable the corresponding
> horizon functionality for this extension (port-forwarding) please?
>
> Regards

I'm not a Horizon expert so I may be wrong here, but I don't think there is anything regarding port forwarding support in Horizon currently.

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: From gagehugo at gmail.com Tue Apr 6 07:13:43 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 6 Apr 2021 02:13:43 -0500 Subject: [openstack-helm] Meeting cancelled Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting today April 6th, the meeting is cancelled. Our next IRC meeting will be April 13th. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricolin at ricolky.com Tue Apr 6 07:43:29 2021 From: ricolin at ricolky.com (Rico Lin) Date: Tue, 6 Apr 2021 15:43:29 +0800 Subject: [Multi-arch SIG] success to run full tempest tests on Arm64 env. What's next? Message-ID: Dear all, I'm glad to tell everyone that we finally succeeded to build Devstack and run full tempest tests on it [1]. As the test build result shows [2], the job is stable enough to run. For earlier 13+ job results. (will do more recheck later) One Timeout, and two failure cases (Which are targeted by increase `BUILD_TIMEOUT` to 900 secs). The job `devstack-platform-arm64` runs around 2.22 hrs to 3.04 hrs, which is near two times slower than on x86 environment. It's not a solid number as the performance might change a lot with different cloud environments and different hardware. But I think this is a great chance for us to make more improvements. At least now we have a test job ready (Not merged yet) for you to do experiments with. And we should also add suggestions to Multi-arch SIG documentation so once we make improvements, other architecture can share the efforts too. *So please join us if you are also interested in help tuning the performance :)* *Also, we need to discuss what kind of way we should run this job, should we separate it into small jobs? Should we run it as a periodic job? voting?* *On the other hand, I would hope to collect more ideas on how we should move forward.* *Please provide your idea for this on our Xena PTG etherpad* *https://etherpad.opendev.org/p/xena-ptg-multi-arch-sig * Our PTG is scheduled around 4/20 Tuesday from 07:00-08:00 and 15:00-16:00 (UTC time) If you plan to join our PTG, feel free to update our PTG etherpad to suggest other topics. And our Meeting time is scheduled biweekly on Tuesday (host on demand) Please join our IRC #openstack-multi-arch [1] https://review.opendev.org/c/openstack/devstack/+/708317 [2] https://zuul.openstack.org/builds?job_name=devstack-platform-arm64+ *Rico Lin* OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Apr 6 08:03:01 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 6 Apr 2021 09:03:01 +0100 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller In-Reply-To: <362166bb4d484088b1232725a1ecf0a1@ncwmexgp009.CORP.CHARTERCOM.com> References: <362166bb4d484088b1232725a1ecf0a1@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: On Mon, 5 Apr 2021 at 15:33, Braden, Albert wrote: > > It looks like the problem may be caused by incompatible versions of RMQ. How can I work around that? Hi Albert, thanks for testing this procedure and reporting issues. I suggest we continue the discussion on the bug report. 
https://bugs.launchpad.net/kolla-ansible/+bug/1922269 Mark > > -----Original Message----- > From: Braden, Albert > Sent: Friday, April 2, 2021 8:34 AM > To: 'openstack-discuss at lists.openstack.org' > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > I opened a bug for this issue: > > https://bugs.launchpad.net/kolla-ansible/+bug/1922269 > > -----Original Message----- > From: Braden, Albert > Sent: Thursday, April 1, 2021 11:34 AM > To: 'openstack-discuss at lists.openstack.org' > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > Sorry that was a typo. Stopping RMQ during the removal of the *second* controller is what causes the problem. > > Is there a way to tell Centos 8 Train to use RMQ 3.7.24 instead of 3.7.28? > > -----Original Message----- > From: Braden, Albert > Sent: Thursday, April 1, 2021 9:34 AM > To: 'openstack-discuss at lists.openstack.org' > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. This crash dump appears on the controllers that stopped in sympathy: > > https://paste.ubuntu.com/p/ZDgFgKtQTB/ > > This appears in the RMQ log: > > https://paste.ubuntu.com/p/5D2Qjv3H8c/ > > -----Original Message----- > From: Braden, Albert > Sent: Wednesday, March 31, 2021 8:31 AM > To: openstack-discuss at lists.openstack.org > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > Centos7: > > {rabbit,"RabbitMQ","3.7.24"}, > "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, > > Centos8: > > {rabbit,"RabbitMQ","3.7.28"}, > "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, > > When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: > > https://paste.ubuntu.com/p/h9HWdfwmrK/ > > and the crash dump that appears on control2: > > crash dump log: > > https://paste.ubuntu.com/p/MpZ8SwTJ2T/ > > First 1500 lines of the dump: > > https://paste.ubuntu.com/p/xkCyp2B8j8/ > > If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. > > -----Original Message----- > From: Mark Goddard > Sent: Wednesday, March 31, 2021 4:14 AM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. 
> > On Tue, 30 Mar 2021 at 13:41, Braden, Albert > wrote: > > > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... > > > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > > > 'rabbit at chrnc-void-testupgrade-control-1', > > > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > > > 'rabbit at chrnc-void-testupgrade-control-1', > > > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > > > {partitions,[]}, > > > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: > > > > > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > > > … > > > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
> > > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > > > 'rabbit at chrnc-void-testupgrade-control-1', > > > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > > > {partitions,[]}, > > > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > > > > > But my hypervisors are down: > > > > > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > > > > > 172.16.2.31 compute0 > > > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > > > > > In the RMQ logs I see this every 10 seconds: > > > > > > > > 172.16.1.132 control2 > > > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > client unexpectedly closed TCP connection > > > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? > > Hi Albert, > > Could you share the versions of RabbitMQ and erlang in both versions > of the container? When initially testing this setup, I think we had > 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on > sufficiently to become incompatible? > > Mark > > > > > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > > > > > The contents of this e-mail message and > > any attachments are intended solely for the > > addressee(s) and may contain confidential > > and/or legally privileged information. If you > > are not the intended recipient of this message > > or if this message has been addressed to you > > in error, please immediately alert the sender > > by reply e-mail and then delete this message > > and any attachments. If you are not the > > intended recipient, you are notified that > > any use, dissemination, distribution, copying, > > or storage of this message or any attachment > > is strictly prohibited. > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From missile0407 at gmail.com Tue Apr 6 08:47:50 2021 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 6 Apr 2021 16:47:50 +0800 Subject: [kolla][glance] Few questions about images. In-Reply-To: References: Message-ID: Update: For question 1, I found that the publicize image feature still work, but need to do this on command line by using openstack image set. 
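For reference, the command that worked is along these lines (the ID is just a placeholder):

  openstack image set --public <IMAGE-OR-SNAPSHOT-ID>

If the image should only be visible to a few projects rather than everyone, sharing it might also be an option (my understanding, we have not relied on it yet; the receiving project has to accept the image before it shows up in their list):

  openstack image set --shared <IMAGE-OR-SNAPSHOT-ID>
  openstack image add project <IMAGE-OR-SNAPSHOT-ID> <PROJECT-ID>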
Also found "403 Forbidden" error [1] is threw by Horizon when trying to modify visibility and image properties on private image/snapshot. Does someone know how to disable this limitation in Horizon? For question 2 is totally solved. The workaround in Sysprep is working actually. [1] The error message shown in Horizon is: "Forbidden. Insufficient permissions of the requested operation" Eddie Yen 於 2021年3月26日 週五 上午7:50寫道: > Hi everyone, I want to ask about the image operating permission & > Windows images issue we met since we can't find any answers on the > internet. > > 1. Now we're still using Rocky with Ceph as storage. Sometimes we > need to re-pack the image on the Openstack. We used to save as > snapshot (re-pack by Nova ephemeral VM) or upload to image (re-pack > by volume), then set snapshot/image's visibility to the public. But since > Rocky, we can't do this anymore because when we try to set public, > Horizon always shows "not enough permission" error. > The workaround we're using for now is creating a nova snapshot after > re-pack, download the snapshot, then upload the snapshot again as > public. But it's utterly wasting time if the images are huge. So we want to > know how to unleash this limitation can let us just change snapshot to > public at least. > > 2. Openstack uses virtio as a network device by default, so we always > install a virtio driver when packing Windows images. As the network > performance issue in GSO/TSO enablement, we also need to disable > them in device properties. But since Windows 10 2004 build (my thought) > device properties always reset these settings after Sysprep. We found > there's a workaround [1] to solve this issue, but may not work sometimes. > Is there a better way to solve this issue? > > Many thanks, > Eddie. > > [1] PersistAllDeviceInstalls | Microsoft Docs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Tue Apr 6 09:24:26 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Tue, 6 Apr 2021 11:24:26 +0200 Subject: [keystone][horizon][Victoria] scope-based policy problems Message-ID: <0562401B-5118-4647-85CC-BC8A26080789@poczta.onet.pl> Hi, I have some questions about horizon and keystone policies : Im trying to achieve "domain_admin" role with the ability to add/remove/update projects in particular domain and add/update/remove users in the same domain (and of course be able to see instances, networks, etc. in this domain). As described here http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021105.html : "Horizon does not support the system-scoped token yet as of Victoria and coming Wallaby.” So there is no way to write json/not scope-based policy for horizon and scope-based policy for keystone, because it will not work due to lack of scope information in horizon’s token? So the question is how the policies should look like? Is it possible at all to achieve such „domain admin” role? How in different way allow one user to add/remove/update projects and add/update/remove users? Another thing is, that if I use something like this in horizon/keystone policy: "identity:list_users_in_group": "rule:admin_required or (role:domain_admin and domain_id:%(domain_id)s)” then (besides of that domain users) there is also admin account in the list (so I assume admin „belongs” to all domains) - how to prevent newly created domain_admin from seeing admin account and making changes to that account? It really holds up my whole project, can you help mi guys? 
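For completeness, this is roughly the kind of policy file I have been experimenting with, modelled on the old policy.v3cloudsample.json pattern (only a sketch - the "domain_admin" role name and the target attributes are my assumptions, not a configuration I have verified on Victoria):

  "identity:create_project": "rule:admin_required or (role:domain_admin and domain_id:%(project.domain_id)s)"
  "identity:update_project": "rule:admin_required or (role:domain_admin and domain_id:%(target.project.domain_id)s)"
  "identity:delete_project": "rule:admin_required or (role:domain_admin and domain_id:%(target.project.domain_id)s)"
  "identity:list_projects": "rule:admin_required or (role:domain_admin and domain_id:%(domain_id)s)"
  "identity:create_user": "rule:admin_required or (role:domain_admin and domain_id:%(user.domain_id)s)"
  "identity:update_user": "rule:admin_required or (role:domain_admin and domain_id:%(target.user.domain_id)s)"
  "identity:delete_user": "rule:admin_required or (role:domain_admin and domain_id:%(target.user.domain_id)s)"
  "identity:list_users": "rule:admin_required or (role:domain_admin and domain_id:%(domain_id)s)"

The same rules would also have to go into the keystone policy file that Horizon reads, which is where it seems to break down for me.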
Best regards Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Tue Apr 6 09:57:57 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 6 Apr 2021 10:57:57 +0100 Subject: [all] Gate resources and performance In-Reply-To: References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com> Message-ID: On Thu, 1 Apr 2021 at 18:53, Dan Smith wrote: > > > I'll try to circle back and generate a new set of numbers with > > my script, and also see if I can get updated numbers from Clark on the > > overall percentages. > > Okay, I re-ran the numbers this morning and got updated 30-day stats > from Clark. Here's what I've got (delta from the last report in parens): > > Project % of total Node Hours Nodes > ---------------------------------------------- > 1. Neutron 23% 34h (-4) 30 (-2) > 2. TripleO 18% 17h (-14) 14 (-6) > 3. Nova 7% 22h (+1) 25 (-0) > 4. Kolla 6% 10h (-2) 18 (-0) > 5. OSA 6% 19h (-3) 16 (-1) > > Definitely a lot of improvement from tripleo, so thanks for that! > Neutron rose to the top and is still very hefty. I think Nova's 1-hr > rise is probably just noise given the node count didn't change. I think > we're still waiting on zuulv3 conversion of the grenade multinode job so > we can drop the base grenade job, which will make things go down. Thanks Dan, I've recently introduced a standalone nova-live-migration-ceph job that might be the cause of the additional hour for Nova. zuul: Add nova-live-migration-ceph job https://review.opendev.org/c/openstack/nova/+/768466 While this adds extra load it should be easier to maintain over the previous all in one live migration job that restacked the environment by making direct calls into various devstack plugins. Regarding the switch to the multinode grenade job I'm still working through that below and wanted to land it once Xena is formally open: zuul: Replace grenade and nova-grenade-multinode with grenade-multinode https://review.opendev.org/c/openstack/nova/+/778885/ This series also includes some attempted cleanup of our irrelevant-files for a few jobs that will hopefully reduce our numbers further. Plenty of work left to do here throughout Xena but it's a start. Cheers, Lee From aj at suse.com Tue Apr 6 10:08:21 2021 From: aj at suse.com (Andreas Jaeger) Date: Tue, 6 Apr 2021 12:08:21 +0200 Subject: [docs][release] Creating Xena's landing pages In-Reply-To: References: Message-ID: <18be6047-4455-91ec-c4cf-5bff341ba8bc@suse.com> On 02.04.21 11:51, Herve Beraud wrote: > Hello Docs team, > > This is a friendly reminder from the release team, I think that it > should be safe for you to apply your process to create the new release > series landing pages for docs.openstack.org . > > All stable branches are now created. > > If you want youcan do the work before the final release date to avoid > having to synchronize with the release team on that day. I've pushed a change for adding links for Wallaby pages that are already live and adding the Xena pages: https://review.opendev.org/c/openstack/openstack-manuals/+/784909 Note that at release time, we still need to push a change to mark Wallaby as released on docs.o.o: https://review.opendev.org/c/openstack/openstack-manuals/+/784910 Since many projects do not have /wallaby/ docs published, the wallaby index pages miss them. Once projects have published their docs (normally happens with every commit, thus initial merge like the .gitreview change should be fine), they can send updates to openstack-manuals to link to them. 
Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From gthiemonge at redhat.com Tue Apr 6 10:09:06 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Tue, 6 Apr 2021 12:09:06 +0200 Subject: [all] Octavia LoadBalancer Error In-Reply-To: References: Message-ID: On Mon, Apr 5, 2021 at 4:14 PM Adekunbi Adewojo wrote: > Hi there, > > I recently deployed a load balancer on our openstack private cloud. I used > this manual - > https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html > to create the load balancer. However, after creating and trying to access > it, it returns an error message saying "No server is available to handle > this request". Also on the dashboard, "Operating status" shows offline but > "provisioning status" shows active. I have two web applications as members > of the load balancer and I can individually access those web applications. > Hi, provisioning status ACTIVE shows the load balancer was successfully created but operating status OFFLINE indicates that the amphora is unable to communicate with the Octavia health-manager service. Basically, the amphora should report its status to the hm service using UDP messages (on a controller, you can dump the UDP packets on the o-hm0 interface), do you see any errors in the hm logs? I would recommend enabling the debug messages in the health-manager, and to check the logs, you should see messages about incoming packets: Apr 06 10:02:20 devstack2 octavia-health-manager[1747820]: DEBUG octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from ('192.168.0.73', 11273) {{(pid=1747857) dorecv /opt/stack/octavia/octavia/amphorae/drivers/health/heartbeat_udp.py:95}} The "No server is available to handle this request" response from the amphora indicates that haproxy is correctly running on the VIP interface but it doesn't have any member servers. Perhaps fixing the health-manager issue will help to understand why the traffic is not dispatched to the members. > Could someone please point me in the right direction? > > Thanks. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Apr 6 10:56:38 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 6 Apr 2021 11:56:38 +0100 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller In-Reply-To: References: Message-ID: On 01/04/2021 16:34, Braden, Albert wrote: > Sorry that was a typo. Stopping RMQ during the removal of the *second* controller is what causes the problem. > > Is there a way to tell Centos 8 Train to use RMQ 3.7.24 instead of 3.7.28? you could presumable hardcode the old images for the rabbitmq container so that you continue to use the old version if its compatiable until you have done the rest of the upgrade. then bounce all the rabbit containers from centos 7 to centos 8 at a later date. 
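something along these lines in /etc/kolla/globals.yml should let you pin it (rough sketch only, i have not tested it and the tag value is just a placeholder - you would need whichever tag still points at the centos7-built 3.7.24 image):

  rabbitmq_image: "kolla/centos-binary-rabbitmq"
  rabbitmq_tag: "<last-known-good-train-centos7-tag>"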
im not sure what other implications that would have but you can do it by setting rabbitmq_image https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/rabbitmq/defaults/main.yml#L54 > > -----Original Message----- > From: Braden, Albert > Sent: Thursday, April 1, 2021 9:34 AM > To: 'openstack-discuss at lists.openstack.org' > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. This crash dump appears on the controllers that stopped in sympathy: > > https://paste.ubuntu.com/p/ZDgFgKtQTB/ > > This appears in the RMQ log: > > https://paste.ubuntu.com/p/5D2Qjv3H8c/ > > -----Original Message----- > From: Braden, Albert > Sent: Wednesday, March 31, 2021 8:31 AM > To: openstack-discuss at lists.openstack.org > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > Centos7: > > {rabbit,"RabbitMQ","3.7.24"}, > "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, > > Centos8: > > {rabbit,"RabbitMQ","3.7.28"}, > "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, > > When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: > > https://paste.ubuntu.com/p/h9HWdfwmrK/ > > and the crash dump that appears on control2: > > crash dump log: > > https://paste.ubuntu.com/p/MpZ8SwTJ2T/ > > First 1500 lines of the dump: > > https://paste.ubuntu.com/p/xkCyp2B8j8/ > > If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. > > -----Original Message----- > From: Mark Goddard > Sent: Wednesday, March 31, 2021 4:14 AM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > On Tue, 30 Mar 2021 at 13:41, Braden, Albert > wrote: >> I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: >> >> >> >> https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 >> >> >> >> I used the instructions here to successfully remove and replace control0 with a Centos8 box >> >> >> >> https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers >> >> >> >> After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com >> >> >> >> (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status >> >> Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... 
>> >> [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', >> >> 'rabbit at chrnc-void-testupgrade-control-0-replace', >> >> 'rabbit at chrnc-void-testupgrade-control-1', >> >> 'rabbit at chrnc-void-testupgrade-control-2']}]}, >> >> {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', >> >> 'rabbit at chrnc-void-testupgrade-control-1', >> >> 'rabbit at chrnc-void-testupgrade-control-2']}, >> >> {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, >> >> {partitions,[]}, >> >> {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, >> >> {'rabbit at chrnc-void-testupgrade-control-1',[]}, >> >> {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] >> >> >> >> After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: >> >> >> >> kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 >> >> … >> >> control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 >> >> >> >> After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: >> >> >> >> (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status >> >> Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. >> >> >> >> If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: >> >> >> >> (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status >> >> Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
>> >> [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', >> >> 'rabbit at chrnc-void-testupgrade-control-0-replace', >> >> 'rabbit at chrnc-void-testupgrade-control-1', >> >> 'rabbit at chrnc-void-testupgrade-control-2']}]}, >> >> {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', >> >> 'rabbit at chrnc-void-testupgrade-control-0-replace']}, >> >> {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, >> >> {partitions,[]}, >> >> {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, >> >> {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] >> >> >> >> But my hypervisors are down: >> >> >> >> (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll >> >> +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ >> >> | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | >> >> +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ >> >> | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | >> >> | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | >> >> | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | >> >> +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ >> >> >> >> When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: >> >> >> >> 172.16.2.31 compute0 >> >> 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out >> >> 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. >> >> 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out >> >> >> >> In the RMQ logs I see this every 10 seconds: >> >> >> >> 172.16.1.132 control2 >> >> [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 >> >> 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): >> >> client unexpectedly closed TCP connection >> >> 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) >> >> 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e >> >> 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' >> >> 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): >> >> >> >> Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? > Hi Albert, > > Could you share the versions of RabbitMQ and erlang in both versions > of the container? When initially testing this setup, I think we had > 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on > sufficiently to become incompatible? > > Mark >> >> >> I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. >> >> >> >> The contents of this e-mail message and >> any attachments are intended solely for the >> addressee(s) and may contain confidential >> and/or legally privileged information. If you >> are not the intended recipient of this message >> or if this message has been addressed to you >> in error, please immediately alert the sender >> by reply e-mail and then delete this message >> and any attachments. If you are not the >> intended recipient, you are notified that >> any use, dissemination, distribution, copying, >> or storage of this message or any attachment >> is strictly prohibited. > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. 
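For anyone debugging the same kind of failure: a quick way to answer Mark's question about the RabbitMQ and Erlang versions is to ask the running containers directly. A rough sketch only (the container name "rabbitmq" assumes a default kolla-ansible deployment, adjust if yours differs):

  docker exec rabbitmq rabbitmqctl status | head -n 20
  docker exec rabbitmq rabbitmqctl eval 'erlang:system_info(otp_release).'

The status output prints both the RabbitMQ and the Erlang version near the top; the eval call returns only the OTP release. Running the same two commands against the old CentOS 7 based container and the new CentOS 8 based one shows quickly whether the versions on the two sides have drifted far enough apart to break clustering.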
From luke.camilleri at zylacomputing.com Tue Apr 6 11:20:08 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Tue, 6 Apr 2021 13:20:08 +0200 Subject: [victoria][neutron][horizon]l3-agent+port-forwarding In-Reply-To: <2626442.APKxhzko2K@p1> References: <2626442.APKxhzko2K@p1> Message-ID: <936caa99-a70a-afc1-37d1-3d8f4fe46ec1@zylacomputing.com> That is what I thought until I saw the image below: https://openstackdocs.safeswisscloud.ch/en/howto/ht-port-forward.html https://cdn.discordapp.com/attachments/821001329715183647/826146201774587944/unknown.png On 06/04/2021 08:28, Slawek Kaplonski wrote: > Hi, > > Dnia niedziela, 4 kwietnia 2021 20:43:20 CEST Luke Camilleri pisze: >> Hello everyone, I have enable the L3 extension for port-forwarding and >> can succesfully port-forward traffic after assigning an additional >> floating IP to the project. >> >> I would like to know if it is possible to enable the corresponding >> horizon functionality for this extension (port-forwarding) please? >> >> Regards > I'm not Horizon expert so I may be wrong here but I don't think there is > anything regarding port forwarding support in Horizon currently. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Tue Apr 6 11:21:17 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 6 Apr 2021 13:21:17 +0200 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal Message-ID: Hi, one of our jobs (python-tempestconf project) is frequently failing with POST_FAILURE [1] during the following task: export-devstack-journal : Export journal I'm bringing this to a broader audience as we're not sure where exactly the issue might be. Did you encounter a similar issue lately or in the past? [1] https://zuul.opendev.org/t/openstack/builds?job_name=python-tempestconf-tempest-devstack-admin-plugins&project=osf/python-tempestconf Thanks for any advice, -- Martin Kopec -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue Apr 6 11:39:25 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 6 Apr 2021 08:39:25 -0300 Subject: [victoria][neutron][horizon]l3-agent+port-forwarding In-Reply-To: <936caa99-a70a-afc1-37d1-3d8f4fe46ec1@zylacomputing.com> References: <2626442.APKxhzko2K@p1> <936caa99-a70a-afc1-37d1-3d8f4fe46ec1@zylacomputing.com> Message-ID: It was developed by us. The first step to implement this one is the port range support in Neutron; the spec has been accepted, and now we are working to create the patches. Afterwards, we will push this Horizon patch as well. On Tue, Apr 6, 2021 at 8:20 AM Luke Camilleri < luke.camilleri at zylacomputing.com> wrote: > That is what I thought until I saw the image below: > > https://openstackdocs.safeswisscloud.ch/en/howto/ht-port-forward.html > > [image: > https://cdn.discordapp.com/attachments/821001329715183647/826146201774587944/unknown.png] > On 06/04/2021 08:28, Slawek Kaplonski wrote: > > Hi, > > Dnia niedziela, 4 kwietnia 2021 20:43:20 CEST Luke Camilleri pisze: > > Hello everyone, I have enable the L3 extension for port-forwarding and > can succesfully port-forward traffic after assigning an additional > floating IP to the project. > > I would like to know if it is possible to enable the corresponding > horizon functionality for this extension (port-forwarding) please? 
> > Regards > > I'm not Horizon expert so I may be wrong here but I don't think there is > anything regarding port forwarding support in Horizon currently. > > > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.camilleri at zylacomputing.com Tue Apr 6 11:41:41 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Tue, 6 Apr 2021 13:41:41 +0200 Subject: [victoria][neutron][horizon]l3-agent+port-forwarding In-Reply-To: References: <2626442.APKxhzko2K@p1> <936caa99-a70a-afc1-37d1-3d8f4fe46ec1@zylacomputing.com> Message-ID: <92c2d3be-6563-93ba-48ab-170b7c2c8fac@zylacomputing.com> Thanks for the update On 06/04/2021 13:39, Rafael Weingärtner wrote: > It was developed by us. The first step to implement this one is the > port range support in Neutron; the spec has been accepted, and now we > are working to create the patches. Afterwards, we will push this > Horizon patch as well. > > On Tue, Apr 6, 2021 at 8:20 AM Luke Camilleri > > wrote: > > That is what I thought until I saw the image below: > > https://openstackdocs.safeswisscloud.ch/en/howto/ht-port-forward.html > > > > https://cdn.discordapp.com/attachments/821001329715183647/826146201774587944/unknown.png > > On 06/04/2021 08:28, Slawek Kaplonski wrote: >> Hi, >> >> Dnia niedziela, 4 kwietnia 2021 20:43:20 CEST Luke Camilleri pisze: >>> Hello everyone, I have enable the L3 extension for port-forwarding and >>> can succesfully port-forward traffic after assigning an additional >>> floating IP to the project. >>> >>> I would like to know if it is possible to enable the corresponding >>> horizon functionality for this extension (port-forwarding) please? >>> >>> Regards >> I'm not Horizon expert so I may be wrong here but I don't think there is >> anything regarding port forwarding support in Horizon currently. >> > > > -- > Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Apr 6 12:10:05 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 06 Apr 2021 12:10:05 +0000 Subject: [cinder][horizon] Ussuri: Horizon shows error for managed volume(s) Message-ID: <20210406121005.Horde.yHFPRIBoNuvRq8hpTixt7kS@webmail.nde.ag> Hi *, I'm struggling with a dashboard error that seems to be wrong, I can't seem to find any existing reports but I could have simply missed it so please point me to any existing reports about that. Anyway, a user imported two volumes from Ceph via 'cinder manage' in our Train version of OpenStack, we upgraded last week to Ussuri. One of those two volumes is bootable and the instance is up and running with both volumes (extended volume group). When clicking on the instance details the Horizon dashboard shows a red window saying "Failed to get attached volume." The dashboard log says: [Tue Apr 06 09:29:56.197238 2021] [wsgi:error] [pid 10140] [remote IP:59482] WARNING horizon.exceptions Recoverable error: volume_image_metadata But I can see both volumes in the details page, so the message seems incorrect. I searched the logs for this error message from before the upgrade but couldn't find any. So this seems to be new in Ussuri, I assume. Does anyone have the same experience with managed volumes? Thanks and best regards, Eugen From amotoki at gmail.com Tue Apr 6 13:48:05 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 6 Apr 2021 22:48:05 +0900 Subject: [neutron] bug deputy report (3/29-4/4) Message-ID: Hi, I was a bug deputy last week and here is a report. 
Last week is relatively quite. Please check my report. ## Unassigned * OpenStack Metadata API and OVN in Neutron https://bugs.launchpad.net/neutron/+bug/1921809 It would be nice if OVN folks follow it more while haleyb replied. ## Medium, Assigned * [LB] Linux Bridge iptables firewall does not work without "ipset" https://bugs.launchpad.net/neutron/+bug/1922127 Assigned to ralonsoh ## Fix Released * Strings in tags field is limited to 60 chars https://bugs.launchpad.net/neutron/+bug/1921713 ## Almost RFE * allow using tap device on netdev enabled host https://bugs.launchpad.net/neutron/+bug/1922222 I requested the bug author to provide more information on the background ## RFE * [QoS] Add minimum guaranteed packet rate QoS rule https://bugs.launchpad.net/neutron/+bug/1922237 * [RFE] BFD for BGP Dynamic Routing https://bugs.launchpad.net/neutron/+bug/1922716 From gmann at ghanshyammann.com Tue Apr 6 14:02:31 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Apr 2021 09:02:31 -0500 Subject: [qa][gate][stable] stable stein|train py2 devstack jobs are broken Message-ID: <178a77deedb.e105a565136465.4951627500252316851@ghanshyammann.com> Hello Everyone, During fixing the grenade issue on stable/train[1], there is another issue that came up for py2 devstack based jobs. I have logged the bug on devstack side as of now - https://bugs.launchpad.net/devstack/+bug/1922736 Let's wait for this to fix before you recheck on failing patches. [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021597.html -gmann From mkopec at redhat.com Tue Apr 6 14:44:42 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 6 Apr 2021 16:44:42 +0200 Subject: [qa][hacking] Proposing new core reviewers In-Reply-To: References: Message-ID: Thank you for your feedback. As no objections were raised we added zbr and yoctozepto to the hacking-core group. Welcome to the team both of you! Regards, On Mon, 5 Apr 2021 at 01:16, Masayuki Igawa wrote: > Hi, > > On Wed, Mar 31, 2021, at 05:47, Martin Kopec wrote: > > Hi all, > > > > I'd like to propose Sorin Sbarnea (IRC: zbr) and Radosław Piliszek > > (IRC: yoctozepto) to hacking > > core. They both are doing a great upstream work among multiple > > different projects and > > volunteered to help us with maintenance of hacking project as well. > > > > You can vote/feedback in this email thread. If no objection by 6th of > > April, we will add them > > to the list. > > > > +1 ! > > -- Masayuki > > > Regards, > > -- > > Martin Kopec > > > > > > > > -- Martin Kopec -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Tue Apr 6 15:06:50 2021 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Tue, 6 Apr 2021 17:06:50 +0200 Subject: How to modify staging-ovirt parameters Message-ID: Hello, I have a test lab with Queens on CentOS 7 deployed with TripleO, that I use sometimes to mimic an OSP 13 environment. The Openstack nodes are oVirt VMS. I moved these VMs from one oVirt infrastructure to another one. The Openstack environment starts and operates without problems (3 controllers, 2 computes and 3 cephs), but I don't know how to modify the staging-ovirt parameters. At oVirt side I re-created as before a user ostackpm at internal with sufficient power and I also set the same password. But the ip address of the engine cannot be the same. When I deployed the environment I put all the parameters inside the instackenv.json file to do introspection and deploy. 
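For reference, each node entry in that instackenv.json looked more or less like the sketch below (all values here are placeholders, with pm_addr pointing at the old engine address that now has to change):

{
  "name": "ostack-compute1",
  "pm_type": "staging-ovirt",
  "pm_user": "ostackpm@internal",
  "pm_password": "secret",
  "pm_addr": "old-engine-ip",
  "pm_vm_name": "ostack-compute1",
  "mac": ["00:1a:4a:aa:bb:cc"]
}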
Where are they put on undercloud? Is there a way to update them? Perhaps some database entries or other kind of repos where I can update the ip? Thanks in advance, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Apr 6 15:12:47 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 6 Apr 2021 08:12:47 -0700 Subject: all Octavia LoadBalancer In-Reply-To: References: Message-ID: Hi Adekunbi, It sounds like the backend servers (web servers?) are not passing the health check or are otherwise unreachable. Provisioning status of Active shows that Octavia was able to create and provision the load balancer without error. Let's look at a few things: 1. Check if the that load balancer has statistics for the your connections to the VIP: openstack loadbalancer stats show If these are all zeros, your deployment of Octavia is not working correctly. Most likely the lb-mgmt-net is not passing the required traffic. Please debug in neutron. Assuming you see a value greater than zero in the "total_connections" column, your deployment is working as expected. 2. Check your health monitor configuration and load balancer status: openstack loadbalancer status show Check the "operating status" of all of the objects in your load balancer. As a refresher, operating status is the observed status of the object, so do we see the backend member as ONLINE, etc. openstack loadbalancer member show Also check that the member is configured with the correct subnet that can reach the backend member server. If a subnet was not specified, it will use the VIP subnet to attempt to reach the members. If the members are in operating status ERROR, this means that the load balancer sees that server as failed. Check your health monitor configuration (If you have one) to make sure it is connecting to the correct IPs and ports and the expected response is correct for your application. openstack loadbalancer healthmonitor show Also, check that the members have security groups or other firewall runs set appropriately to allow the load balancer to access it. Michael On Fri, Apr 2, 2021 at 8:36 AM Adekunbi Adewojo wrote: > > Hi there, > > I recently deployed a load balancer on our openstack private cloud. I used this manual - https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html > to create the load balancer. However, after creating and trying to access it, it returns an error message saying "No server is available to handle this request". Also on the dashboard, "Operating status" shows offline but "provisioning status" shows active. I have two web applications as members of the load balancer and I can individually access those web applications. > > Could someone please point me in the right direction? > > Thanks. From radoslaw.piliszek at gmail.com Tue Apr 6 15:14:02 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 6 Apr 2021 17:14:02 +0200 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: References: Message-ID: I am testing whether replacing xz with gzip would solve the problem [1] [2]. 
[1] https://review.opendev.org/c/openstack/devstack/+/784964 [2] https://review.opendev.org/c/osf/python-tempestconf/+/784967 -yoctozepto On Tue, Apr 6, 2021 at 1:21 PM Martin Kopec wrote: > > Hi, > > one of our jobs (python-tempestconf project) is frequently failing with POST_FAILURE [1] > during the following task: > > export-devstack-journal : Export journal > > I'm bringing this to a broader audience as we're not sure where exactly the issue might be. > > Did you encounter a similar issue lately or in the past? > > [1] https://zuul.opendev.org/t/openstack/builds?job_name=python-tempestconf-tempest-devstack-admin-plugins&project=osf/python-tempestconf > > Thanks for any advice, > -- > Martin Kopec > > > From oliver.wenz at dhbw-mannheim.de Tue Apr 6 15:28:19 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Tue, 6 Apr 2021 17:28:19 +0200 (CEST) Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <1707579703.116615.1617722899181@ox.dhbw-mannheim.de> Following the suggestion from https://docs.openstack.org/swift/latest/overview_auth.html I set 'log_level=debug' for authtoken. Now I'm seeing more errors in the glance-api logs: Apr 06 15:07:40 infra1-glance-container-99614ac2 glance-wsgi-api[1837]: 2021-04-06 15:07:38.197 1837 WARNING keystonemiddleware.auth_token [req-ad8d0db9-b630-4237-9fff-d7ff282155d2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Identity response: {"error": {"code": 500, "title": "Internal Server Error", "message": "An unexpected error prevented the server from fulfilling your request."}}: keystoneauth1.exceptions.http.InternalServerError: An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-55767c13-a450-4f81-90ab-5a7af6b3f672) Apr 06 15:07:46 infra1-glance-container-99614ac2 uwsgi[1822]: DAMN ! worker 13 (pid: 1836) died, killed by signal 9 :( trying respawn ... Here's the full log: http://paste.openstack.org/show/804208/ The options you suggested @Dmitriy are still there in user_variables.yml: glance_glance_api_conf_overrides: keystone_authtoken: service_token_roles_required: True service_token_roles: service Kind regards, Oliver From aadewojo at gmail.com Tue Apr 6 15:34:03 2021 From: aadewojo at gmail.com (Adekunbi Adewojo) Date: Tue, 6 Apr 2021 16:34:03 +0100 Subject: all Octavia LoadBalancer In-Reply-To: References: Message-ID: Thank you very much for your detailed response. I checked my previous loadbalancer implementation, the members operating status showed active. However, when I checked the access log of one of the load balancer member it showed this "06/Apr/2021:06:25:05 +0000] "GET /healthcheck HTTP/1.0" 404 118". I then deleted the load balancer and recreated it. I realised that before adding a listener or any other thing, the load balancer wasn't showing an "Online" status as suggested by the cookbook. I also ran the stat command and everything was zero. I see that you mentioned neutron, I do not have admin access, I might have to go back to the admin. But from what I said, do you still think it is a neutron issue? Thanks. On Tue, Apr 6, 2021 at 4:12 PM Michael Johnson wrote: > Hi Adekunbi, > > It sounds like the backend servers (web servers?) are not passing the > health check or are otherwise unreachable. Provisioning status of > Active shows that Octavia was able to create and provision the load > balancer without error. > > Let's look at a few things: > > 1. 
Check if the that load balancer has statistics for the your > connections to the VIP: > > openstack loadbalancer stats show > > If these are all zeros, your deployment of Octavia is not working > correctly. Most likely the lb-mgmt-net is not passing the required > traffic. Please debug in neutron. > > Assuming you see a value greater than zero in the "total_connections" > column, your deployment is working as expected. > > 2. Check your health monitor configuration and load balancer status: > > openstack loadbalancer status show > > Check the "operating status" of all of the objects in your load > balancer. As a refresher, operating status is the observed status of > the object, so do we see the backend member as ONLINE, etc. > > openstack loadbalancer member show name> > > Also check that the member is configured with the correct subnet that > can reach the backend member server. If a subnet was not specified, it > will use the VIP subnet to attempt to reach the members. > > If the members are in operating status ERROR, this means that the load > balancer sees that server as failed. Check your health monitor > configuration (If you have one) to make sure it is connecting to the > correct IPs and ports and the expected response is correct for your > application. > > openstack loadbalancer healthmonitor show > > Also, check that the members have security groups or other firewall > runs set appropriately to allow the load balancer to access it. > > Michael > > On Fri, Apr 2, 2021 at 8:36 AM Adekunbi Adewojo > wrote: > > > > Hi there, > > > > I recently deployed a load balancer on our openstack private cloud. I > used this manual - > https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html > > to create the load balancer. However, after creating and trying to > access it, it returns an error message saying "No server is available to > handle this request". Also on the dashboard, "Operating status" shows > offline but "provisioning status" shows active. I have two web applications > as members of the load balancer and I can individually access those web > applications. > > > > Could someone please point me in the right direction? > > > > Thanks. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aadewojo at gmail.com Tue Apr 6 15:36:18 2021 From: aadewojo at gmail.com (Adekunbi Adewojo) Date: Tue, 6 Apr 2021 16:36:18 +0100 Subject: [all] Octavia LoadBalancer Error In-Reply-To: References: Message-ID: Thank you for your response. However, I do not know how to enable debug messages for the health monitor. I do not even know how to access the health monitor log because I can't ssh into the load balancer. On Tue, Apr 6, 2021 at 11:09 AM Gregory Thiemonge wrote: > On Mon, Apr 5, 2021 at 4:14 PM Adekunbi Adewojo > wrote: > >> Hi there, >> >> I recently deployed a load balancer on our openstack private cloud. I >> used this manual - >> https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html >> to create the load balancer. However, after creating and trying to access >> it, it returns an error message saying "No server is available to handle >> this request". Also on the dashboard, "Operating status" shows offline but >> "provisioning status" shows active. I have two web applications as members >> of the load balancer and I can individually access those web applications. 
>> > > Hi, > > provisioning status ACTIVE shows the load balancer was successfully > created but operating status OFFLINE indicates that the amphora is unable > to communicate with the Octavia health-manager service. > Basically, the amphora should report its status to the hm service using > UDP messages (on a controller, you can dump the UDP packets on the o-hm0 > interface), do you see any errors in the hm logs? I would > recommend enabling the debug messages in the health-manager, and to check > the logs, you should see messages about incoming packets: > > Apr 06 10:02:20 devstack2 octavia-health-manager[1747820]: DEBUG > octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from > ('192.168.0.73', 11273) {{(pid=1747857) dorecv > /opt/stack/octavia/octavia/amphorae/drivers/health/heartbeat_udp.py:95}} > > The "No server is available to handle this request" response from the > amphora indicates that haproxy is correctly running on the VIP interface > but it doesn't have any member servers. Perhaps fixing the health-manager > issue will help to understand why the traffic is not dispatched to the > members. > > > >> Could someone please point me in the right direction? >> >> Thanks. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Apr 6 15:39:20 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 6 Apr 2021 09:39:20 -0600 Subject: [tripleo] master and victoria promotions delayed Message-ID: 0/ Hey just want to communicate that promotions for master and victoria are being delayed due to promotion-blockers. If you have a merged patch in a repo that is not gated by tripleo jobs you will NOT be able to use that patch until we promote. http://dashboard-ci.tripleo.org/d/HkOLImOMk/upstream-and-rdo-promotions?orgId=1 Please review the following bugs to better understand what is blocking. https://bugs.launchpad.net/tripleo/+bugs?field.tag=promotion-blocker&orderby=-datecreated&start=0 I'll note, any tempest test results logged in bugs are most likely already skipped and not blocking upstream via https://opendev.org/openstack/openstack-tempest-skiplist/src/branch/master/roles/validate-tempest/vars/tempest_skip.yml Your focus should be on deployment failures, not tempest failures at this time. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Apr 6 15:40:38 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 6 Apr 2021 17:40:38 +0200 Subject: all Octavia LoadBalancer In-Reply-To: References: Message-ID: Hello, have you tried to use tcp check rather than http check ? Il giorno mar 6 apr 2021 alle ore 17:38 Adekunbi Adewojo ha scritto: > Thank you very much for your detailed response. I checked my previous > loadbalancer implementation, the members operating status showed active. > However, when I checked the access log of one of the load balancer member > it showed this "06/Apr/2021:06:25:05 +0000] "GET /healthcheck HTTP/1.0" 404 > 118". > > I then deleted the load balancer and recreated it. I realised that before > adding a listener or any other thing, the load balancer wasn't showing an > "Online" status as suggested by the cookbook. I also ran the stat command > and everything was zero. > > I see that you mentioned neutron, I do not have admin access, I might have > to go back to the admin. But from what I said, do you still think it is a > neutron issue? > > Thanks. 
> > On Tue, Apr 6, 2021 at 4:12 PM Michael Johnson > wrote: > >> Hi Adekunbi, >> >> It sounds like the backend servers (web servers?) are not passing the >> health check or are otherwise unreachable. Provisioning status of >> Active shows that Octavia was able to create and provision the load >> balancer without error. >> >> Let's look at a few things: >> >> 1. Check if the that load balancer has statistics for the your >> connections to the VIP: >> >> openstack loadbalancer stats show >> >> If these are all zeros, your deployment of Octavia is not working >> correctly. Most likely the lb-mgmt-net is not passing the required >> traffic. Please debug in neutron. >> >> Assuming you see a value greater than zero in the "total_connections" >> column, your deployment is working as expected. >> >> 2. Check your health monitor configuration and load balancer status: >> >> openstack loadbalancer status show >> >> Check the "operating status" of all of the objects in your load >> balancer. As a refresher, operating status is the observed status of >> the object, so do we see the backend member as ONLINE, etc. >> >> openstack loadbalancer member show > name> >> >> Also check that the member is configured with the correct subnet that >> can reach the backend member server. If a subnet was not specified, it >> will use the VIP subnet to attempt to reach the members. >> >> If the members are in operating status ERROR, this means that the load >> balancer sees that server as failed. Check your health monitor >> configuration (If you have one) to make sure it is connecting to the >> correct IPs and ports and the expected response is correct for your >> application. >> >> openstack loadbalancer healthmonitor show >> >> Also, check that the members have security groups or other firewall >> runs set appropriately to allow the load balancer to access it. >> >> Michael >> >> On Fri, Apr 2, 2021 at 8:36 AM Adekunbi Adewojo >> wrote: >> > >> > Hi there, >> > >> > I recently deployed a load balancer on our openstack private cloud. I >> used this manual - >> https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html >> > to create the load balancer. However, after creating and trying to >> access it, it returns an error message saying "No server is available to >> handle this request". Also on the dashboard, "Operating status" shows >> offline but "provisioning status" shows active. I have two web applications >> as members of the load balancer and I can individually access those web >> applications. >> > >> > Could someone please point me in the right direction? >> > >> > Thanks. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Apr 6 15:47:40 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 6 Apr 2021 17:47:40 +0200 Subject: [all] Octavia LoadBalancer Error In-Reply-To: References: Message-ID: I also suggest to verify if the load balancer network used by amphora ave access to controllers port udp 5555. If you want to access the load balancers instances, you can specify in the configuration : amp_ssh_key_name = your ssh key So you can logon on loadbalancer instances via ssh Il giorno mar 6 apr 2021 alle ore 17:40 Adekunbi Adewojo ha scritto: > Thank you for your response. However, I do not know how to enable debug > messages for the health monitor. I do not even know how to access the > health monitor log because I can't ssh into the load balancer. 
> > On Tue, Apr 6, 2021 at 11:09 AM Gregory Thiemonge > wrote: > >> On Mon, Apr 5, 2021 at 4:14 PM Adekunbi Adewojo >> wrote: >> >>> Hi there, >>> >>> I recently deployed a load balancer on our openstack private cloud. I >>> used this manual - >>> https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html >>> to create the load balancer. However, after creating and trying to >>> access it, it returns an error message saying "No server is available to >>> handle this request". Also on the dashboard, "Operating status" shows >>> offline but "provisioning status" shows active. I have two web applications >>> as members of the load balancer and I can individually access those web >>> applications. >>> >> >> Hi, >> >> provisioning status ACTIVE shows the load balancer was successfully >> created but operating status OFFLINE indicates that the amphora is unable >> to communicate with the Octavia health-manager service. >> Basically, the amphora should report its status to the hm service using >> UDP messages (on a controller, you can dump the UDP packets on the o-hm0 >> interface), do you see any errors in the hm logs? I would >> recommend enabling the debug messages in the health-manager, and to check >> the logs, you should see messages about incoming packets: >> >> Apr 06 10:02:20 devstack2 octavia-health-manager[1747820]: DEBUG >> octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from >> ('192.168.0.73', 11273) {{(pid=1747857) dorecv >> /opt/stack/octavia/octavia/amphorae/drivers/health/heartbeat_udp.py:95}} >> >> The "No server is available to handle this request" response from the >> amphora indicates that haproxy is correctly running on the VIP interface >> but it doesn't have any member servers. Perhaps fixing the health-manager >> issue will help to understand why the traffic is not dispatched to the >> members. >> >> >> >>> Could someone please point me in the right direction? >>> >>> Thanks. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Apr 6 15:51:19 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Apr 2021 08:51:19 -0700 Subject: =?UTF-8?Q?Re:_[devstack][infra]_POST=5FFAILURE_on_export-devstack-journa?= =?UTF-8?Q?l_:_Export_journal?= In-Reply-To: References: Message-ID: On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > I am testing whether replacing xz with gzip would solve the problem [1] [2]. The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. > > [1] https://review.opendev.org/c/openstack/devstack/+/784964 > [2] https://review.opendev.org/c/osf/python-tempestconf/+/784967 > > -yoctozepto > > On Tue, Apr 6, 2021 at 1:21 PM Martin Kopec wrote: > > > > Hi, > > > > one of our jobs (python-tempestconf project) is frequently failing with POST_FAILURE [1] > > during the following task: > > > > export-devstack-journal : Export journal > > > > I'm bringing this to a broader audience as we're not sure where exactly the issue might be. > > > > Did you encounter a similar issue lately or in the past? 
> > > > [1] https://zuul.opendev.org/t/openstack/builds?job_name=python-tempestconf-tempest-devstack-admin-plugins&project=osf/python-tempestconf > > > > Thanks for any advice, > > -- > > Martin Kopec From pierre at stackhpc.com Tue Apr 6 15:51:50 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 6 Apr 2021 17:51:50 +0200 Subject: [qa][gate][stable] stable stein|train py2 devstack jobs are broken In-Reply-To: <178a77deedb.e105a565136465.4951627500252316851@ghanshyammann.com> References: <178a77deedb.e105a565136465.4951627500252316851@ghanshyammann.com> Message-ID: We've discussed this issue (which affects non-devstack jobs too) on #opendev and #openstack-qa. If I understood correctly, it is caused by a recent PyPI outage and their fallback infra not providing the data-requires-python metadata. While PyPI appears to be back to normal, some opendev mirrors (proxies really) are still serving indexes without data-requires-python, which suggests that bad PyPI servers may still be handling some of the requests. On Tue, 6 Apr 2021 at 16:10, Ghanshyam Mann wrote: > > Hello Everyone, > > During fixing the grenade issue on stable/train[1], there is another issue that came up for > py2 devstack based jobs. I have logged the bug on devstack side as of now > > - https://bugs.launchpad.net/devstack/+bug/1922736 > > Let's wait for this to fix before you recheck on failing patches. > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021597.html > > -gmann > From fungi at yuggoth.org Tue Apr 6 16:02:48 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Apr 2021 16:02:48 +0000 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: References: Message-ID: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> On 2021-04-06 13:21:17 +0200 (+0200), Martin Kopec wrote: > one of our jobs (python-tempestconf project) is frequently failing with > POST_FAILURE [1] > during the following task: > > export-devstack-journal : Export journal > > I'm bringing this to a broader audience as we're not sure where exactly the > issue might be. > > Did you encounter a similar issue lately or in the past? > > [1] > https://zuul.opendev.org/t/openstack/builds?job_name=python-tempestconf-tempest-devstack-admin-plugins&project=osf/python-tempestconf Looking at the error, I strongly suspect memory exhaustion. We could try tuning xz to use less memory when compressing. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Tue Apr 6 16:11:41 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 6 Apr 2021 18:11:41 +0200 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> Message-ID: On Tue, Apr 6, 2021 at 6:02 PM Jeremy Stanley wrote: > Looking at the error, I strongly suspect memory exhaustion. We could > try tuning xz to use less memory when compressing. That was my hunch as well, hence why I test using gzip. On Tue, Apr 6, 2021 at 5:51 PM Clark Boylan wrote: > > On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > > I am testing whether replacing xz with gzip would solve the problem [1] [2]. 
> > The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. Let's see how bad the file sizes are. If they are acceptable, we can keep gzip and be happy. Otherwise we try to tune the params to make xz a better citizen as fungi suggested. -yoctozepto From radoslaw.piliszek at gmail.com Tue Apr 6 16:15:28 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 6 Apr 2021 18:15:28 +0200 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> Message-ID: On Tue, Apr 6, 2021 at 6:11 PM Radosław Piliszek wrote: > On Tue, Apr 6, 2021 at 5:51 PM Clark Boylan wrote: > > > > On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > > > I am testing whether replacing xz with gzip would solve the problem [1] [2]. > > > > The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. > > Let's see how bad the file sizes are. devstack.journal.gz 23.6M Less than all the other logs together, I would not mind. I wonder how it is in other jobs (this is from the failing one). -yoctozepto From juliaashleykreger at gmail.com Tue Apr 6 16:25:57 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 6 Apr 2021 09:25:57 -0700 Subject: How to modify staging-ovirt parameters In-Reply-To: References: Message-ID: Greetings, The parameters get stored in ironic. You can use the "openstack baremetal node set" command to set new parameters into the driver_info field. Specifically it seems like you only need to update the address, so you'll just want to examine the driver_info field contents to see which value you need to update. You can do this with "openstack baremetal node show ". Hope that helps, -Julia On Tue, Apr 6, 2021 at 8:10 AM Gianluca Cecchi wrote: > > Hello, > I have a test lab with Queens on CentOS 7 deployed with TripleO, that I use sometimes to mimic an OSP 13 environment. > The Openstack nodes are oVirt VMS. > I moved these VMs from one oVirt infrastructure to another one. > The Openstack environment starts and operates without problems (3 controllers, 2 computes and 3 cephs), but I don't know how to modify the staging-ovirt parameters. > At oVirt side I re-created as before a user ostackpm at internal with sufficient power and I also set the same password. > But the ip address of the engine cannot be the same. > > When I deployed the environment I put all the parameters inside the instackenv.json file to do introspection and deploy. > > Where are they put on undercloud? > Is there a way to update them? Perhaps some database entries or other kind of repos where I can update the ip? 
> Thanks in advance, > Gianluca > From cboylan at sapwetik.org Tue Apr 6 16:39:04 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Apr 2021 09:39:04 -0700 Subject: =?UTF-8?Q?Re:_[devstack][infra]_POST=5FFAILURE_on_export-devstack-journa?= =?UTF-8?Q?l_:_Export_journal?= In-Reply-To: References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> Message-ID: On Tue, Apr 6, 2021, at 9:15 AM, Radosław Piliszek wrote: > On Tue, Apr 6, 2021 at 6:11 PM Radosław Piliszek > wrote: > > On Tue, Apr 6, 2021 at 5:51 PM Clark Boylan wrote: > > > > > > On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > > > > I am testing whether replacing xz with gzip would solve the problem [1] [2]. > > > > > > The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. > > > > Let's see how bad the file sizes are. > > devstack.journal.gz 23.6M > > Less than all the other logs together, I would not mind. > I wonder how it is in other jobs (this is from the failing one). There does seem to be a range (likely due to how much the job workload causes logging to happen in journald) from about a few megabytes to eighty something MB [3]. This is probably acceptable. Just keep an eye out for jobs that end up with much larger file sizes and we can reevaluate if we notice them. [3] https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_038/784964/1/check/tempest-multinode-full-py3/038bd51/controller/logs/index.html From cboylan at sapwetik.org Tue Apr 6 16:46:33 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Apr 2021 09:46:33 -0700 Subject: =?UTF-8?Q?Re:_[devstack][infra]_POST=5FFAILURE_on_export-devstack-journa?= =?UTF-8?Q?l_:_Export_journal?= In-Reply-To: References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> Message-ID: <7626869f-dab3-41df-a40b-dafa20dcfaf4@www.fastmail.com> On Tue, Apr 6, 2021, at 9:11 AM, Radosław Piliszek wrote: > On Tue, Apr 6, 2021 at 6:02 PM Jeremy Stanley wrote: > > Looking at the error, I strongly suspect memory exhaustion. We could > > try tuning xz to use less memory when compressing. Worth noting that we continue to suspect memory pressure, and in particular diving into swap, for random failures that appear timing or performance related. I still think it would be a helpful exercise for OpenStack to look at its memory consumption (remember end users will experience this too) and see if there are any unexpected areas of memory use. I think the last time i skimmed logs the privsep daemon was a large consumer because we separate instance is run for each service and they all add up. > > That was my hunch as well, hence why I test using gzip. > > On Tue, Apr 6, 2021 at 5:51 PM Clark Boylan wrote: > > > > On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > > > I am testing whether replacing xz with gzip would solve the problem [1] [2]. > > > > The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. > > Let's see how bad the file sizes are. 
> If they are acceptable, we can keep gzip and be happy. > Otherwise we try to tune the params to make xz a better citizen as > fungi suggested. > > -yoctozepto > > From fungi at yuggoth.org Tue Apr 6 16:47:56 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Apr 2021 16:47:56 +0000 Subject: [qa][gate][stable][infra] stable stein|train py2 devstack jobs are broken In-Reply-To: References: <178a77deedb.e105a565136465.4951627500252316851@ghanshyammann.com> Message-ID: <20210406164756.5otrjrzxv423lpph@yuggoth.org> On 2021-04-06 17:51:50 +0200 (+0200), Pierre Riteau wrote: > We've discussed this issue (which affects non-devstack jobs too) on > #opendev and #openstack-qa. If I understood correctly, it is caused by > a recent PyPI outage and their fallback infra not providing the > data-requires-python metadata. While PyPI appears to be back to > normal, some opendev mirrors (proxies really) are still serving > indexes without data-requires-python, which suggests that bad PyPI > servers may still be handling some of the requests. [...] Still speculation at this point, though the evidence points to that happening (we've seen it several times in the past). Technically yes our proxies are sometimes serving indices without the metadata, but that tends to happen because PyPI's CDN is sometimes serving indices without metadata to our proxies and not because of any actual problem with our proxies. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tuguoyi at outlook.com Tue Apr 6 12:00:18 2021 From: tuguoyi at outlook.com (Guoyi Tu) Date: Tue, 6 Apr 2021 20:00:18 +0800 Subject: [dev][nova] Problem about vm migration compatibility check Message-ID: hi there, In my test environment, i created a vm and configured the cpu with host-model, when I migrate the vm to another host with the same cpu, it failed the migration compatibility check which complains the cpu definition of domain is incompatible with target host cpu. As we know, when the domain configured as above starts, the host-model cpu definition will automatically converted to custom cpu model and with some addtional features that the KVM supported, these addtional features may contains features that the host doesn't support. In the code, the compatibility of the target host is check by calling compareCPU()(libvirt API). The compareCPU() can only recongnize the features probed by cpuid instruction on the host, but it may not recognize the features of cpu definition of domain xml (virsh dumpxml domainname) when the domain running. So the compatibility check will fail when KVM support one or more features which is considerd as disabled by the cpuid instuction. I think we should call compareHypervisorCPU() or something like that (supported by libvirt since v4.4.0) instead of compareCPU() to check the migration compatibility. 
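To make the difference concrete, here is a rough python-libvirt sketch of the two calls (the CPU XML is cut down to a minimal example and the emulator path is only an assumption for my hosts, so treat it as an illustration rather than what nova does verbatim):

import libvirt

conn = libvirt.open('qemu:///system')

# CPU definition as it appears in the running domain XML (virsh dumpxml),
# trimmed to model/vendor plus a single feature for brevity
cpu_xml = """
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Cascadelake-Server</model>
  <vendor>Intel</vendor>
  <feature policy='require' name='umip'/>
</cpu>
"""

# Check used today: compares the guest CPU against what cpuid reports on
# the host, so features that only KVM adds on top of cpuid can fail here
print(conn.compareCPU(cpu_xml, 0))

# Check I am proposing: asks the hypervisor itself (libvirt >= 4.4.0)
ret = conn.compareHypervisorCPU('/usr/bin/qemu-kvm', 'x86_64', None, 'kvm',
                                cpu_xml, 0)
if ret == libvirt.VIR_CPU_COMPARE_INCOMPATIBLE:
    print('guest CPU definition is not runnable on this host')

Both calls return one of the VIR_CPU_COMPARE_* constants, but the second one reflects what QEMU/KVM can actually provide instead of only what cpuid exposes.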
My test environment is as follow: host cpu: Cascadelake libvirt-6.9 qemu-5.0 host-model cpu: Cascadelake-Server Intel The hypervisor, umip, pschange-mc-no features block the compatibility check -- Best Regards, Guoyi Tu From luke.camilleri at zylacomputing.com Tue Apr 6 21:51:00 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Tue, 6 Apr 2021 23:51:00 +0200 Subject: [victoria][magnum]fedora-atomic-27 image Message-ID: We have insatlled magnum following the installation guide here https://docs.openstack.org/magnum/victoria/install/install-rdo.html and the process was quite smooth but we have been having some issues with the deployment of the clusters. The image being used as per the documentation is https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 Our first issue was that podman was being used even if we specified the use_podman=false (since the image above did not include podman) but this was resulting in a timeout and the cluster would fail to deploy. We have then installed podman in the image and the cluster progressed a bit further /+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'// //+ sleep 5s// //+ ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run     --entrypoint /bin/bash     --name install-kubectl     --net host     --privileged     --rm --user root     --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7     -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''// //bash: /usr/bin/podman: No such file or directory// //ERROR Unable to install kubectl. Abort.// //+ i=61// //+ '[' 61 -gt 60 ']'// //+ echo 'ERROR Unable to install kubectl. Abort.'// //+ exit 1/ The cluster is now failing here at "kube_cluster_deploy" and when checking the logs on the master node we noticed the following in the log files: /Starting to run kube-apiserver-to-kubelet-role// //Waiting for Kubernetes API...// //+ echo 'Waiting for Kubernetes API...'// //++ curl --silent http://127.0.0.1:8080/healthz// //+ '[' ok = '' ']'// //+ sleep 5/ This is because the kubernetes API server is not installed either. I have noticed some scripts that should handle the installation but I would like to know if anyone here has had similar issues with a clean Victoria installation. Also should we have to install any packages in the fedora atomic image file or should the installation requirements be part of the stack? Thanks in advance for any asistance -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Tue Apr 6 22:00:39 2021 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Wed, 7 Apr 2021 00:00:39 +0200 Subject: Unable to retrieve ceph docker image for Queens Message-ID: Hello, I'm trying to extend an old small test lab Queens environment with dedicated Ceph Storage nodes. I would like to add some disks to the storage nodes and so I customize my original templates/ceph-config.yaml adding them under devices section: parameter_defaults: CephAnsibleDisksConfig: osd_scenario: lvm osd_objectstore: bluestore devices: - /dev/sda - /dev/sdb - /dev/sdc . . . 
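A side note for anyone reproducing the setup above: the install guide referenced in that message registers the image with an os_distro property, roughly like this (file name as downloaded from the Fedora mirror):

  openstack image create Fedora-Atomic-27 \
    --disk-format=qcow2 \
    --container-format=bare \
    --property os_distro='fedora-atomic' \
    --file Fedora-Atomic-27-20180419.0.x86_64.qcow2

A Fedora CoreOS image is registered the same way but with os_distro='fedora-coreos', which is what selects the CoreOS driver instead of the deprecated Atomic one.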
When I run the overcloud deploy I get this in ceph-install-workflow.log 2021-04-06 21:24:57,955 p=32618 u=mistral | TASK [ceph-docker-common : pulling docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 image] *** 2021-04-06 21:24:57,955 p=32618 u=mistral | Tuesday 06 April 2021 21:24:57 +0200 (0:00:00.153) 0:06:17.978 ********* 2021-04-06 21:25:13,206 p=32618 u=mistral | FAILED - RETRYING: pulling docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 image (3 retries left). . . . 2021-04-06 21:26:03,715 p=32618 u=mistral | FAILED - RETRYING: pulling docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 image (1 retries left). 2021-04-06 21:26:28,839 p=32618 u=mistral | fatal: [172.23.0.232]: FAILED! => {"attempts": 3, "changed": false, "cmd": ["timeout", "300s", "docker", "pull", "docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64"], "delta": "0:00:15.022274", "end": "2021-04-06 21:25:48.701980", "msg": "non-zero return code", "rc": 1, "start": "2021-04-06 21:25:33.679706", "stderr": "Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", "stderr_lines": ["Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"], "stdout": "Trying to pull repository docker.io/ceph/daemon ... ", "stdout_lines": ["Trying to pull repository docker.io/ceph/daemon ... "]} So it seems the docker images are not there any more. My existing ceph nodes are using that image: [heat-admin at ostack-ceph0 ~]$ sudo docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ad4aeb3f4cb docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 "/entrypoint.sh" 3 weeks ago Up 3 weeks ceph-osd-0 0dc9a9889283 docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 "/entrypoint.sh" 3 weeks ago Up 3 weeks ceph-osd-3 8d543016cce7 docker.io/tripleoqueens/centos-binary-cron:current-tripleo "dumb-init --singl..." 11 months ago Up 3 weeks logrotate_crond [heat-admin at ostack-ceph0 ~]$ Any way or suggestion to manage this? Thanks, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Tue Apr 6 22:03:15 2021 From: feilong at catalyst.net.nz (feilong) Date: Wed, 7 Apr 2021 10:03:15 +1200 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Hi Luke, The Fedora Atomic driver has been deprecated a while since the Fedora Atomic has been deprecated by upstream. For now, I would suggest using Fedora CoreOS 32.20201104.3.0 The latest version of Fedora CoreOS is 33.xxx, but there are something when booting based my testing, see https://github.com/coreos/fedora-coreos-tracker/issues/735 Please feel free to let me know if you have any question about using Magnum. We're using stable/victoria on our public cloud and it works very well. I can share our public templates if you want. Cheers. On 7/04/21 9:51 am, Luke Camilleri wrote: > > We have insatlled magnum following the installation guide here > https://docs.openstack.org/magnum/victoria/install/install-rdo.html > and the process was quite smooth but we have been having some issues > with the deployment of the clusters. 
> > The image being used as per the documentation is > https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 > > Our first issue was that podman was being used even if we specified > the use_podman=false (since the image above did not include podman) > but this was resulting in a timeout and the cluster would fail to > deploy. We have then installed podman in the image and the cluster > progressed a bit further > > /+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'// > //+ sleep 5s// > //+ ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman > run     --entrypoint /bin/bash     --name install-kubectl     --net > host     --privileged     --rm     --user root     --volume > /srv/magnum/bin:/host/srv/magnum/bin     > k8s.gcr.io/hyperkube:v1.15.7     -c '\''cp /usr/local/bin/kubectl > /host/srv/magnum/bin/kubectl'\'''// > //bash: /usr/bin/podman: No such file or directory// > //ERROR Unable to install kubectl. Abort.// > //+ i=61// > //+ '[' 61 -gt 60 ']'// > //+ echo 'ERROR Unable to install kubectl. Abort.'// > //+ exit 1/ > > The cluster is now failing here at "kube_cluster_deploy" and when > checking the logs on the master node we noticed the following in the > log files: > > /Starting to run kube-apiserver-to-kubelet-role// > //Waiting for Kubernetes API...// > //+ echo 'Waiting for Kubernetes API...'// > //++ curl --silent http://127.0.0.1:8080/healthz// > //+ '[' ok = '' ']'// > //+ sleep 5/ > > This is because the kubernetes API server is not installed either. I > have noticed some scripts that should handle the installation but I > would like to know if anyone here has had similar issues with a clean > Victoria installation. > > Also should we have to install any packages in the fedora atomic image > file or should the installation requirements be part of the stack? > > Thanks in advance for any asistance > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Tue Apr 6 22:04:55 2021 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Wed, 7 Apr 2021 00:04:55 +0200 Subject: How to modify staging-ovirt parameters In-Reply-To: References: Message-ID: On Tue, Apr 6, 2021 at 6:26 PM Julia Kreger wrote: > Greetings, > > The parameters get stored in ironic. You can use the "openstack > baremetal node set" command to set new parameters into the driver_info > field. Specifically it seems like you only need to update the address, > so you'll just want to examine the driver_info field contents to see > which value you need to update. You can do this with "openstack > baremetal node show ". > > Hope that helps, > > -Julia > > Thanks Julia, it worked setting the new ip and I was able to run openstack baremetal node power off ostack-compute1 and the corresponding VM in oVirt was correctly powered off using the user and the driver. Gianluca -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Apr 6 23:02:58 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 7 Apr 2021 00:02:58 +0100 Subject: [dev][nova] Problem about vm migration compatibility check In-Reply-To: References: Message-ID: On 06/04/2021 13:00, Guoyi Tu wrote: > hi there, > > In my test environment, i created a vm and configured the cpu with > host-model, when I migrate the vm to another host with the same cpu, > it failed the migration compatibility check which complains the cpu > definition of domain is incompatible with target host cpu. > > As we know, when the domain configured as above starts, the host-model > cpu definition will automatically converted to custom cpu model and > with some addtional features that the KVM supported, these addtional > features may contains features that the host doesn't support. > > In the code, the compatibility of the target host is check by calling > compareCPU()(libvirt API). The compareCPU() can only recongnize the > features probed by cpuid instruction on the host, but it may not > recognize the features of cpu definition of domain xml (virsh dumpxml > domainname) when the domain running. So the compatibility check will > fail when KVM support one or more features which is considerd as > disabled by the cpuid instuction. > > I think we should call compareHypervisorCPU() or something like that > (supported by libvirt since v4.4.0) instead of compareCPU() to check > the migration compatibility. there are patches already for review to move to the newer cpu apis. https://review.opendev.org/c/openstack/nova/+/762330 that uses baseline_hypervisor_cpu and compare_hypervisor_cpu instead of the old functions. this work will likely be resumed now that we are after feature freeze and the recandiates are out but we tend not to merge any large change until the release is done. https://review.opendev.org/c/openstack/nova/+/762330 is not particalarly big but changing how we detct cpu feature is not something that is great to merge durign the RC stablisation period. while this should technically resovle https://bugs.launchpad.net/nova/+bug/1903822 but its not really a bug its paying down technical debt so im not sure this is something we should back port. with that said if you are interested in this you should review that patch. > > > My test environment is as follow: > host cpu: Cascadelake > libvirt-6.9 > qemu-5.0 > > host-model cpu: >    >       Cascadelake-Server >       Intel >       >       >       >       >       >       >       >       >       >       >       >       >       >       >       >       >       >     > > > The hypervisor, umip, pschange-mc-no features block the compatibility > check > > From luke.camilleri at zylacomputing.com Tue Apr 6 23:57:20 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Wed, 7 Apr 2021 01:57:20 +0200 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Thanks for your quick reply. Do you have a download link for that image as I cannot find an archive for the 32 release? As for the image upload into openstack you still use the fedora-atomic property right to be available for coe deployments? On 07/04/2021 00:03, feilong wrote: > > Hi Luke, > > The Fedora Atomic driver has been deprecated a while since the Fedora > Atomic has been deprecated by upstream. 
For now, I would suggest using > Fedora CoreOS 32.20201104.3.0 > > The latest version of Fedora CoreOS is 33.xxx, but there are something > when booting based my testing, see > https://github.com/coreos/fedora-coreos-tracker/issues/735 > > > Please feel free to let me know if you have any question about using > Magnum. We're using stable/victoria on our public cloud and it works > very well. I can share our public templates if you want. Cheers. > > > > On 7/04/21 9:51 am, Luke Camilleri wrote: >> >> We have insatlled magnum following the installation guide here >> https://docs.openstack.org/magnum/victoria/install/install-rdo.html >> and the process was quite smooth but we have been having some issues >> with the deployment of the clusters. >> >> The image being used as per the documentation is >> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >> >> Our first issue was that podman was being used even if we specified >> the use_podman=false (since the image above did not include podman) >> but this was resulting in a timeout and the cluster would fail to >> deploy. We have then installed podman in the image and the cluster >> progressed a bit further >> >> /+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'// >> //+ sleep 5s// >> //+ ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman >> run     --entrypoint /bin/bash     --name install-kubectl     --net >> host     --privileged --rm     --user root     --volume >> /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7     >> -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''// >> //bash: /usr/bin/podman: No such file or directory// >> //ERROR Unable to install kubectl. Abort.// >> //+ i=61// >> //+ '[' 61 -gt 60 ']'// >> //+ echo 'ERROR Unable to install kubectl. Abort.'// >> //+ exit 1/ >> >> The cluster is now failing here at "kube_cluster_deploy" and when >> checking the logs on the master node we noticed the following in the >> log files: >> >> /Starting to run kube-apiserver-to-kubelet-role// >> //Waiting for Kubernetes API...// >> //+ echo 'Waiting for Kubernetes API...'// >> //++ curl --silent http://127.0.0.1:8080/healthz// >> //+ '[' ok = '' ']'// >> //+ sleep 5/ >> >> This is because the kubernetes API server is not installed either. I >> have noticed some scripts that should handle the installation but I >> would like to know if anyone here has had similar issues with a clean >> Victoria installation. >> >> Also should we have to install any packages in the fedora atomic >> image file or should the installation requirements be part of the stack? >> >> Thanks in advance for any asistance >> > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email:flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... 
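Regarding the image property question above: for the Fedora CoreOS driver the os_distro property that Magnum keys on is 'fedora-coreos', not 'fedora-atomic'. A minimal upload sketch, assuming the CoreOS 32 qcow2 has already been downloaded and decompressed (the file name below is only a placeholder), looks like this:

openstack image create fedora-coreos-32 \
  --disk-format qcow2 \
  --container-format bare \
  --property os_distro='fedora-coreos' \
  --file fedora-coreos-32.20201104.3.0-openstack.x86_64.qcow2

Magnum uses os_distro to select the cluster driver, so a cluster template pointing at an image without that property will be rejected.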
URL: From peter.matulis at canonical.com Wed Apr 7 01:16:20 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Tue, 6 Apr 2021 21:16:20 -0400 Subject: [docs] Project guides in PDF format In-Reply-To: <20210330000704.bsuukwkon2vnint3@yuggoth.org> References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> <20210329215416.ztokyw5iirbu3jhi@yuggoth.org> <20210330000704.bsuukwkon2vnint3@yuggoth.org> Message-ID: On Mon, Mar 29, 2021 at 8:09 PM Jeremy Stanley wrote: > On 2021-03-29 19:47:24 -0400 (-0400), Peter Matulis wrote: > > I changed the testenv to 'pdf-docs' and the build is still being skipped. > > > > Do I need to submit a PR to have this [1] set to 'false'? > > > > [1]: > > > https://opendev.org/openstack/openstack-zuul-jobs/src/commit/01746b6df094c25f0cd67690b44adca0fb4ee1fd/zuul.d/jobs.yaml#L970 > [...] > > Oh, yep that'll need to be adjusted or overridden as well. I see > that https://review.opendev.org/678077 explicitly chose not to do > PDF builds for deploy guides for the original PDF docs > implementation a couple of years ago. Unfortunately the commit > message doesn't say why, but maybe this is a good opportunity to > start. Any other thoughts before I propose a change to the below? https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml#L970 However, many (most?) API projects seem to include their > deployment guides in their software Git repos, so switching this on > for everyone might break their deploy guide builds. If we combine it > with an expectation for a deploy-guide-specific PDF building tox > testenv like you had previously, then it would get safely skipped by > any projects without that testenv defined. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 379035389 at qq.com Wed Apr 7 03:46:12 2021 From: 379035389 at qq.com (=?utf-8?B?5pyd6Ziz5pyq54OI?=) Date: Wed, 7 Apr 2021 11:46:12 +0800 Subject: [victoria]oslo_privsep.daemon.FailedToDropPrivileges Message-ID: Hi, everyone: I tried to build an instance on the compute node but failed. I am sure that every necessary connection has been built. And I found the same error information on the controller node and the compute node , in /var/log/neutron/linuxbride-agent.log That is information: INFO neutron.common.config [-] Logging enabled! 
2021-04-07 11:30:52.866 2182 INFO neutron.common.config [-] /usr/bin/neutron-linuxbridge-agent version 17.1.0 2021-04-07 11:30:52.867 2182 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface mappings: {'provider': 'ens160'} 2021-04-07 11:30:52.867 2182 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Bridge mappings: {} 2021-04-07 11:30:52.868 2182 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugins/ml2/linuxbridge_agent.ini', '--config-dir', '/etc/neutron/conf.d/neutron-linuxbridge-agent', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpm5d0ytiv/privsep.sock'] 2021-04-07 11:30:53.346 2182 CRITICAL oslo.privsep.daemon [-] privsep helper command exited non-zero (1) 2021-04-07 11:30:53.346 2182 CRITICAL neutron [-] Unhandled error: oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited non-zero (1) 2021-04-07 11:30:53.346 2182 ERROR neutron Traceback (most recent call last): 2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/bin/neutron-linuxbridge-agent", line 10, in From noonedeadpunk at ya.ru Wed Apr 7 04:09:38 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 07 Apr 2021 07:09:38 +0300 Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: <1707579703.116615.1617722899181@ox.dhbw-mannheim.de> References: <1707579703.116615.1617722899181@ox.dhbw-mannheim.de> Message-ID: <21731617768405@mail.yandex.ua> An HTML attachment was scrubbed... URL: From aj at suse.com Wed Apr 7 07:06:38 2021 From: aj at suse.com (Andreas Jaeger) Date: Wed, 7 Apr 2021 09:06:38 +0200 Subject: [docs] Project guides in PDF format In-Reply-To: References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> <20210329215416.ztokyw5iirbu3jhi@yuggoth.org> <20210330000704.bsuukwkon2vnint3@yuggoth.org> Message-ID: <8dd30a4e-ac37-d83e-ce75-e517232d779e@suse.com> On 07.04.21 03:16, Peter Matulis wrote: > > > On Mon, Mar 29, 2021 at 8:09 PM Jeremy Stanley > wrote: > > On 2021-03-29 19:47:24 -0400 (-0400), Peter Matulis wrote: > > I changed the testenv to 'pdf-docs' and the build is still being > skipped. > > > > Do I need to submit a PR to have this [1] set to 'false'? > > > > [1]: > > > https://opendev.org/openstack/openstack-zuul-jobs/src/commit/01746b6df094c25f0cd67690b44adca0fb4ee1fd/zuul.d/jobs.yaml#L970 > > [...] > > Oh, yep that'll need to be adjusted or overridden as well. I see > that https://review.opendev.org/678077 > explicitly chose not to do > PDF builds for deploy guides for the original PDF docs > implementation a couple of years ago. Unfortunately the commit > message doesn't say why, but maybe this is a good opportunity to > start. > > > Any other thoughts before I propose a change to the below? > > https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml#L970 > > I think it will not be enough, you need to set tox_pdf_envlist as well. I suggest to propose such a change and make two tests - with depends-on: 1) For your repo to show that you build the deploy-guide as PDF and have it in artifacts uploaded 2) For another use of the job that builds the normal docs as well to check that the PDF for the deploy-guide is build and not the normal docs. 
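As a starting point for the project-side half of such a change, the tox environment could look roughly like the sketch below. It assumes the deploy guide sources live in deploy-guide/source and that doc/requirements.txt pulls in the Sphinx/LaTeX dependencies; the env name is a placeholder and has to match whatever tox_pdf_envlist ends up pointing at:

# Sketch only: adjust the env name, paths and deps to the actual repository layout
[testenv:deploy-guide-pdf-docs]
deps = -r{toxinidir}/doc/requirements.txt
whitelist_externals =
  make
commands =
  sphinx-build -W -b latex deploy-guide/source deploy-guide/build/pdf
  make -C deploy-guide/build/pdf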
Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From ralonsoh at redhat.com Wed Apr 7 07:24:24 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 7 Apr 2021 09:24:24 +0200 Subject: [victoria]oslo_privsep.daemon.FailedToDropPrivileges In-Reply-To: References: Message-ID: Hello: This is indeed a problem with the execution privileges of the user running those commands. What deployment tool are you using? What is the user that runs the LB agent? The problem is, I think, that the privsep daemon is not properly starting. Try to execute manually the command you see in the logs. That will start the privsep daemon. If it doesn't work, check the privsep log and fix the permissions. ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugins/ml2/linuxbridge_agent.ini', '--config-dir', '/etc/neutron/conf.d/neutron-linuxbridge-agent', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpm5d0ytiv/privsep.sock'] Regards. On Wed, Apr 7, 2021 at 5:51 AM 朝阳未烈 <379035389 at qq.com> wrote: > Hi, everyone: > > I tried to build an instance on the* compute node *but failed. I am sure > that every necessary connection has been built. > > And I found the same error information on the *controller node* and the *compute > node* , in */var/log/neutron/linuxbride-agent.log* > > That is information: > > INFO neutron.common.config [-] Logging enabled! > > 2021-04-07 11:30:52.866 2182 INFO neutron.common.config [-] > /usr/bin/neutron-linuxbridge-agent version 17.1.0 > > 2021-04-07 11:30:52.867 2182 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] > Interface mappings: {'provider': 'ens160'} > > 2021-04-07 11:30:52.867 2182 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] > Bridge mappings: {} > > 2021-04-07 11:30:52.868 2182 INFO oslo.privsep.daemon [-] Running privsep > helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', > 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', > '--config-file', '/etc/neutron/neutron.conf', '--config-file', > '/etc/neutron/plugins/ml2/linuxbridge_agent.ini', '--config-dir', > '/etc/neutron/conf.d/neutron-linuxbridge-agent', '--privsep_context', > 'neutron.privileged.default', '--privsep_sock_path', > '/tmp/tmpm5d0ytiv/privsep.sock'] > > 2021-04-07 11:30:53.346 2182 CRITICAL oslo.privsep.daemon [-] privsep > helper command exited non-zero (1) > > 2021-04-07 11:30:53.346 2182 CRITICAL neutron [-] Unhandled error: > oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited > non-zero (1) > > 2021-04-07 11:30:53.346 2182 ERROR neutron Traceback (most recent call > last): > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/bin/neutron-linuxbridge-agent", line 10, in > > 2021-04-07 11:30:53.346 2182 ERROR neutron sys.exit(main()) > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/cmd/eventlet/plugins/linuxbridge_neutron_agent.py", > line 28, in main > > 2021-04-07 11:30:53.346 2182 ERROR neutron agent_main.main() > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > 
"/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", > line 1052, in main > > 2021-04-07 11:30:53.346 2182 ERROR neutron manager = > LinuxBridgeManager(bridge_mappings, interface_mappings) > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", > line 79, in __init__ > > 2021-04-07 11:30:53.346 2182 ERROR neutron > self.validate_interface_mappings() > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", > line 94, in validate_interface_mappings > > 2021-04-07 11:30:53.346 2182 ERROR neutron if not > ip_lib.device_exists(interface): > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 748, > in device_exists > > 2021-04-07 11:30:53.346 2182 ERROR neutron return > IPDevice(device_name, namespace=namespace).exists() > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 328, > in exists > > 2021-04-07 11:30:53.346 2182 ERROR neutron return > privileged.interface_exists(self.name, self.namespace) > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/oslo_privsep/priv_context.py", line 246, > in _wrap > > 2021-04-07 11:30:53.346 2182 ERROR neutron self.start() > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/oslo_privsep/priv_context.py", line 258, > in start > > 2021-04-07 11:30:53.346 2182 ERROR neutron channel = > daemon.RootwrapClientChannel(context=self) > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/oslo_privsep/daemon.py", line 367, in > __init__ > > 2021-04-07 11:30:53.346 2182 ERROR neutron raise > FailedToDropPrivileges(msg) > > 2021-04-07 11:30:53.346 2182 ERROR neutron > oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited > non-zero (1) > > 2021-04-07 11:30:53.346 2182 ERROR neutron > > > > > > And it is the configuration in* /etc/sudoer.d/neutron *below: > > > > *Defaults:neutron !requiretty* > > *neutron ALL = (root) NOPASSWD: /usr/bin/neutron-rootwrap > /etc/neutron/rootwrap.conf ** > > *neutron ALL = (root) NOPASSWD: /usr/bin/neutron-rootwrap-daemon > /etc/neutron/rootwrap.conf* > > > > > > I googled for the solution but they didn’t matter. How can I solve this > problem? Thanks for your advicement! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Wed Apr 7 08:24:54 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Wed, 7 Apr 2021 13:24:54 +0500 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Hi Luke, You may refer to below guide for magnum installation and its template https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 It worked pretty well for me. - Ammad On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri < luke.camilleri at zylacomputing.com> wrote: > Thanks for your quick reply. Do you have a download link for that image as > I cannot find an archive for the 32 release? > > As for the image upload into openstack you still use the fedora-atomic > property right to be available for coe deployments? 
> On 07/04/2021 00:03, feilong wrote: > > Hi Luke, > > The Fedora Atomic driver has been deprecated a while since the Fedora > Atomic has been deprecated by upstream. For now, I would suggest using > Fedora CoreOS 32.20201104.3.0 > > The latest version of Fedora CoreOS is 33.xxx, but there are something > when booting based my testing, see > https://github.com/coreos/fedora-coreos-tracker/issues/735 > > Please feel free to let me know if you have any question about using > Magnum. We're using stable/victoria on our public cloud and it works very > well. I can share our public templates if you want. Cheers. > > > > On 7/04/21 9:51 am, Luke Camilleri wrote: > > We have insatlled magnum following the installation guide here > https://docs.openstack.org/magnum/victoria/install/install-rdo.html and > the process was quite smooth but we have been having some issues with the > deployment of the clusters. > > The image being used as per the documentation is > https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 > > Our first issue was that podman was being used even if we specified the > use_podman=false (since the image above did not include podman) but this > was resulting in a timeout and the cluster would fail to deploy. We have > then installed podman in the image and the cluster progressed a bit further > > *+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'* > *+ sleep 5s* > *+ ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run > --entrypoint /bin/bash --name install-kubectl --net host > --privileged --rm --user root --volume > /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 > -c '\''cp /usr/local/bin/kubectl > /host/srv/magnum/bin/kubectl'\'''* > *bash: /usr/bin/podman: No such file or directory* > *ERROR Unable to install kubectl. Abort.* > *+ i=61* > *+ '[' 61 -gt 60 ']'* > *+ echo 'ERROR Unable to install kubectl. Abort.'* > *+ exit 1* > > The cluster is now failing here at "kube_cluster_deploy" and when > checking the logs on the master node we noticed the following in the log > files: > > *Starting to run kube-apiserver-to-kubelet-role* > *Waiting for Kubernetes API...* > *+ echo 'Waiting for Kubernetes API...'* > *++ curl --silent http://127.0.0.1:8080/healthz > * > *+ '[' ok = '' ']'* > *+ sleep 5* > > This is because the kubernetes API server is not installed either. I have > noticed some scripts that should handle the installation but I would like > to know if anyone here has had similar issues with a clean Victoria > installation. > Also should we have to install any packages in the fedora atomic image > file or should the installation requirements be part of the stack? > > Thanks in advance for any asistance > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... 
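To give an idea of what the template from such a guide ends up looking like against a Fedora CoreOS image, here is a rough sketch. The flavor, network and keypair names and the kube_tag value are placeholders for this environment rather than values taken from the guide:

# Create a Kubernetes cluster template bound to the CoreOS image uploaded in Glance
openstack coe cluster template create k8s-fcos-template \
  --image fedora-coreos-32 \
  --coe kubernetes \
  --external-network public \
  --master-flavor m1.small \
  --flavor m1.small \
  --docker-volume-size 20 \
  --network-driver flannel \
  --labels kube_tag=v1.18.16

# Then spawn a small cluster from it
openstack coe cluster create k8s-cluster \
  --cluster-template k8s-fcos-template \
  --keypair mykey \
  --master-count 1 \
  --node-count 2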
URL: From thierry at openstack.org Wed Apr 7 09:05:20 2021 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Apr 2021 11:05:20 +0200 Subject: [Release-job-failures] Pre-release of openstack/neutron for ref refs/tags/18.0.0.0rc2 failed In-Reply-To: References: Message-ID: <2e0d97ea-e0ca-274c-7b0a-0bf77fd7603b@openstack.org> zuul at openstack.org wrote: > Build failed. > > - openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/4b31dfd25e244e38a346e097c993c519 : SUCCESS in 54s > - release-openstack-python https://zuul.opendev.org/t/openstack/build/309d80630c8e4086b52a0d83062cd431 : SUCCESS in 4m 43s > - announce-release https://zuul.opendev.org/t/openstack/build/4bde576261db4943a1dd6610aa52f6be : POST_FAILURE in 8m 33s > - propose-update-constraints https://zuul.opendev.org/t/openstack/build/18de7b1f60044445a06db64da42bbcc3 : SUCCESS in 5m 20s We are missing logs, but it looks like the job actually succeeded at announcing the release: http://lists.openstack.org/pipermail/release-announce/2021-April/011022.html So this can be safely ignored. -- Thierry Carrez (ttx) From destienne.maxime at gmail.com Wed Apr 7 09:48:27 2021 From: destienne.maxime at gmail.com (Maxime d'Estienne) Date: Wed, 7 Apr 2021 11:48:27 +0200 Subject: [neutron][nova] Port binding fails when creating an instance In-Reply-To: <3930281.aCZO8KT43X@p1> References: <3930281.aCZO8KT43X@p1> Message-ID: As Slawek Kaplonski told me, I enabled neutron debugging and I didn't find why specific mechanism drivers are refusing to bind ports on that host. I noticed that the VM can get an IP from DHCP, I see a link on the web interface (network topology) between my physical network "provider" and the VM. But this link disappeared when the VM crashed due to the error. Here are the previous DEBUG logs, just before the ERROR one. I don't succeed in getting more informed by these logs. (/neutron/server.log) Thank you a lot for your time ! 
Maxime `2021-04-07 10:10:30.294 25623 DEBUG > neutron.pecan_wsgi.hooks.policy_enforcement > [req-a995e8eb-fde4-49be-b822-29f7e98b56d4 9c53e456ca2d4d07a4aecbf91c487cae > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by > policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', > 'binding:vif_details'] _exclude_attributes_by_policy > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > 2021-04-07 10:10:30.995 25626 DEBUG > neutron.pecan_wsgi.hooks.policy_enforcement > [req-706ad36e-31a1-4e5a-b9f6-17951ccb089a 9c53e456ca2d4d07a4aecbf91c487cae > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by > policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', > 'binding:vif_details'] _exclude_attributes_by_policy > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > 2021-04-07 10:10:31.105 25626 DEBUG > neutron.pecan_wsgi.hooks.policy_enforcement > [req-446ed89e-0697-4822-b69b-49b02ad9732d 9c53e456ca2d4d07a4aecbf91c487cae > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by > policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', > 'binding:vif_details'] _exclude_attributes_by_policy > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > 2021-04-07 10:10:31.328 25623 DEBUG neutron.api.v2.base > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Request body: {'port': > {'device_id': '6406a1b1-7f0b-4f8e-88dd-81dcded8299d', 'device_owner': > 'compute:nova', 'binding:host_id': 'compute1'}} prepare_request_body > /usr/lib/python3/dist-packages/neutron/api/v2/base.py:716 > > 2021-04-07 10:10:31.980 25623 DEBUG neutron.plugins.ml2.managers > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind port > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type normal > with profile bind_port > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:747 > > 2021-04-07 10:10:31.981 25623 DEBUG neutron.plugins.ml2.managers > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind port > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 at level 0 using > segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', 'network_type': > 'flat', 'physical_network': 'provider', 'segmentation_id': None, > 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] _bind_port_level > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:768 > > 2021-04-07 10:10:31.981 25623 ERROR neutron.plugins.ml2.managers > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Failed to bind port > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type normal > using segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > 'network_type': 'flat', 'physical_network': 'provider', 'segmentation_id': > None, 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] ` > > Le jeu. 1 avr. 
2021 à 21:36, Slawek Kaplonski a écrit : > Hi, > > Dnia czwartek, 1 kwietnia 2021 14:44:21 CEST Maxime d'Estienne pisze: > > Hello, > > > > I spent a lot of time troubleshooting my issue, which I described here : > > > https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding > > > > To summarize, when I want to create an instance, binding fails on compute > > node, the dhcp agent seems to give an ip to the VM but I have an error. > > What do You mean exactly? Failed binding of the port in Neutron? In such > case > nova will not boot vm so it can't get IP from DHCP. > > > > > I don't know where to dig, besides what I have done. > > Please enable debug logs in neutron-server and look in its logs for the > reason > why it failed to bind port on specific host. > Usually reason is dead L2 agent on host or mismatch in the agent's bridge > mappings configuration in the agent. > > > > > Thanks a lot for your help ! > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Apr 7 10:03:12 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 07 Apr 2021 12:03:12 +0200 Subject: [neutron][nova] Port binding fails when creating an instance In-Reply-To: References: <3930281.aCZO8KT43X@p1> Message-ID: <3513595.c0HGFkD9VC@p1> Hi, Can You send me full neutron-server log? I will check if there is anything more there. Dnia środa, 7 kwietnia 2021 11:48:27 CEST Maxime d'Estienne pisze: > As Slawek Kaplonski told me, I enabled neutron debugging and I didn't find > why specific mechanism drivers are refusing to bind ports > on that host. > > I noticed that the VM can get an IP from DHCP, I see a link on the web > interface (network topology) between my physical network "provider" and the > VM. But this link disappeared when the VM crashed due to the error. > > Here are the previous DEBUG logs, just before the ERROR one. > > I don't succeed in getting more informed by these logs. > (/neutron/server.log) > > Thank you a lot for your time ! 
> Maxime > > `2021-04-07 10:10:30.294 25623 DEBUG > > neutron.pecan_wsgi.hooks.policy_enforcement > > [req-a995e8eb-fde4-49be-b822-29f7e98b56d4 9c53e456ca2d4d07a4aecbf91c487cae > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by > > policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', > > 'binding:vif_details'] _exclude_attributes_by_policy > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > 2021-04-07 10:10:30.995 25626 DEBUG > > neutron.pecan_wsgi.hooks.policy_enforcement > > [req-706ad36e-31a1-4e5a-b9f6-17951ccb089a 9c53e456ca2d4d07a4aecbf91c487cae > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by > > policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', > > 'binding:vif_details'] _exclude_attributes_by_policy > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > 2021-04-07 10:10:31.105 25626 DEBUG > > neutron.pecan_wsgi.hooks.policy_enforcement > > [req-446ed89e-0697-4822-b69b-49b02ad9732d 9c53e456ca2d4d07a4aecbf91c487cae > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by > > policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', > > 'binding:vif_details'] _exclude_attributes_by_policy > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > 2021-04-07 10:10:31.328 25623 DEBUG neutron.api.v2.base > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Request body: {'port': > > {'device_id': '6406a1b1-7f0b-4f8e-88dd-81dcded8299d', 'device_owner': > > 'compute:nova', 'binding:host_id': 'compute1'}} prepare_request_body > > /usr/lib/python3/dist-packages/neutron/api/v2/base.py:716 > > > > 2021-04-07 10:10:31.980 25623 DEBUG neutron.plugins.ml2.managers > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind port > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type normal > > with profile bind_port > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:747 > > > > 2021-04-07 10:10:31.981 25623 DEBUG neutron.plugins.ml2.managers > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind port > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 at level 0 using > > segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', 'network_type': > > 'flat', 'physical_network': 'provider', 'segmentation_id': None, > > 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] _bind_port_level > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:768 > > > > 2021-04-07 10:10:31.981 25623 ERROR neutron.plugins.ml2.managers > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Failed to bind port > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type normal > > using segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > > 'network_type': 'flat', 'physical_network': 'provider', 'segmentation_id': > > None, 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] ` > > > > > Le jeu. 1 avr. 
2021 à 21:36, Slawek Kaplonski a > écrit : > > > Hi, > > > > Dnia czwartek, 1 kwietnia 2021 14:44:21 CEST Maxime d'Estienne pisze: > > > Hello, > > > > > > I spent a lot of time troubleshooting my issue, which I described here : > > > > > https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding > > > > > > To summarize, when I want to create an instance, binding fails on compute > > > node, the dhcp agent seems to give an ip to the VM but I have an error. > > > > What do You mean exactly? Failed binding of the port in Neutron? In such > > case > > nova will not boot vm so it can't get IP from DHCP. > > > > > > > > I don't know where to dig, besides what I have done. > > > > Please enable debug logs in neutron-server and look in its logs for the > > reason > > why it failed to bind port on specific host. > > Usually reason is dead L2 agent on host or mismatch in the agent's bridge > > mappings configuration in the agent. > > > > > > > > Thanks a lot for your help ! > > > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From hberaud at redhat.com Wed Apr 7 10:14:17 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 7 Apr 2021 12:14:17 +0200 Subject: [release] Meeting Time Poll In-Reply-To: References: Message-ID: Greetings, The poll is now terminated, everybody voted and we reached a consensus, our new meeting time is at 2pm UTC on Thursdays. https://doodle.com/poll/ip6tg4fvznz7p3qx It will take effect from our next meeting, i.e tomorrow. I'm going to update our agenda accordingly. Thanks to everyone for your vote. Le mer. 31 mars 2021 à 17:55, Herve Beraud a écrit : > Hello deliveryers, > > Don't forget to vote for our new meeting time. > > Thank you > > Le ven. 26 mars 2021 à 13:43, Herve Beraud a écrit : > >> Hello >> >> We have a few regular attendees of the Release Management meeting who >> have conflicts >> with the current meeting time. As a result, we would like to find a new >> time to hold the meeting. I've created a Doodle poll[1] for everyone to >> give their input on times. It's mostly limited to times that reasonably >> overlap the working day in the US and Europe since that's where most of >> our attendees are located. >> >> If you attend the Release Management meeting, please fill out the poll >> so we can hopefully find a time that works better for everyone. >> >> For the sake of organization and to allow everyone to schedule his agenda >> accordingly, the poll will be closed on April 5th. On that date, I will >> announce the time of this meeting and the date on which it will take effect >> . >> >> Thanks! 
>> >> [1] https://doodle.com/poll/ip6tg4fvznz7p3qx >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Wed Apr 7 11:26:13 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 07 Apr 2021 14:26:13 +0300 Subject: [openstack-ansible] OSA Meeting Poll Message-ID: <170911617794404@mail.yandex.ru> Hi! We haven't changed OSA meeting time for a while and stick with the current option (Tuesday, 16:00 UTC) for a while. So we decided it's time to make a poll regarding preferred time for OSA meetings since list of the interested parties and circumstances might have changed since picking meeting time. 
You can find the poll via link [1]. Poll is open till Monday, April 12 2021. Please, make sure you vote before this time. [1] https://doodle.com/poll/m554dx4mrsideuzi/ -- Kind Regards, Dmitriy Rabotyagov From christian.rohmann at inovex.de Wed Apr 7 11:53:12 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Wed, 7 Apr 2021 13:53:12 +0200 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: References: Message-ID: On 13/01/2021 10:37, Christian Rohmann wrote: > I wrote a tiny patch to add the Ceph RDB feature of fast-diff to > backups created by cinder-backup: > >  * https://review.opendev.org/c/openstack/cinder/+/766856/ > > > Could someone please take a peek and let me know of this is sufficient > to be merged? This change was already merged to master and I now created cherry-picks / backports to victoria (https://review.opendev.org/c/openstack/cinder/+/782917) and ussuri (https://review.opendev.org/c/openstack/cinder/+/782929). Also Andrey Bolgov did create yet another backport of this feaure down to stable/train (https://review.opendev.org/c/openstack/cinder/+/784041). While the cherry-pick onto the stable/victoria branch does verify fine with Zuul (only need review to be merged), the cinder-plugin-ceph-tempest tests fail for ussuri and also train. > Stdout: 'RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable volumes/volume-081e9c22-21f3-4585-a2fe-1caed098052b object-map fast-diff".\nIn some cases useful info is found in syslog - try "dmesg | tail".\n' > Stderr: 'rbd: sysfs write failed\nrbd: map failed: (6) No such device or address\n' Could anybody give me a hint on why this might be? Also is there any other process to follow for backports than to create a cherry-pick from the following release down and wait for review? Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Wed Apr 7 12:09:40 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 07 Apr 2021 15:09:40 +0300 Subject: [openstack-ansible] OSA Meeting Poll In-Reply-To: <170911617794404@mail.yandex.ru> References: <170911617794404@mail.yandex.ru> Message-ID: <202851617797329@mail.yandex.ru> Sorry for the typo in the link, added extra slash in the end. Correct link is: https://doodle.com/poll/m554dx4mrsideuzi 07.04.2021, 14:31, "Dmitriy Rabotyagov" : > Hi! > > We haven't changed OSA meeting time for a while and stick with the current option (Tuesday, 16:00 UTC) for a while. > > So we decided it's time to make a poll regarding preferred time for OSA meetings since list of the interested parties and circumstances might have changed since picking meeting time. > > You can find the poll via link [1]. Poll is open till Monday, April 12 2021. Please, make sure you vote before this time. 
> > [1] https://doodle.com/poll/m554dx4mrsideuzi/ > > -- > Kind Regards, > Dmitriy Rabotyagov --  Kind Regards, Dmitriy Rabotyagov From rosmaita.fossdev at gmail.com Wed Apr 7 12:28:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Apr 2021 08:28:51 -0400 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: References: Message-ID: On 4/7/21 7:53 AM, Christian Rohmann wrote: > On 13/01/2021 10:37, Christian Rohmann wrote: >> I wrote a tiny patch to add the Ceph RDB feature of fast-diff to >> backups created by cinder-backup: >> >>  * https://review.opendev.org/c/openstack/cinder/+/766856/ >> >> >> Could someone please take a peek and let me know of this is sufficient >> to be merged? > > > This change was already merged to master and I now created cherry-picks > / backports to victoria > (https://review.opendev.org/c/openstack/cinder/+/782917) and ussuri > (https://review.opendev.org/c/openstack/cinder/+/782929). > Also Andrey Bolgov did create yet another backport of this feaure down > to stable/train (https://review.opendev.org/c/openstack/cinder/+/784041). > > While the cherry-pick onto the stable/victoria branch does verify fine > with Zuul (only need review to be merged), the > cinder-plugin-ceph-tempest tests fail for ussuri and also train. > >> Stdout: 'RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable volumes/volume-081e9c22-21f3-4585-a2fe-1caed098052b object-map fast-diff".\nIn some cases useful info is found in syslog - try "dmesg | tail".\n' >> Stderr: 'rbd: sysfs write failed\nrbd: map failed: (6) No such device or address\n' > > Could anybody give me a hint on why this might be? You have hit https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1921897 Eric has a patch up addressing this: https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/783880 > > Also is there any other process to follow for backports than to create a > cherry-pick from the following release down and wait for review? > You're following the correct procedure. One thing I noticed, though, is that your pick to stable/ussuri should have the cherry pick info for both the cherry-pick from master (which was the wallaby development branch at the time) to stable/victoria and also from victoria to stable/ussuri. > > > Regards > > > Christian > From rafal at pregusia.pl Wed Apr 7 13:18:59 2021 From: rafal at pregusia.pl (pregusia) Date: Wed, 7 Apr 2021 15:18:59 +0200 Subject: [keystone]improvments in mapping models/support for JWT tokens In-Reply-To: <20210401192203.jn2heaicdlwojc7i@yuggoth.org> References: <20210401192203.jn2heaicdlwojc7i@yuggoth.org> Message-ID: Thanks for the answer. I submitted patches to review: https://review.opendev.org/c/openstack/keystone/+/784553 https://review.opendev.org/c/openstack/keystone/+/784558 but it looks like some problem with python version https://zuul.opendev.org/t/openstack/build/45a04fb21bf14806a3a32b83c18b8120 Can You advice what is wrong here ? On 4/1/21 9:22 PM, Jeremy Stanley wrote: > On 2021-04-01 21:05:32 +0200 (+0200), pregusia wrote: >> Please direct your attention to some keystone modyfications - to be more >> precisly two of them: >>  (1) extension to mapping engine in order to support multiple projects and >> assigning project by id >>  (2) extension to authorization mechanisms - add JWT token support > [...] > > This is pretty exciting stuff. 
But please be aware that for an > OpenStack project to merge patches they'll need to be proposed into > the code review system (Gerrit) by someone, preferably by the author > of the patches, which is the easiest place to discuss them as well. > Also we need some way to confirm that the author of the patches has > agreed to the Individual Contributor License Agreement (essentially > asserting that the patches they propose are their own work or that > they have permission from the author or are proposing patches > consisting of existing code distributed under a license compatible > with the Apache License version 2.0), and the usual way to agree to > the ICLA is when creating your account in Gerrit. > > Please see the OpenStack Contributor Guide for a general > introduction to our code proposal and review workflow: > > https://docs.openstack.org/contributors/ > > And feel free to ask questions on this mailing list if you have any. From senrique at redhat.com Wed Apr 7 13:27:14 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 7 Apr 2021 10:27:14 -0300 Subject: [cinder] Bug deputy report for week of 2021-04-07 Message-ID: Hello, This is a bug report from 2021-03-31 to 2021-04-07. You're welcome to join the next Cinder Bug Meeting later today. Weekly on Wednesday at 1500 UTC in #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Critical:- High: - Medium: - https://bugs.launchpad.net/cinder/+bug/1922408 `` Create volume from snapshot will lose encrypted head when source volume is encrypted in RBD''. Assigned to haixin. - https://bugs.launchpad.net/cinder/+bug/1922013 '' IBM SVF driver: GMCV Add vols to a group fails even if rcrel and rccg are in the same state". Unassigned. Low: - https://bugs.launchpad.net/cinder/+bug/1922255 "Dell PowerVault PVMEISCSIDriver driver cannot manage volumes". Unassigned. - https://bugs.launchpad.net/python-cinderclient/+bug/1922749 "Top-level client doesn't support auth v3". Unassigned. Undecided: - https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1921897 "fast-diff in default feature set breaks 'rbd map'". Assigned to Eric Harney. Regards, Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Apr 7 14:01:29 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 7 Apr 2021 16:01:29 +0200 Subject: [ironic] APAC-Europe SPUC time? Message-ID: Hi folks! The initial SPUC datetime was for 10am UTC, which was 11am for us in central Europe, now is supposed to be 12pm. On one hand, I find it more convenient to have SPUC at 11am still, on the other - I have German classes at this time for a few months starting mid-April. What do you think? Should we keep it in UTC, i.e. 12pm for us in Europe? Will that work for you Jacob? Dmitry -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From info at jorisengbers.nl Wed Apr 7 14:44:19 2021 From: info at jorisengbers.nl (Joris Engbers) Date: Wed, 7 Apr 2021 16:44:19 +0200 Subject: [Neutron] How to provide internet access to tier 2 instance In-Reply-To: <7aa3968f-dceb-43af-2548-e8ed0f7ac9b1@mailbox.org> References: <7aa3968f-dceb-43af-2548-e8ed0f7ac9b1@mailbox.org> Message-ID: <5757632f-d5cd-921a-ff25-70f095384441@jorisengbers.nl> I have tried a similar set-up and it seems to work here. On Router 2 I have added a static route for 0.0.0.0/0 to the IP of Router1 in the 'private' network. With this addition it is possible to ping 1.1.1.1. Just to be sure, I disabled port security on every intermediate port, but after reenabling them, it still works. I did find that the l3 agent is slow to clean up static routes after removing them in my version from OpenStack, this caused me to do a lot more debugging than necessary. With a fresh router it worked instantly. Joris On 04-04-2021 16:44, Bernd Bausch wrote: > I have a pretty standard single-server Victoria Devstack, where I > created this network topology: > > public       private      backend >   |             |             | >   |  /-------\  |-- I1        |- I2 >   |--|Router1|--|             | >   |  \-------/  |             | >   |             |  /-------\  | >   |             |--|Router2|--| >   |             |  \-------/  | >   |             |             | > > I1 and I2 are instances. > > My question: > > Is it possible to give I2 access to the external world to install > software and download files? I don't need access **to** I2 **from** > the external world. > > My unsuccessful attempt: > > After adding a static default route via Router1 to Router2, I can ping > the internet from Router2's namespace, but not from I2. > > My guess is that Router1 ignores traffic from networks that are not > attached to it. I don't have enough experience to understand the > netfilter rules in Router1's namespace, and in any case, rather than > tweaking them I need a supported method to give I2 internet access, or > the confirmation that it is not possible. > > Thanks much for any insights and suggestions. > > Bernd > From skaplons at redhat.com Wed Apr 7 15:11:14 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 07 Apr 2021 17:11:14 +0200 Subject: [neutron][nova] Port binding fails when creating an instance In-Reply-To: References: <3513595.c0HGFkD9VC@p1> Message-ID: <77243093.lmQnbLH0hy@p1> Hi, Dnia środa, 7 kwietnia 2021 13:35:07 CEST Maxime d'Estienne pisze: > Hi ! > > Here is the log file. First error occurs at line 117. I have couple of questions there: 1. What version of Neutron are You using exactly? It seems from that log that You don't have patch https://github.com/openstack/neutron/commit/74c51a2e5390f258290ee890c9218beb5fdfd29c in Your code. 2. What mechanism drivers do You have enabled in Your ML2 config? In logs there should be lines e.g. like https://github.com/openstack/neutron/blob/34d6fbcc2a67eac45ad6f841903f656ef7118614/neutron/plugins/ml2/drivers/mech_agent.py#L87 but I don't see any line like that in Your log. > > Thank you ! > > Le mer. 7 avr. 2021 à 12:04, Slawek Kaplonski a > écrit : > > > Hi, > > > > Can You send me full neutron-server log? I will check if there is anything > > more there. > > > > Dnia środa, 7 kwietnia 2021 11:48:27 CEST Maxime d'Estienne pisze: > > > As Slawek Kaplonski told me, I enabled neutron debugging and I didn't > > find > > > why specific mechanism drivers are refusing to bind ports > > > on that host. 
> > > > > > I noticed that the VM can get an IP from DHCP, I see a link on the web > > > interface (network topology) between my physical network "provider" and > > the > > > VM. But this link disappeared when the VM crashed due to the error. > > > > > > Here are the previous DEBUG logs, just before the ERROR one. > > > > > > I don't succeed in getting more informed by these logs. > > > (/neutron/server.log) > > > > > > Thank you a lot for your time ! > > > Maxime > > > > > > `2021-04-07 10:10:30.294 25623 DEBUG > > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > > [req-a995e8eb-fde4-49be-b822-29f7e98b56d4 > > 9c53e456ca2d4d07a4aecbf91c487cae > > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > > excluded by > > > > policy engine: ['binding:profile', 'binding:host_id', > > 'binding:vif_type', > > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > > > 2021-04-07 10:10:30.995 25626 DEBUG > > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > > [req-706ad36e-31a1-4e5a-b9f6-17951ccb089a > > 9c53e456ca2d4d07a4aecbf91c487cae > > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > > excluded by > > > > policy engine: ['binding:profile', 'binding:host_id', > > 'binding:vif_type', > > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > > > 2021-04-07 10:10:31.105 25626 DEBUG > > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > > [req-446ed89e-0697-4822-b69b-49b02ad9732d > > 9c53e456ca2d4d07a4aecbf91c487cae > > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > > excluded by > > > > policy engine: ['binding:profile', 'binding:host_id', > > 'binding:vif_type', > > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > > > 2021-04-07 10:10:31.328 25623 DEBUG neutron.api.v2.base > > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > > b21b8901642c470b8f668965997c7922 > > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Request body: > > {'port': > > > > {'device_id': '6406a1b1-7f0b-4f8e-88dd-81dcded8299d', 'device_owner': > > > > 'compute:nova', 'binding:host_id': 'compute1'}} prepare_request_body > > > > /usr/lib/python3/dist-packages/neutron/api/v2/base.py:716 > > > > > > > > 2021-04-07 10:10:31.980 25623 DEBUG neutron.plugins.ml2.managers > > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > > b21b8901642c470b8f668965997c7922 > > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind > > port > > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type > > normal > > > > with profile bind_port > > > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:747 > > > > > > > > 2021-04-07 10:10:31.981 25623 DEBUG neutron.plugins.ml2.managers > > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > > b21b8901642c470b8f668965997c7922 > > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind > > port > > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 at level 0 using > > > > segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > > 'network_type': > > > > 'flat', 'physical_network': 'provider', 'segmentation_id': None, > > > > 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] _bind_port_level > > > > 
/usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:768 > > > > > > > > 2021-04-07 10:10:31.981 25623 ERROR neutron.plugins.ml2.managers > > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > > b21b8901642c470b8f668965997c7922 > > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Failed to bind port > > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type > > normal > > > > using segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > > > > 'network_type': 'flat', 'physical_network': 'provider', > > 'segmentation_id': > > > > None, 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] ` > > > > > > > > > > > Le jeu. 1 avr. 2021 à 21:36, Slawek Kaplonski a > > > écrit : > > > > > > > Hi, > > > > > > > > Dnia czwartek, 1 kwietnia 2021 14:44:21 CEST Maxime d'Estienne pisze: > > > > > Hello, > > > > > > > > > > I spent a lot of time troubleshooting my issue, which I described > > here : > > > > > > > > > > > https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding > > > > > > > > > > To summarize, when I want to create an instance, binding fails on > > compute > > > > > node, the dhcp agent seems to give an ip to the VM but I have an > > error. > > > > > > > > What do You mean exactly? Failed binding of the port in Neutron? In > > such > > > > case > > > > nova will not boot vm so it can't get IP from DHCP. > > > > > > > > > > > > > > I don't know where to dig, besides what I have done. > > > > > > > > Please enable debug logs in neutron-server and look in its logs for the > > > > reason > > > > why it failed to bind port on specific host. > > > > Usually reason is dead L2 agent on host or mismatch in the agent's > > bridge > > > > mappings configuration in the agent. > > > > > > > > > > > > > > Thanks a lot for your help ! > > > > > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From fungi at yuggoth.org Wed Apr 7 15:52:24 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Apr 2021 15:52:24 +0000 Subject: [Release-job-failures] Pre-release of openstack/neutron for ref refs/tags/18.0.0.0rc2 failed In-Reply-To: <2e0d97ea-e0ca-274c-7b0a-0bf77fd7603b@openstack.org> References: <2e0d97ea-e0ca-274c-7b0a-0bf77fd7603b@openstack.org> Message-ID: <20210407155223.wy63hhj2d54kg7gn@yuggoth.org> On 2021-04-07 11:05:20 +0200 (+0200), Thierry Carrez wrote: [...] > We are missing logs, but it looks like the job actually succeeded at > announcing the release: > > http://lists.openstack.org/pipermail/release-announce/2021-April/011022.html > > So this can be safely ignored. Yes, without looking that closely at it, the most likely cause was a known incident with authentication breakage for log uploads in one of our storage donors' clouds, as mentioned in our status log: https://wiki.openstack.org/wiki/Infrastructure_Status -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Wed Apr 7 16:24:41 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 7 Apr 2021 19:24:41 +0300 Subject: [TripleO] Xena PTG schedule please review Message-ID: Hello TripleO o/ Thanks again to everybody who has volunteered to lead a session for the coming Xena TripleO project teams gathering. I've had a go at the agenda [1] trying to keep it to max 4 or 5 sessions per day with some breaks. Please review the slot assigned for your session at [1]. If that time is not ok then please let me know as soon as possible and indicate if you want it later or earlier or on any other day. If you've decided the session no longer makes sense then also please tell me and we can move things around accordingly to finish earlier. I'd like to finalise the schedule by next Monday 12 April which is a week before PTG. We can and likely will make changes after this date but last minute changes are best avoided to allow folks to schedule their PTG attendance across projects. Thanks everybody for your help! Looking forward to interesting presentations and discussions as always regards, marios [1] https://etherpad.opendev.org/p/tripleo-ptg-xena From thierry at openstack.org Wed Apr 7 16:45:34 2021 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Apr 2021 18:45:34 +0200 Subject: [largescale-sig] Next meeting: April 7, 15utc In-Reply-To: References: Message-ID: We held our meeting today. We discussed future editions of our video meeting, our PTG presence, and progress on documenting the Scaling journey. Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-04-07-15.00.html Our next meeting will be Wednesday, April 21 at 15utc as part of the PTG! We'll be discussing the current state of the Large Scale SIG, and pick a topic for our next video meeting, which should be happening May 13. Regards, -- Thierry Carrez (ttx) From johfulto at redhat.com Wed Apr 7 16:54:52 2021 From: johfulto at redhat.com (John Fulton) Date: Wed, 7 Apr 2021 12:54:52 -0400 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: On Wed, Apr 7, 2021 at 12:27 PM Marios Andreou wrote: > Hello TripleO o/ > > Thanks again to everybody who has volunteered to lead a session for > the coming Xena TripleO project teams gathering. > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > sessions per day with some breaks. > > Please review the slot assigned for your session at [1]. If that time > is not ok then please let me know as soon as possible and indicate if > you want it later or earlier or on any other day. On Monday I see: 1. STORAGE: 1430-1510 (ceph) 2. DF: 1510-1550 (ephemeral heat) 3. DF/Networking: 1600-1700 (ports v2 "no heat") If Harald and James are OK with it, could it be changed to the following? A. DF: 1430-1510 (ephemeral heat) B. DF/Networking: 1510-1550 (ports v2 "no heat") C. STORAGE: 1600-1700 (ceph) I ask because a portion of C depends on B, so it would be helpful to have that context first. If the presenters have conflicts however, we don't need this change. Thanks, John > If you've decided > the session no longer makes sense then also please tell me and we can > move things around accordingly to finish earlier. > > I'd like to finalise the schedule by next Monday 12 April which is a > week before PTG. 
We can and likely will make changes after this date > but last minute changes are best avoided to allow folks to schedule > their PTG attendance across projects. > > Thanks everybody for your help! Looking forward to interesting > presentations and discussions as always > > regards, marios > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Apr 7 18:18:05 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Apr 2021 14:18:05 -0400 Subject: [cinder] final reviews for RC-2 Message-ID: We have 3 patches that need review/revision/approval as soon as possible before we release RC-2 tomorrow (Thursday 8 April). All 3 are updates to the release notes: Release note for mTLS support cinder->glance - https://review.opendev.org/c/openstack/cinder/+/783964 Release note about the cgroups v1 situation - https://review.opendev.org/c/openstack/cinder/+/784179 Add known issue note about RBD encrypted volumes - https://review.opendev.org/c/openstack/cinder/+/785235 Please review and leave comments as soon as you can. From gouthampravi at gmail.com Wed Apr 7 18:38:02 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 7 Apr 2021 11:38:02 -0700 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core Message-ID: Hello Zorillas, Vida's been our bug czar since the Ussuri release and she's conceptualized and executed our successful bug triage strategy. She has also painstakingly organized several documentation and code bug squash events and kept the pulse on multi-release efforts. She's taught me a lot about project management and you can see tangible results here, I suppose :) Liron's fixed a lot of test code bugs and covered some old and important test gaps over the past few releases. He's driving standardization of the tempest plugin and bringing in best practices from tempest, refstack and elsewhere into our testing. It's always a pleasure to work with Liron since he's happy to provide and welcome feedback. More recently, Liron and Vida have enabled us to work with the InteropWG and define refstack guidelines. They've also gotten us closer to members from the QA community who they work with more closely downstream. In short, they bring in different perspectives while also espousing the team's core values. So I'd like to propose their addition to the manila-tempest-plugin-core team. Please give me your +/- 1s for this proposal. Thanks, Goutham From viroel at gmail.com Wed Apr 7 18:41:01 2021 From: viroel at gmail.com (Douglas) Date: Wed, 7 Apr 2021 15:41:01 -0300 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: References: Message-ID: A big +1 for both. Thank you for your contributions so far. On Wed, Apr 7, 2021 at 3:39 PM Goutham Pacha Ravi wrote: > Hello Zorillas, > > Vida's been our bug czar since the Ussuri release and she's > conceptualized and executed our successful bug triage strategy. She > has also painstakingly organized several documentation and code bug > squash events and kept the pulse on multi-release efforts. She's > taught me a lot about project management and you can see tangible > results here, I suppose :) > > Liron's fixed a lot of test code bugs and covered some old and > important test gaps over the past few releases. 
He's driving > standardization of the tempest plugin and bringing in best practices > from tempest, refstack and elsewhere into our testing. It's always a > pleasure to work with Liron since he's happy to provide and welcome > feedback. > > More recently, Liron and Vida have enabled us to work with the > InteropWG and define refstack guidelines. They've also gotten us > closer to members from the QA community who they work with more > closely downstream. In short, they bring in different perspectives > while also espousing the team's core values. So I'd like to propose > their addition to the manila-tempest-plugin-core team. > > Please give me your +/- 1s for this proposal. > > Thanks, > Goutham > > -- Douglas Salles Viroel -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Apr 7 18:53:57 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 7 Apr 2021 13:53:57 -0500 Subject: [ptg] Secure RBAC and Policy Xena PTG sessoins Message-ID: Hey all, Several projects are working through RBAC overhauls and naturally sessions are cropping up for the PTG. I tried bouncing around to various policy sessions during the Wallaby PTG, but I didn't plan things out very well. As a result, I missed sessions, had duplicate conversations with multiple groups, and ended up being more reactive than I'd like. To prevent that, Ghanshyam and I have condensed all the policy/RBAC sessions we know about in a single etherpad [0]. I know most projects are still firming up their schedules, but I've written down the session times that we know of and organized them chronologically. My hope is that this will help us group similar discussions and reach broader consensus on topics easier and quicker. For example, keystone and nova have a cross-project session on Thursday to discuss how nova should handle consuming system-scoped tokens for project-specific operations. This topic certainly isn't exclusive to nova. It'll impact just about every other service and approaching it consistently will be huge for end users and operators. Another good example of this would be the glance refactor to integrate system-scope support we're going to talk about on Wednesday (cinder and barbican are potentially facing very similar refactors). Each session in the etherpad [0] has topics, so if a topic sounds relevant to your service, please feel free to drop into those discussions. A rough outline is that: - Monday we're going to focus on QA and general policy problems (e.g., converting tempest to use system-scope, the JSON->YAML community goal, overall status from Wallaby, etc) - Tuesday we're going to find ways to adopt system-scope in cinder - Wednesday we're going to work through system-scope adoption, the meta definitions API, and test coverage in glance - Thursday we're going to discuss what the experience should be like for operators using system-scoped tokens to do project-specific operations with nova (e.g., rebooting instances) I'm contemplating hosting a 30 minute recap session on Friday that attempts to summarize everything from the week regarding RBAC discussions. If that sounds useful, I'll ask Kristi if I can use one of the keystone sessions for that recap. I know, this feels like a lot of focus for one thing and I appreciate everyone's help working through this stuff. But, I'm hopeful that better organization throughout the PTG week will result in less confusion about what we plan to do in Xena with RBAC so we can deliver something useful to users and operators. 
Thanks, Lance [0] https://etherpad.opendev.org/p/policy-popup-xena-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Apr 7 18:59:19 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 7 Apr 2021 11:59:19 -0700 Subject: [ptg] Secure RBAC and Policy Xena PTG sessoins In-Reply-To: References: Message-ID: I think a 30 minute re-cap session would be good on Friday because not everyone is going to be able to attend every session, depending on their own resulting schedule and commitments. -Julia On Wed, Apr 7, 2021 at 11:56 AM Lance Bragstad wrote: > > Hey all, > > Several projects are working through RBAC overhauls and naturally sessions are cropping up for the PTG. > > I tried bouncing around to various policy sessions during the Wallaby PTG, but I didn't plan things out very well. As a result, I missed sessions, had duplicate conversations with multiple groups, and ended up being more reactive than I'd like. > > To prevent that, Ghanshyam and I have condensed all the policy/RBAC sessions we know about in a single etherpad [0]. > > I know most projects are still firming up their schedules, but I've written down the session times that we know of and organized them chronologically. My hope is that this will help us group similar discussions and reach broader consensus on topics easier and quicker. > > For example, keystone and nova have a cross-project session on Thursday to discuss how nova should handle consuming system-scoped tokens for project-specific operations. This topic certainly isn't exclusive to nova. It'll impact just about every other service and approaching it consistently will be huge for end users and operators. Another good example of this would be the glance refactor to integrate system-scope support we're going to talk about on Wednesday (cinder and barbican are potentially facing very similar refactors). Each session in the etherpad [0] has topics, so if a topic sounds relevant to your service, please feel free to drop into those discussions. > > A rough outline is that: > > - Monday we're going to focus on QA and general policy problems (e.g., converting tempest to use system-scope, the JSON->YAML community goal, overall status from Wallaby, etc) > - Tuesday we're going to find ways to adopt system-scope in cinder > - Wednesday we're going to work through system-scope adoption, the meta definitions API, and test coverage in glance > - Thursday we're going to discuss what the experience should be like for operators using system-scoped tokens to do project-specific operations with nova (e.g., rebooting instances) > > I'm contemplating hosting a 30 minute recap session on Friday that attempts to summarize everything from the week regarding RBAC discussions. If that sounds useful, I'll ask Kristi if I can use one of the keystone sessions for that recap. > > I know, this feels like a lot of focus for one thing and I appreciate everyone's help working through this stuff. But, I'm hopeful that better organization throughout the PTG week will result in less confusion about what we plan to do in Xena with RBAC so we can deliver something useful to users and operators. 
> > Thanks, > > Lance > > [0] https://etherpad.opendev.org/p/policy-popup-xena-ptg From ces.eduardo98 at gmail.com Wed Apr 7 19:04:00 2021 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Wed, 7 Apr 2021 16:04:00 -0300 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: References: Message-ID: Big +1! Thank you, Liron and Vida! :) Em qua., 7 de abr. de 2021 às 15:40, Goutham Pacha Ravi < gouthampravi at gmail.com> escreveu: > Hello Zorillas, > > Vida's been our bug czar since the Ussuri release and she's > conceptualized and executed our successful bug triage strategy. She > has also painstakingly organized several documentation and code bug > squash events and kept the pulse on multi-release efforts. She's > taught me a lot about project management and you can see tangible > results here, I suppose :) > > Liron's fixed a lot of test code bugs and covered some old and > important test gaps over the past few releases. He's driving > standardization of the tempest plugin and bringing in best practices > from tempest, refstack and elsewhere into our testing. It's always a > pleasure to work with Liron since he's happy to provide and welcome > feedback. > > More recently, Liron and Vida have enabled us to work with the > InteropWG and define refstack guidelines. They've also gotten us > closer to members from the QA community who they work with more > closely downstream. In short, they bring in different perspectives > while also espousing the team's core values. So I'd like to propose > their addition to the manila-tempest-plugin-core team. > > Please give me your +/- 1s for this proposal. > > Thanks, > Goutham > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Apr 7 19:25:00 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 7 Apr 2021 14:25:00 -0500 Subject: [ptg] Secure RBAC and Policy Xena PTG sessoins In-Reply-To: References: Message-ID: On Wed, Apr 7, 2021 at 1:59 PM Julia Kreger wrote: > I think a 30 minute re-cap session would be good on Friday because not > everyone is going to be able to attend every session, depending on > their own resulting schedule and commitments. > +1 Tentatively added to the keystone schedule for Friday. I'll see what Kristi thinks. > > -Julia > > On Wed, Apr 7, 2021 at 11:56 AM Lance Bragstad > wrote: > > > > Hey all, > > > > Several projects are working through RBAC overhauls and naturally > sessions are cropping up for the PTG. > > > > I tried bouncing around to various policy sessions during the Wallaby > PTG, but I didn't plan things out very well. As a result, I missed > sessions, had duplicate conversations with multiple groups, and ended up > being more reactive than I'd like. > > > > To prevent that, Ghanshyam and I have condensed all the policy/RBAC > sessions we know about in a single etherpad [0]. > > > > I know most projects are still firming up their schedules, but I've > written down the session times that we know of and organized them > chronologically. My hope is that this will help us group similar > discussions and reach broader consensus on topics easier and quicker. > > > > For example, keystone and nova have a cross-project session on Thursday > to discuss how nova should handle consuming system-scoped tokens for > project-specific operations. This topic certainly isn't exclusive to nova. > It'll impact just about every other service and approaching it consistently > will be huge for end users and operators. 
Another good example of this > would be the glance refactor to integrate system-scope support we're going > to talk about on Wednesday (cinder and barbican are potentially facing very > similar refactors). Each session in the etherpad [0] has topics, so if a > topic sounds relevant to your service, please feel free to drop into those > discussions. > > > > A rough outline is that: > > > > - Monday we're going to focus on QA and general policy problems (e.g., > converting tempest to use system-scope, the JSON->YAML community goal, > overall status from Wallaby, etc) > > - Tuesday we're going to find ways to adopt system-scope in cinder > > - Wednesday we're going to work through system-scope adoption, the meta > definitions API, and test coverage in glance > > - Thursday we're going to discuss what the experience should be like for > operators using system-scoped tokens to do project-specific operations with > nova (e.g., rebooting instances) > > > > I'm contemplating hosting a 30 minute recap session on Friday that > attempts to summarize everything from the week regarding RBAC discussions. > If that sounds useful, I'll ask Kristi if I can use one of the keystone > sessions for that recap. > > > > I know, this feels like a lot of focus for one thing and I appreciate > everyone's help working through this stuff. But, I'm hopeful that better > organization throughout the PTG week will result in less confusion about > what we plan to do in Xena with RBAC so we can deliver something useful to > users and operators. > > > > Thanks, > > > > Lance > > > > [0] https://etherpad.opendev.org/p/policy-popup-xena-ptg > -------------- next part -------------- An HTML attachment was scrubbed... URL: From destienne.maxime at gmail.com Wed Apr 7 11:35:07 2021 From: destienne.maxime at gmail.com (Maxime d'Estienne) Date: Wed, 7 Apr 2021 13:35:07 +0200 Subject: [neutron][nova] Port binding fails when creating an instance In-Reply-To: <3513595.c0HGFkD9VC@p1> References: <3930281.aCZO8KT43X@p1> <3513595.c0HGFkD9VC@p1> Message-ID: Hi ! Here is the log file. First error occurs at line 117. Thank you ! Le mer. 7 avr. 2021 à 12:04, Slawek Kaplonski a écrit : > Hi, > > Can You send me full neutron-server log? I will check if there is anything > more there. > > Dnia środa, 7 kwietnia 2021 11:48:27 CEST Maxime d'Estienne pisze: > > As Slawek Kaplonski told me, I enabled neutron debugging and I didn't > find > > why specific mechanism drivers are refusing to bind ports > > on that host. > > > > I noticed that the VM can get an IP from DHCP, I see a link on the web > > interface (network topology) between my physical network "provider" and > the > > VM. But this link disappeared when the VM crashed due to the error. > > > > Here are the previous DEBUG logs, just before the ERROR one. > > > > I don't succeed in getting more informed by these logs. > > (/neutron/server.log) > > > > Thank you a lot for your time ! 
> > Maxime > > > > `2021-04-07 10:10:30.294 25623 DEBUG > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > [req-a995e8eb-fde4-49be-b822-29f7e98b56d4 > 9c53e456ca2d4d07a4aecbf91c487cae > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > excluded by > > > policy engine: ['binding:profile', 'binding:host_id', > 'binding:vif_type', > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > 2021-04-07 10:10:30.995 25626 DEBUG > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > [req-706ad36e-31a1-4e5a-b9f6-17951ccb089a > 9c53e456ca2d4d07a4aecbf91c487cae > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > excluded by > > > policy engine: ['binding:profile', 'binding:host_id', > 'binding:vif_type', > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > 2021-04-07 10:10:31.105 25626 DEBUG > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > [req-446ed89e-0697-4822-b69b-49b02ad9732d > 9c53e456ca2d4d07a4aecbf91c487cae > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > excluded by > > > policy engine: ['binding:profile', 'binding:host_id', > 'binding:vif_type', > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > 2021-04-07 10:10:31.328 25623 DEBUG neutron.api.v2.base > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > b21b8901642c470b8f668965997c7922 > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Request body: > {'port': > > > {'device_id': '6406a1b1-7f0b-4f8e-88dd-81dcded8299d', 'device_owner': > > > 'compute:nova', 'binding:host_id': 'compute1'}} prepare_request_body > > > /usr/lib/python3/dist-packages/neutron/api/v2/base.py:716 > > > > > > 2021-04-07 10:10:31.980 25623 DEBUG neutron.plugins.ml2.managers > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > b21b8901642c470b8f668965997c7922 > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind > port > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type > normal > > > with profile bind_port > > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:747 > > > > > > 2021-04-07 10:10:31.981 25623 DEBUG neutron.plugins.ml2.managers > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > b21b8901642c470b8f668965997c7922 > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind > port > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 at level 0 using > > > segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > 'network_type': > > > 'flat', 'physical_network': 'provider', 'segmentation_id': None, > > > 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] _bind_port_level > > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:768 > > > > > > 2021-04-07 10:10:31.981 25623 ERROR neutron.plugins.ml2.managers > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > b21b8901642c470b8f668965997c7922 > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Failed to bind port > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type > normal > > > using segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > > > 'network_type': 'flat', 'physical_network': 'provider', > 'segmentation_id': > > > None, 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] ` > > > > > > > > 
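A quick way to narrow down a "Failed to bind port ... physical_network 'provider'" error like the one above, along the lines Slawek suggests further down, is to check the L2 agent on the failing host and its mapping for that physical network. A rough sketch, assuming the ML2/OVS agent is in use (the bridge name br-provider is only an example):

    # is the L2 agent on compute1 alive?
    openstack network agent list --host compute1

    # does the agent map the 'provider' physical network to a bridge?
    # (typically /etc/neutron/plugins/ml2/openvswitch_agent.ini)
    grep bridge_mappings /etc/neutron/plugins/ml2/openvswitch_agent.ini
    # expected something like:
    #   [ovs]
    #   bridge_mappings = provider:br-provider

    # the bridge named in the mapping must exist on that host
    ovs-vsctl br-exists br-provider && echo "bridge present"

With the linuxbridge agent the equivalent setting is physical_interface_mappings in linuxbridge_agent.ini. If the physical network name configured on the host does not match the one in the error ('provider'), no mechanism driver can claim the port and binding fails exactly like this.
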
Le jeu. 1 avr. 2021 à 21:36, Slawek Kaplonski a > > écrit : > > > > > Hi, > > > > > > Dnia czwartek, 1 kwietnia 2021 14:44:21 CEST Maxime d'Estienne pisze: > > > > Hello, > > > > > > > > I spent a lot of time troubleshooting my issue, which I described > here : > > > > > > > > https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding > > > > > > > > To summarize, when I want to create an instance, binding fails on > compute > > > > node, the dhcp agent seems to give an ip to the VM but I have an > error. > > > > > > What do You mean exactly? Failed binding of the port in Neutron? In > such > > > case > > > nova will not boot vm so it can't get IP from DHCP. > > > > > > > > > > > I don't know where to dig, besides what I have done. > > > > > > Please enable debug logs in neutron-server and look in its logs for the > > > reason > > > why it failed to bind port on specific host. > > > Usually reason is dead L2 agent on host or mismatch in the agent's > bridge > > > mappings configuration in the agent. > > > > > > > > > > > Thanks a lot for your help ! > > > > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: server.log Type: application/octet-stream Size: 454640 bytes Desc: not available URL: From eng.taha1928 at gmail.com Wed Apr 7 17:17:10 2021 From: eng.taha1928 at gmail.com (Taha Adel) Date: Wed, 7 Apr 2021 19:17:10 +0200 Subject: [Keystone] Managing keystone tokens in high availability environment Message-ID: Hello Engineers and Developers, I'm currently deploying a three-nodes openstack controller cluster, controller-01, controller-02, anc controller-03. I have installed the keystone service on the three controllers and generated fernet keys on one node and distributed the keys to the other nodes of the cluster. Hence, I have configured an HAProxy in front of them that would distribute the incoming requests over them. The issue is, when I try to access the keystone endpoint from using the VIP of the loadbalancer, the service works ONLY on the node that I have generated the keys on, and it doesn't work on the nodes that got the keys by distribution. the error message I have got is *"INTERNAL SERVER ERROR (500)"* In other words, the node that had* keystone-manage fernet_setup *command ran on it, it can run the service properly, but the others can't. Is the way of replicating the key incorrect? is there any other way? Thanks in advance -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Wed Apr 7 18:37:03 2021 From: helena at openstack.org (helena at openstack.org) Date: Wed, 7 Apr 2021 14:37:03 -0400 (EDT) Subject: [ptl] Wallaby Release Community Meeting Message-ID: <1617820623.770226846@apps.rackspace.com> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Project Updates Template.pptx Type: application/octet-stream Size: 791921 bytes Desc: not available URL: From tpb at dyncloud.net Wed Apr 7 19:44:36 2021 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 7 Apr 2021 15:44:36 -0400 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: References: Message-ID: <20210407194436.vbtmfwts3r7ighh3@barron.net> Ditto, including the big thanks. On 07/04/21 16:04 -0300, Carlos Silva wrote: >Big +1! > >Thank you, Liron and Vida! :) > >Em qua., 7 de abr. de 2021 às 15:40, Goutham Pacha Ravi < >gouthampravi at gmail.com> escreveu: > >> Hello Zorillas, >> >> Vida's been our bug czar since the Ussuri release and she's >> conceptualized and executed our successful bug triage strategy. She >> has also painstakingly organized several documentation and code bug >> squash events and kept the pulse on multi-release efforts. She's >> taught me a lot about project management and you can see tangible >> results here, I suppose :) >> >> Liron's fixed a lot of test code bugs and covered some old and >> important test gaps over the past few releases. He's driving >> standardization of the tempest plugin and bringing in best practices >> from tempest, refstack and elsewhere into our testing. It's always a >> pleasure to work with Liron since he's happy to provide and welcome >> feedback. >> >> More recently, Liron and Vida have enabled us to work with the >> InteropWG and define refstack guidelines. They've also gotten us >> closer to members from the QA community who they work with more >> closely downstream. In short, they bring in different perspectives >> while also espousing the team's core values. So I'd like to propose >> their addition to the manila-tempest-plugin-core team. >> >> Please give me your +/- 1s for this proposal. >> >> Thanks, >> Goutham >> >> From rlandy at redhat.com Wed Apr 7 19:48:19 2021 From: rlandy at redhat.com (Ronelle Landy) Date: Wed, 7 Apr 2021 15:48:19 -0400 Subject: [tripleo][DIB] Second core review/w+ requested on DIB patch to unblock tripleo promotions Message-ID: Hi diskimage-builder cores, OVB jobs have been failing on centos-8 based releases promotion jobs since the move to container-tools:3.0. https://review.opendev.org/c/openstack/diskimage-builder/+/785138 - "Make DIB_DNF_MODULE_STREAMS part of yum element" and https://review.opendev.org/c/openstack/tripleo-ci/+/785087 - "Use dib_dnf_module_streams for enabling modules" patches were added to address the failures. These patches were tested in https://review.rdoproject.org/r/c/testproject/+/33138 - see passing OVB job https://review.rdoproject.org/zuul/build/6582fb8afa2a44c6a806bb2545a9fadf. Thanks to Carlos for reviewing the DIB patch. We are looking for a second core review and w+ on this patch to clear the promotion lines. Thank you, tripleo CI ruck/rovers -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Apr 7 19:50:45 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 7 Apr 2021 12:50:45 -0700 Subject: [all] vPTG April 2021 Registration & Schedule Message-ID: Hello everyone! The April 2021 Project Teams Gathering is right around the corner! The official schedule has now been posted on the PTG website [1], the PTGbot will be up to date by the end of the week [2], and we have also attached it to this email. Please double check your rooms! We did a little consolidation and shifting while maintaining the times you signed up for. 
Friendly reminder, if you have not already registered, please do so [3]. It is important that we get everyone to register for the event as this is how we will contact you about tooling information/passwords and other event details. Please let us know if you have any questions! Cheers, - The Kendalls (diablo_rojo & wendallkaters) [1] PTG Website www.openstack.org/ptg [2] PTGbot: http://ptg.openstack.org/ptg.html [3] PTG Registration: https://april2021-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Apr 7 20:11:00 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 07 Apr 2021 22:11:00 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: <6282510.b6OWOPelg3@p1> Hi, Dnia środa, 7 kwietnia 2021 20:37:03 CEST helena at openstack.org pisze: > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. Thx for doing that. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. I will do the update for Neutron. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From luke.camilleri at zylacomputing.com Wed Apr 7 20:39:37 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Wed, 7 Apr 2021 22:39:37 +0200 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Hello Ammad and thanks for your assistance. I followed the guide and it has all the details and steps except for one thing, the ssh key is not being passed over to the instance, if I deploy an instance from that image and pass the ssh key it works fine but if I use the image as part of the HOT it lists the key as "-" Did you have this issue by any chance? Never thought I would be asking this question as it is a basic thing but I find it very strange that this is not working. I tried to pass the ssh key in either the template or in the cluster creation command but for both options the Key Name metadata option for the instance remains "None" when the instance is deployed. 
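From what I understand, the fedora-coreos driver is not supposed to rely on Nova's key_name at all; the key should instead reach the node through the Ignition user data. My assumption is that the rendered user_data.json ends up holding something roughly like this for the default core user (field names are from the Ignition v3 spec; the exact layout magnum generates may differ, and the key string is just a placeholder):

    {
      "ignition": { "version": "3.0.0" },
      "passwd": {
        "users": [
          {
            "name": "core",
            "sshAuthorizedKeys": [ "ssh-rsa AAAA... myuser@mydesktop" ]
          }
        ]
      }
    }

So even with key_name missing from the server resource, the key ought to end up authorized for the core user once Ignition runs on first boot.
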
I then went on and checked the yaml file the resource uses that loads/gets the parameters /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml has the below yaml configurations: kube-master:     type: OS::Nova::Server     condition: image_based     properties:       name: {get_param: name}       image: {get_param: server_image}       flavor: {get_param: master_flavor}                                                 MISSING ----->   key_name: {get_param: ssh_key_name}       user_data_format: SOFTWARE_CONFIG       software_config_transport: POLL_SERVER_HEAT       user_data: {get_resource: agent_config}       networks:         - port: {get_resource: kube_master_eth0}       scheduler_hints: { group: { get_param: nodes_server_group_id }}       availability_zone: {get_param: availability_zone} kube-master-bfv:     type: OS::Nova::Server     condition: volume_based     properties:       name: {get_param: name}       flavor: {get_param: master_flavor}                                                 MISSING ----->   key_name: {get_param: ssh_key_name}       user_data_format: SOFTWARE_CONFIG       software_config_transport: POLL_SERVER_HEAT       user_data: {get_resource: agent_config}       networks:         - port: {get_resource: kube_master_eth0}       scheduler_hints: { group: { get_param: nodes_server_group_id }}       availability_zone: {get_param: availability_zone}       block_device_mapping_v2:         - boot_index: 0           volume_id: {get_resource: kube_node_volume} If i add the lines which show as missing, then everything works well and the key is actually injected in the kubemaster. Did anyone had this issue? On 07/04/2021 10:24, Ammad Syed wrote: > Hi Luke, > > You may refer to below guide for magnum installation and its template > > https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 > > > It worked pretty well for me. > > - Ammad > On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri > > wrote: > > Thanks for your quick reply. Do you have a download link for that > image as I cannot find an archive for the 32 release? > > As for the image upload into openstack you still use the > fedora-atomic property right to be available for coe deployments? > > On 07/04/2021 00:03, feilong wrote: >> >> Hi Luke, >> >> The Fedora Atomic driver has been deprecated a while since the >> Fedora Atomic has been deprecated by upstream. For now, I would >> suggest using Fedora CoreOS 32.20201104.3.0 >> >> The latest version of Fedora CoreOS is 33.xxx, but there are >> something when booting based my testing, see >> https://github.com/coreos/fedora-coreos-tracker/issues/735 >> >> >> Please feel free to let me know if you have any question about >> using Magnum. We're using stable/victoria on our public cloud and >> it works very well. I can share our public templates if you want. >> Cheers. >> >> >> >> On 7/04/21 9:51 am, Luke Camilleri wrote: >>> >>> We have insatlled magnum following the installation guide here >>> https://docs.openstack.org/magnum/victoria/install/install-rdo.html >>> >>> and the process was quite smooth but we have been having some >>> issues with the deployment of the clusters. 
>>> >>> The image being used as per the documentation is >>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>> >>> >>> Our first issue was that podman was being used even if we >>> specified the use_podman=false (since the image above did not >>> include podman) but this was resulting in a timeout and the >>> cluster would fail to deploy. We have then installed podman in >>> the image and the cluster progressed a bit further >>> >>> /+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping >>> 5s'// >>> //+ sleep 5s// >>> //+ ssh -F /srv/magnum/.ssh/config root at localhost >>> '/usr/bin/podman run --entrypoint /bin/bash     --name >>> install-kubectl     --net host --privileged     --rm     --user >>> root --volume /srv/magnum/bin:/host/srv/magnum/bin >>> k8s.gcr.io/hyperkube:v1.15.7 >>> -c '\''cp >>> /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''// >>> //bash: /usr/bin/podman: No such file or directory// >>> //ERROR Unable to install kubectl. Abort.// >>> //+ i=61// >>> //+ '[' 61 -gt 60 ']'// >>> //+ echo 'ERROR Unable to install kubectl. Abort.'// >>> //+ exit 1/ >>> >>> The cluster is now failing here at "kube_cluster_deploy" and >>> when checking the logs on the master node we noticed the >>> following in the log files: >>> >>> /Starting to run kube-apiserver-to-kubelet-role// >>> //Waiting for Kubernetes API...// >>> //+ echo 'Waiting for Kubernetes API...'// >>> //++ curl --silent http://127.0.0.1:8080/healthz >>> // >>> //+ '[' ok = '' ']'// >>> //+ sleep 5/ >>> >>> This is because the kubernetes API server is not installed >>> either. I have noticed some scripts that should handle the >>> installation but I would like to know if anyone here has had >>> similar issues with a clean Victoria installation. >>> >>> Also should we have to install any packages in the fedora atomic >>> image file or should the installation requirements be part of >>> the stack? >>> >>> Thanks in advance for any asistance >>> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email:flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House,150 Willis Street, Wellington >> ------------------------------------------------------ > > -- > Regards, > > > Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From bharat at stackhpc.com Wed Apr 7 20:54:05 2021 From: bharat at stackhpc.com (Bharat Kunwar) Date: Wed, 7 Apr 2021 21:54:05 +0100 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: <4A94086F-F79A-4EC4-8E3F-A6AE8EDF4C16@stackhpc.com> The ssh key gets injected via ignition which is why it’s not present in the HOT template. You need minimum train release of Heat for this to work however. Sent from my iPhone > On 7 Apr 2021, at 21:45, Luke Camilleri wrote: > >  > Hello Ammad and thanks for your assistance. I followed the guide and it has all the details and steps except for one thing, the ssh key is not being passed over to the instance, if I deploy an instance from that image and pass the ssh key it works fine but if I use the image as part of the HOT it lists the key as "-" > > Did you have this issue by any chance? Never thought I would be asking this question as it is a basic thing but I find it very strange that this is not working. 
I tried to pass the ssh key in either the template or in the cluster creation command but for both options the Key Name metadata option for the instance remains "None" when the instance is deployed. > > I then went on and checked the yaml file the resource uses that loads/gets the parameters /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml has the below yaml configurations: > > kube-master: > type: OS::Nova::Server > condition: image_based > properties: > name: {get_param: name} > image: {get_param: server_image} > flavor: {get_param: master_flavor} > MISSING -----> key_name: {get_param: ssh_key_name} > user_data_format: SOFTWARE_CONFIG > software_config_transport: POLL_SERVER_HEAT > user_data: {get_resource: agent_config} > networks: > - port: {get_resource: kube_master_eth0} > scheduler_hints: { group: { get_param: nodes_server_group_id }} > availability_zone: {get_param: availability_zone} > > kube-master-bfv: > type: OS::Nova::Server > condition: volume_based > properties: > name: {get_param: name} > flavor: {get_param: master_flavor} > MISSING -----> key_name: {get_param: ssh_key_name} > user_data_format: SOFTWARE_CONFIG > software_config_transport: POLL_SERVER_HEAT > user_data: {get_resource: agent_config} > networks: > - port: {get_resource: kube_master_eth0} > scheduler_hints: { group: { get_param: nodes_server_group_id }} > availability_zone: {get_param: availability_zone} > block_device_mapping_v2: > - boot_index: 0 > volume_id: {get_resource: kube_node_volume} > > If i add the lines which show as missing, then everything works well and the key is actually injected in the kubemaster. Did anyone had this issue? > >> On 07/04/2021 10:24, Ammad Syed wrote: >> Hi Luke, >> >> You may refer to below guide for magnum installation and its template >> >> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 >> >> It worked pretty well for me. >> >> - Ammad >> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri wrote: >>> Thanks for your quick reply. Do you have a download link for that image as I cannot find an archive for the 32 release? >>> >>> As for the image upload into openstack you still use the fedora-atomic property right to be available for coe deployments? >>> >>>> On 07/04/2021 00:03, feilong wrote: >>>> Hi Luke, >>>> >>>> The Fedora Atomic driver has been deprecated a while since the Fedora Atomic has been deprecated by upstream. For now, I would suggest using Fedora CoreOS 32.20201104.3.0 >>>> >>>> The latest version of Fedora CoreOS is 33.xxx, but there are something when booting based my testing, see https://github.com/coreos/fedora-coreos-tracker/issues/735 >>>> >>>> Please feel free to let me know if you have any question about using Magnum. We're using stable/victoria on our public cloud and it works very well. I can share our public templates if you want. Cheers. >>>> >>>> >>>> >>>> >>>> >>>>> On 7/04/21 9:51 am, Luke Camilleri wrote: >>>>> We have insatlled magnum following the installation guide here https://docs.openstack.org/magnum/victoria/install/install-rdo.html and the process was quite smooth but we have been having some issues with the deployment of the clusters. 
>>>>> >>>>> The image being used as per the documentation is https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>>>> >>>>> Our first issue was that podman was being used even if we specified the use_podman=false (since the image above did not include podman) but this was resulting in a timeout and the cluster would fail to deploy. We have then installed podman in the image and the cluster progressed a bit further >>>>> >>>>> + echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s' >>>>> + sleep 5s >>>>> + ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\''' >>>>> bash: /usr/bin/podman: No such file or directory >>>>> ERROR Unable to install kubectl. Abort. >>>>> + i=61 >>>>> + '[' 61 -gt 60 ']' >>>>> + echo 'ERROR Unable to install kubectl. Abort.' >>>>> + exit 1 >>>>> >>>>> The cluster is now failing here at "kube_cluster_deploy" and when checking the logs on the master node we noticed the following in the log files: >>>>> >>>>> Starting to run kube-apiserver-to-kubelet-role >>>>> Waiting for Kubernetes API... >>>>> + echo 'Waiting for Kubernetes API...' >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> >>>>> This is because the kubernetes API server is not installed either. I have noticed some scripts that should handle the installation but I would like to know if anyone here has had similar issues with a clean Victoria installation. >>>>> >>>>> Also should we have to install any packages in the fedora atomic image file or should the installation requirements be part of the stack? >>>>> >>>>> Thanks in advance for any asistance >>>>> >>>> -- >>>> Cheers & Best regards, >>>> Feilong Wang (王飞龙) >>>> ------------------------------------------------------ >>>> Senior Cloud Software Engineer >>>> Tel: +64-48032246 >>>> Email: flwang at catalyst.net.nz >>>> Catalyst IT Limited >>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>> ------------------------------------------------------ >> -- >> Regards, >> >> >> Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.camilleri at zylacomputing.com Wed Apr 7 21:12:44 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Wed, 7 Apr 2021 23:12:44 +0200 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: <4A94086F-F79A-4EC4-8E3F-A6AE8EDF4C16@stackhpc.com> References: <4A94086F-F79A-4EC4-8E3F-A6AE8EDF4C16@stackhpc.com> Message-ID: Hi Bharat, I am on Victoria so that should satisfy the requirement: # rpm -qa | grep -i heat openstack-heat-api-cfn-15.0.0-1.el8.noarch openstack-heat-api-15.0.0-1.el8.noarch python3-heatclient-2.2.1-2.el8.noarch openstack-heat-common-15.0.0-1.el8.noarch openstack-heat-engine-15.0.0-1.el8.noarch openstack-heat-ui-4.0.0-1.el8.noarch So from what I can see during the stack's step at OS::Heat::SoftwareConfig is the step that gets the data right? 
agent_config:     type: OS::Heat::SoftwareConfig     properties:       group: ungrouped       config:         list_join:           - "\n"           -             - str_replace:                 template: {get_file: user_data.json}                 params:                   __HOSTNAME__: {get_param: name}                   __SSH_KEY_VALUE__: {get_param: ssh_public_key}                   __OPENSTACK_CA__: {get_param: openstack_ca}                   __CONTAINER_INFRA_PREFIX__: In the stack I can see that the step below which corresponds to the agent_config above and has just been initialized: kube_cluster_config OS::Heat::SoftwareConfig 46 minutes Init Complete My question here would be: 1- is the file the user_data 2- at which step is this data aplied to the instance as from the fedora docs ( https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview ) this step seems to be at the initial stages of the boot process Thanks in advance for any assistance On 07/04/2021 22:54, Bharat Kunwar wrote: > The ssh key gets injected via ignition which is why it’s not present > in the HOT template. You need minimum train release of Heat for this > to work however. > > Sent from my iPhone > >> On 7 Apr 2021, at 21:45, Luke Camilleri >> wrote: >> >>  >> >> Hello Ammad and thanks for your assistance. I followed the guide and >> it has all the details and steps except for one thing, the ssh key is >> not being passed over to the instance, if I deploy an instance from >> that image and pass the ssh key it works fine but if I use the image >> as part of the HOT it lists the key as "-" >> >> Did you have this issue by any chance? Never thought I would be >> asking this question as it is a basic thing but I find it very >> strange that this is not working. I tried to pass the ssh key in >> either the template or in the cluster creation command but for both >> options the Key Name metadata option for the instance remains "None" >> when the instance is deployed. 
>> >> I then went on and checked the yaml file the resource uses that >> loads/gets the parameters >> /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml >> has the below yaml configurations: >> >> kube-master: >>     type: OS::Nova::Server >>     condition: image_based >>     properties: >>       name: {get_param: name} >>       image: {get_param: server_image} >>       flavor: {get_param: master_flavor} >>                                                 MISSING ----->   >> key_name: {get_param: ssh_key_name} >>       user_data_format: SOFTWARE_CONFIG >>       software_config_transport: POLL_SERVER_HEAT >>       user_data: {get_resource: agent_config} >>       networks: >>         - port: {get_resource: kube_master_eth0} >>       scheduler_hints: { group: { get_param: nodes_server_group_id }} >>       availability_zone: {get_param: availability_zone} >> >> kube-master-bfv: >>     type: OS::Nova::Server >>     condition: volume_based >>     properties: >>       name: {get_param: name} >>       flavor: {get_param: master_flavor} >>                                                 MISSING ----->   >> key_name: {get_param: ssh_key_name} >>       user_data_format: SOFTWARE_CONFIG >>       software_config_transport: POLL_SERVER_HEAT >>       user_data: {get_resource: agent_config} >>       networks: >>         - port: {get_resource: kube_master_eth0} >>       scheduler_hints: { group: { get_param: nodes_server_group_id }} >>       availability_zone: {get_param: availability_zone} >>       block_device_mapping_v2: >>         - boot_index: 0 >>           volume_id: {get_resource: kube_node_volume} >> >> If i add the lines which show as missing, then everything works well >> and the key is actually injected in the kubemaster. Did anyone had >> this issue? >> >> On 07/04/2021 10:24, Ammad Syed wrote: >>> Hi Luke, >>> >>> You may refer to below guide for magnum installation and its template >>> >>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 >>> >>> >>> It worked pretty well for me. >>> >>> - Ammad >>> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri >>> >> > wrote: >>> >>> Thanks for your quick reply. Do you have a download link for >>> that image as I cannot find an archive for the 32 release? >>> >>> As for the image upload into openstack you still use the >>> fedora-atomic property right to be available for coe deployments? >>> >>> On 07/04/2021 00:03, feilong wrote: >>>> >>>> Hi Luke, >>>> >>>> The Fedora Atomic driver has been deprecated a while since the >>>> Fedora Atomic has been deprecated by upstream. For now, I would >>>> suggest using Fedora CoreOS 32.20201104.3.0 >>>> >>>> The latest version of Fedora CoreOS is 33.xxx, but there are >>>> something when booting based my testing, see >>>> https://github.com/coreos/fedora-coreos-tracker/issues/735 >>>> >>>> >>>> Please feel free to let me know if you have any question about >>>> using Magnum. We're using stable/victoria on our public cloud >>>> and it works very well. I can share our public templates if you >>>> want. Cheers. >>>> >>>> >>>> >>>> On 7/04/21 9:51 am, Luke Camilleri wrote: >>>>> >>>>> We have insatlled magnum following the installation guide here >>>>> https://docs.openstack.org/magnum/victoria/install/install-rdo.html >>>>> >>>>> and the process was quite smooth but we have been having some >>>>> issues with the deployment of the clusters. 
>>>>> >>>>> The image being used as per the documentation is >>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>>>> >>>>> >>>>> Our first issue was that podman was being used even if we >>>>> specified the use_podman=false (since the image above did not >>>>> include podman) but this was resulting in a timeout and the >>>>> cluster would fail to deploy. We have then installed podman in >>>>> the image and the cluster progressed a bit further >>>>> >>>>> /+ echo 'WARNING Attempt 60: Trying to install kubectl. >>>>> Sleeping 5s'// >>>>> //+ sleep 5s// >>>>> //+ ssh -F /srv/magnum/.ssh/config root at localhost >>>>> '/usr/bin/podman run --entrypoint /bin/bash     --name >>>>> install-kubectl     --net host --privileged     --rm     >>>>> --user root --volume /srv/magnum/bin:/host/srv/magnum/bin >>>>> k8s.gcr.io/hyperkube:v1.15.7 >>>>> -c '\''cp >>>>> /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''// >>>>> //bash: /usr/bin/podman: No such file or directory// >>>>> //ERROR Unable to install kubectl. Abort.// >>>>> //+ i=61// >>>>> //+ '[' 61 -gt 60 ']'// >>>>> //+ echo 'ERROR Unable to install kubectl. Abort.'// >>>>> //+ exit 1/ >>>>> >>>>> The cluster is now failing here at "kube_cluster_deploy" and >>>>> when checking the logs on the master node we noticed the >>>>> following in the log files: >>>>> >>>>> /Starting to run kube-apiserver-to-kubelet-role// >>>>> //Waiting for Kubernetes API...// >>>>> //+ echo 'Waiting for Kubernetes API...'// >>>>> //++ curl --silent http://127.0.0.1:8080/healthz >>>>> // >>>>> //+ '[' ok = '' ']'// >>>>> //+ sleep 5/ >>>>> >>>>> This is because the kubernetes API server is not installed >>>>> either. I have noticed some scripts that should handle the >>>>> installation but I would like to know if anyone here has had >>>>> similar issues with a clean Victoria installation. >>>>> >>>>> Also should we have to install any packages in the fedora >>>>> atomic image file or should the installation requirements be >>>>> part of the stack? >>>>> >>>>> Thanks in advance for any asistance >>>>> >>>> -- >>>> Cheers & Best regards, >>>> Feilong Wang (王飞龙) >>>> ------------------------------------------------------ >>>> Senior Cloud Software Engineer >>>> Tel: +64-48032246 >>>> Email:flwang at catalyst.net.nz >>>> Catalyst IT Limited >>>> Level 6, Catalyst House,150 Willis Street, Wellington >>>> ------------------------------------------------------ >>> >>> -- >>> Regards, >>> >>> >>> Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Apr 7 21:42:58 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 7 Apr 2021 14:42:58 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: Related, Is there 2020 user survey data available? On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org wrote: > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. 
Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > From allison at openstack.org Wed Apr 7 22:15:09 2021 From: allison at openstack.org (Allison Price) Date: Wed, 7 Apr 2021 17:15:09 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> Hi Julia, I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. Thanks! Allison > On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: > > Related, Is there 2020 user survey data available? > > On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org > wrote: >> >> Hello ptls, >> >> >> >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. >> >> >> >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. >> >> >> >> Let me know if you have any other questions! >> >> >> >> Thank you for your participation, >> >> Helena >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Apr 7 22:23:30 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Apr 2021 17:23:30 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 8th at 1500 UTC. In-Reply-To: <178a3f8d599.cf94285387564.6978079671458448803@ghanshyammann.com> References: <178a3f8d599.cf94285387564.6978079671458448803@ghanshyammann.com> Message-ID: <178ae6ef65f.1107939f863588.6814515042321844496@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on April 8th at 1500 UTC in #openstack-tc IRC channel. == Agenda for tomorrow's TC meeting == * Follow up on past action items * PTG ** https://etherpad.opendev.org/p/tc-xena-ptg * Gate performance and heavy job configs (dansmith) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * PTL assignment for Xena cycle leaderless projects (gmann) ** https://etherpad.opendev.org/p/xena-leaderless * Election for one Vacant TC seat (gmann) ** http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021334.html * Community newsletter: "OpenStack project news" snippets ** https://etherpad.opendev.org/p/newsletter-openstack-news * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Mon, 05 Apr 2021 16:38:17 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for April 8th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, April 7th, at 2100 UTC. 
> > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From gouthampravi at gmail.com Wed Apr 7 23:09:51 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 7 Apr 2021 16:09:51 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: On Wed, Apr 7, 2021 at 12:46 PM helena at openstack.org wrote: > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! Hi Helena, Thanks for the information. I'd like to sign up on behalf of the Manila project team. Is this a live presentation unlike last time where we pre-recorded ~10 minute updates? Thanks, Goutham > > > > Thank you for your participation, > > Helena > > From rosmaita.fossdev at gmail.com Thu Apr 8 00:20:37 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Apr 2021 20:20:37 -0400 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: On 4/7/21 7:09 PM, Goutham Pacha Ravi wrote: > On Wed, Apr 7, 2021 at 12:46 PM helena at openstack.org > wrote: >> >> Hello ptls, >> >> >> >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. >> >> >> >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. >> >> >> >> Let me know if you have any other questions! > > Hi Helena, > > Thanks for the information. I'd like to sign up on behalf of the > Manila project team. > Is this a live presentation unlike last time where we pre-recorded ~10 > minute updates? > > Thanks, > Goutham I'd like to sign up on behalf of Cinder. Same questions as Goutham, though: will it be "live", and what are your expectations about length of presentation? cheers, brian > > >> >> >> >> Thank you for your participation, >> >> Helena >> >> > From bharat at stackhpc.com Thu Apr 8 06:05:16 2021 From: bharat at stackhpc.com (Bharat Kunwar) Date: Thu, 8 Apr 2021 07:05:16 +0100 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Is your os_distro=fedora-coreos or fedora-atomic? Sent from my iPhone > On 7 Apr 2021, at 22:12, Luke Camilleri wrote: > >  > Hi Bharat, I am on Victoria so that should satisfy the requirement: > > # rpm -qa | grep -i heat > openstack-heat-api-cfn-15.0.0-1.el8.noarch > openstack-heat-api-15.0.0-1.el8.noarch > python3-heatclient-2.2.1-2.el8.noarch > openstack-heat-common-15.0.0-1.el8.noarch > openstack-heat-engine-15.0.0-1.el8.noarch > openstack-heat-ui-4.0.0-1.el8.noarch > > So from what I can see during the stack's step at OS::Heat::SoftwareConfig is the step that gets the data right? 
> > agent_config: > type: OS::Heat::SoftwareConfig > properties: > group: ungrouped > config: > list_join: > - "\n" > - > - str_replace: > template: {get_file: user_data.json} > params: > __HOSTNAME__: {get_param: name} > __SSH_KEY_VALUE__: {get_param: ssh_public_key} > __OPENSTACK_CA__: {get_param: openstack_ca} > __CONTAINER_INFRA_PREFIX__: > > > > In the stack I can see that the step below which corresponds to the agent_config above and has just been initialized: > > kube_cluster_config > OS::Heat::SoftwareConfig 46 minutes Init Complete > My question here would be: > > 1- is the file the user_data > > 2- at which step is this data aplied to the instance as from the fedora docs ( https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview ) this step seems to be at the initial stages of the boot process > > Thanks in advance for any assistance > > On 07/04/2021 22:54, Bharat Kunwar wrote: >> The ssh key gets injected via ignition which is why it’s not present in the HOT template. You need minimum train release of Heat for this to work however. >> >> Sent from my iPhone >> >>> On 7 Apr 2021, at 21:45, Luke Camilleri wrote: >>> >>>  >>> Hello Ammad and thanks for your assistance. I followed the guide and it has all the details and steps except for one thing, the ssh key is not being passed over to the instance, if I deploy an instance from that image and pass the ssh key it works fine but if I use the image as part of the HOT it lists the key as "-" >>> >>> Did you have this issue by any chance? Never thought I would be asking this question as it is a basic thing but I find it very strange that this is not working. I tried to pass the ssh key in either the template or in the cluster creation command but for both options the Key Name metadata option for the instance remains "None" when the instance is deployed. >>> >>> I then went on and checked the yaml file the resource uses that loads/gets the parameters /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml has the below yaml configurations: >>> >>> kube-master: >>> type: OS::Nova::Server >>> condition: image_based >>> properties: >>> name: {get_param: name} >>> image: {get_param: server_image} >>> flavor: {get_param: master_flavor} >>> MISSING -----> key_name: {get_param: ssh_key_name} >>> user_data_format: SOFTWARE_CONFIG >>> software_config_transport: POLL_SERVER_HEAT >>> user_data: {get_resource: agent_config} >>> networks: >>> - port: {get_resource: kube_master_eth0} >>> scheduler_hints: { group: { get_param: nodes_server_group_id }} >>> availability_zone: {get_param: availability_zone} >>> >>> kube-master-bfv: >>> type: OS::Nova::Server >>> condition: volume_based >>> properties: >>> name: {get_param: name} >>> flavor: {get_param: master_flavor} >>> MISSING -----> key_name: {get_param: ssh_key_name} >>> user_data_format: SOFTWARE_CONFIG >>> software_config_transport: POLL_SERVER_HEAT >>> user_data: {get_resource: agent_config} >>> networks: >>> - port: {get_resource: kube_master_eth0} >>> scheduler_hints: { group: { get_param: nodes_server_group_id }} >>> availability_zone: {get_param: availability_zone} >>> block_device_mapping_v2: >>> - boot_index: 0 >>> volume_id: {get_resource: kube_node_volume} >>> >>> If i add the lines which show as missing, then everything works well and the key is actually injected in the kubemaster. Did anyone had this issue? 
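To expand on the ignition point Bharat makes above: with the Fedora CoreOS driver the public key is not handed to Nova as a key pair at all — it is substituted into the user_data.json Ignition document (the __SSH_KEY_VALUE__ parameter visible in the agent_config snippet) and Ignition writes it out for the core user on first boot. Conceptually the relevant fragment looks something like the sketch below; the field names come from the Ignition v3 spec, and the actual document Magnum renders is larger, so treat this purely as an illustration:

{
  "ignition": { "version": "3.0.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["<ssh_public_key is substituted here>"] }
    ]
  }
}

That is also why Nova keeps showing the Key Name as "-"/None: when the key arrives on this path it never goes through the Nova key pair metadata at all.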
>>> >>> On 07/04/2021 10:24, Ammad Syed wrote: >>>> Hi Luke, >>>> >>>> You may refer to below guide for magnum installation and its template >>>> >>>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 >>>> >>>> It worked pretty well for me. >>>> >>>> - Ammad >>>> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri wrote: >>>>> Thanks for your quick reply. Do you have a download link for that image as I cannot find an archive for the 32 release? >>>>> >>>>> As for the image upload into openstack you still use the fedora-atomic property right to be available for coe deployments? >>>>> >>>>> On 07/04/2021 00:03, feilong wrote: >>>>>> Hi Luke, >>>>>> >>>>>> The Fedora Atomic driver has been deprecated a while since the Fedora Atomic has been deprecated by upstream. For now, I would suggest using Fedora CoreOS 32.20201104.3.0 >>>>>> >>>>>> The latest version of Fedora CoreOS is 33.xxx, but there are something when booting based my testing, see https://github.com/coreos/fedora-coreos-tracker/issues/735 >>>>>> >>>>>> Please feel free to let me know if you have any question about using Magnum. We're using stable/victoria on our public cloud and it works very well. I can share our public templates if you want. Cheers. >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On 7/04/21 9:51 am, Luke Camilleri wrote: >>>>>>> We have insatlled magnum following the installation guide here https://docs.openstack.org/magnum/victoria/install/install-rdo.html and the process was quite smooth but we have been having some issues with the deployment of the clusters. >>>>>>> >>>>>>> The image being used as per the documentation is https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>>>>>> >>>>>>> Our first issue was that podman was being used even if we specified the use_podman=false (since the image above did not include podman) but this was resulting in a timeout and the cluster would fail to deploy. We have then installed podman in the image and the cluster progressed a bit further >>>>>>> >>>>>>> + echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s' >>>>>>> + sleep 5s >>>>>>> + ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\''' >>>>>>> bash: /usr/bin/podman: No such file or directory >>>>>>> ERROR Unable to install kubectl. Abort. >>>>>>> + i=61 >>>>>>> + '[' 61 -gt 60 ']' >>>>>>> + echo 'ERROR Unable to install kubectl. Abort.' >>>>>>> + exit 1 >>>>>>> >>>>>>> The cluster is now failing here at "kube_cluster_deploy" and when checking the logs on the master node we noticed the following in the log files: >>>>>>> >>>>>>> Starting to run kube-apiserver-to-kubelet-role >>>>>>> Waiting for Kubernetes API... >>>>>>> + echo 'Waiting for Kubernetes API...' >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> >>>>>>> This is because the kubernetes API server is not installed either. I have noticed some scripts that should handle the installation but I would like to know if anyone here has had similar issues with a clean Victoria installation. >>>>>>> >>>>>>> Also should we have to install any packages in the fedora atomic image file or should the installation requirements be part of the stack? 
>>>>>>> >>>>>>> Thanks in advance for any asistance >>>>>>> >>>>>> -- >>>>>> Cheers & Best regards, >>>>>> Feilong Wang (王飞龙) >>>>>> ------------------------------------------------------ >>>>>> Senior Cloud Software Engineer >>>>>> Tel: +64-48032246 >>>>>> Email: flwang at catalyst.net.nz >>>>>> Catalyst IT Limited >>>>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>>>> ------------------------------------------------------ >>>> -- >>>> Regards, >>>> >>>> >>>> Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From bharat at stackhpc.com Thu Apr 8 06:19:38 2021 From: bharat at stackhpc.com (Bharat Kunwar) Date: Thu, 8 Apr 2021 07:19:38 +0100 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: As in, do you have that label set in the image property? Sent from my iPhone > On 8 Apr 2021, at 07:05, Bharat Kunwar wrote: > > Is your os_distro=fedora-coreos or fedora-atomic? > > Sent from my iPhone > >>> On 7 Apr 2021, at 22:12, Luke Camilleri wrote: >>> >>  >> Hi Bharat, I am on Victoria so that should satisfy the requirement: >> >> # rpm -qa | grep -i heat >> openstack-heat-api-cfn-15.0.0-1.el8.noarch >> openstack-heat-api-15.0.0-1.el8.noarch >> python3-heatclient-2.2.1-2.el8.noarch >> openstack-heat-common-15.0.0-1.el8.noarch >> openstack-heat-engine-15.0.0-1.el8.noarch >> openstack-heat-ui-4.0.0-1.el8.noarch >> >> So from what I can see during the stack's step at OS::Heat::SoftwareConfig is the step that gets the data right? >> >> agent_config: >> type: OS::Heat::SoftwareConfig >> properties: >> group: ungrouped >> config: >> list_join: >> - "\n" >> - >> - str_replace: >> template: {get_file: user_data.json} >> params: >> __HOSTNAME__: {get_param: name} >> __SSH_KEY_VALUE__: {get_param: ssh_public_key} >> __OPENSTACK_CA__: {get_param: openstack_ca} >> __CONTAINER_INFRA_PREFIX__: >> >> >> >> In the stack I can see that the step below which corresponds to the agent_config above and has just been initialized: >> >> kube_cluster_config >> OS::Heat::SoftwareConfig 46 minutes Init Complete >> My question here would be: >> >> 1- is the file the user_data >> >> 2- at which step is this data aplied to the instance as from the fedora docs ( https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview ) this step seems to be at the initial stages of the boot process >> >> Thanks in advance for any assistance >> >> On 07/04/2021 22:54, Bharat Kunwar wrote: >>> The ssh key gets injected via ignition which is why it’s not present in the HOT template. You need minimum train release of Heat for this to work however. >>> >>> Sent from my iPhone >>> >>>> On 7 Apr 2021, at 21:45, Luke Camilleri wrote: >>>> >>>>  >>>> Hello Ammad and thanks for your assistance. I followed the guide and it has all the details and steps except for one thing, the ssh key is not being passed over to the instance, if I deploy an instance from that image and pass the ssh key it works fine but if I use the image as part of the HOT it lists the key as "-" >>>> >>>> Did you have this issue by any chance? Never thought I would be asking this question as it is a basic thing but I find it very strange that this is not working. I tried to pass the ssh key in either the template or in the cluster creation command but for both options the Key Name metadata option for the instance remains "None" when the instance is deployed. 
>>>> >>>> I then went on and checked the yaml file the resource uses that loads/gets the parameters /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml has the below yaml configurations: >>>> >>>> kube-master: >>>> type: OS::Nova::Server >>>> condition: image_based >>>> properties: >>>> name: {get_param: name} >>>> image: {get_param: server_image} >>>> flavor: {get_param: master_flavor} >>>> MISSING -----> key_name: {get_param: ssh_key_name} >>>> user_data_format: SOFTWARE_CONFIG >>>> software_config_transport: POLL_SERVER_HEAT >>>> user_data: {get_resource: agent_config} >>>> networks: >>>> - port: {get_resource: kube_master_eth0} >>>> scheduler_hints: { group: { get_param: nodes_server_group_id }} >>>> availability_zone: {get_param: availability_zone} >>>> >>>> kube-master-bfv: >>>> type: OS::Nova::Server >>>> condition: volume_based >>>> properties: >>>> name: {get_param: name} >>>> flavor: {get_param: master_flavor} >>>> MISSING -----> key_name: {get_param: ssh_key_name} >>>> user_data_format: SOFTWARE_CONFIG >>>> software_config_transport: POLL_SERVER_HEAT >>>> user_data: {get_resource: agent_config} >>>> networks: >>>> - port: {get_resource: kube_master_eth0} >>>> scheduler_hints: { group: { get_param: nodes_server_group_id }} >>>> availability_zone: {get_param: availability_zone} >>>> block_device_mapping_v2: >>>> - boot_index: 0 >>>> volume_id: {get_resource: kube_node_volume} >>>> >>>> If i add the lines which show as missing, then everything works well and the key is actually injected in the kubemaster. Did anyone had this issue? >>>> >>>> On 07/04/2021 10:24, Ammad Syed wrote: >>>>> Hi Luke, >>>>> >>>>> You may refer to below guide for magnum installation and its template >>>>> >>>>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 >>>>> >>>>> It worked pretty well for me. >>>>> >>>>> - Ammad >>>>> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri wrote: >>>>>> Thanks for your quick reply. Do you have a download link for that image as I cannot find an archive for the 32 release? >>>>>> >>>>>> As for the image upload into openstack you still use the fedora-atomic property right to be available for coe deployments? >>>>>> >>>>>>> On 07/04/2021 00:03, feilong wrote: >>>>>>> Hi Luke, >>>>>>> >>>>>>> The Fedora Atomic driver has been deprecated a while since the Fedora Atomic has been deprecated by upstream. For now, I would suggest using Fedora CoreOS 32.20201104.3.0 >>>>>>> >>>>>>> The latest version of Fedora CoreOS is 33.xxx, but there are something when booting based my testing, see https://github.com/coreos/fedora-coreos-tracker/issues/735 >>>>>>> >>>>>>> Please feel free to let me know if you have any question about using Magnum. We're using stable/victoria on our public cloud and it works very well. I can share our public templates if you want. Cheers. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>>> On 7/04/21 9:51 am, Luke Camilleri wrote: >>>>>>>> We have insatlled magnum following the installation guide here https://docs.openstack.org/magnum/victoria/install/install-rdo.html and the process was quite smooth but we have been having some issues with the deployment of the clusters. 
>>>>>>>> >>>>>>>> The image being used as per the documentation is https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>>>>>>> >>>>>>>> Our first issue was that podman was being used even if we specified the use_podman=false (since the image above did not include podman) but this was resulting in a timeout and the cluster would fail to deploy. We have then installed podman in the image and the cluster progressed a bit further >>>>>>>> >>>>>>>> + echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s' >>>>>>>> + sleep 5s >>>>>>>> + ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\''' >>>>>>>> bash: /usr/bin/podman: No such file or directory >>>>>>>> ERROR Unable to install kubectl. Abort. >>>>>>>> + i=61 >>>>>>>> + '[' 61 -gt 60 ']' >>>>>>>> + echo 'ERROR Unable to install kubectl. Abort.' >>>>>>>> + exit 1 >>>>>>>> >>>>>>>> The cluster is now failing here at "kube_cluster_deploy" and when checking the logs on the master node we noticed the following in the log files: >>>>>>>> >>>>>>>> Starting to run kube-apiserver-to-kubelet-role >>>>>>>> Waiting for Kubernetes API... >>>>>>>> + echo 'Waiting for Kubernetes API...' >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> >>>>>>>> This is because the kubernetes API server is not installed either. I have noticed some scripts that should handle the installation but I would like to know if anyone here has had similar issues with a clean Victoria installation. >>>>>>>> >>>>>>>> Also should we have to install any packages in the fedora atomic image file or should the installation requirements be part of the stack? >>>>>>>> >>>>>>>> Thanks in advance for any asistance >>>>>>>> >>>>>>> -- >>>>>>> Cheers & Best regards, >>>>>>> Feilong Wang (王飞龙) >>>>>>> ------------------------------------------------------ >>>>>>> Senior Cloud Software Engineer >>>>>>> Tel: +64-48032246 >>>>>>> Email: flwang at catalyst.net.nz >>>>>>> Catalyst IT Limited >>>>>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>>>>> ------------------------------------------------------ >>>>> -- >>>>> Regards, >>>>> >>>>> >>>>> Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Apr 8 06:36:15 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 8 Apr 2021 12:06:15 +0530 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: I would like to sign up for glance but I will not be around April 15th for the session. Let me know if you can just show the presentation during the session or not. Thanks & Best Regards, Abhishek Kekane On Thu, Apr 8, 2021 at 5:54 AM Brian Rosmaita wrote: > On 4/7/21 7:09 PM, Goutham Pacha Ravi wrote: > > On Wed, Apr 7, 2021 at 12:46 PM helena at openstack.org > > wrote: > >> > >> Hello ptls, > >> > >> > >> > >> The community meeting for the Wallaby release will be next Thursday, > April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session > via Zoom as well as live-streamed to YouTube. 
> >> > >> > >> > >> If you are a PTL interested in presenting an update for your project at > the Wallaby community meeting, please let me know by this Friday, April > 9th. Slides will be due next Tuesday, April 13th, and please find a > template attached you may use if you wish. > >> > >> > >> > >> Let me know if you have any other questions! > > > > Hi Helena, > > > > Thanks for the information. I'd like to sign up on behalf of the > > Manila project team. > > Is this a live presentation unlike last time where we pre-recorded ~10 > > minute updates? > > > > Thanks, > > Goutham > > I'd like to sign up on behalf of Cinder. Same questions as Goutham, > though: will it be "live", and what are your expectations about length > of presentation? > > > cheers, > brian > > > > > > >> > >> > >> > >> Thank you for your participation, > >> > >> Helena > >> > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Thu Apr 8 06:37:24 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 08 Apr 2021 06:37:24 +0000 Subject: [Keystone] Managing keystone tokens in high availability environment In-Reply-To: Message-ID: <20210408063724.Horde.JNPDsvSBrElhLX4emySpDwo@webmail.nde.ag> Hi, my first guess would be permissions. Did you check if the directory and files have the correct permissions? How did you distribute the keys? Zitat von Taha Adel : > Hello Engineers and Developers, > > I'm currently deploying a three-nodes openstack controller cluster, > controller-01, controller-02, anc controller-03. I have installed the > keystone service on the three controllers and generated fernet keys on one > node and distributed the keys to the other nodes of the cluster. Hence, I > have configured an HAProxy in front of them that would distribute the > incoming requests over them. > > The issue is, when I try to access the keystone endpoint from using the VIP > of the loadbalancer, the service works ONLY on the node that I have > generated the keys on, and it doesn't work on the nodes that got the keys > by distribution. the error message I have got is *"INTERNAL SERVER ERROR > (500)"* > > In other words, the node that had* keystone-manage fernet_setup *command > ran on it, it can run the service properly, but the others can't. > > Is the way of replicating the key incorrect? is there any other way? > > Thanks in advance From skaplons at redhat.com Thu Apr 8 07:33:05 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 08 Apr 2021 09:33:05 +0200 Subject: [neutron] Drivers meeting agenda - 09.04.2021 Message-ID: <2516619.xdQ2LmAMnW@p1> Hi, Agenda for the tomorrow's drivers meeting is at [1]. We have 2 RFEs to discuss: https://bugs.launchpad.net/neutron/+bug/1922237 - [RFE][QoS] Add minimum guaranteed packet rate QoS rule https://bugs.launchpad.net/neutron/+bug/1921461 - [RFE] Enhancement to Neutron BGPaaS to directly support Neutron Routers & bgp-peering from such routers over internal & external Neutron Networks [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From hberaud at redhat.com Thu Apr 8 09:00:53 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 8 Apr 2021 11:00:53 +0200 Subject: [cinder] final reviews for RC-2 In-Reply-To: References: Message-ID: Hello, I submitted the final RC patches series so feel free to update the used hash for cinder when these changes will be merged. https://review.opendev.org/c/openstack/releases/+/785343 For now I hold this patch to allow you to release these changes. Le mer. 7 avr. 2021 à 20:22, Brian Rosmaita a écrit : > We have 3 patches that need review/revision/approval as soon as possible > before we release RC-2 tomorrow (Thursday 8 April). All 3 are updates > to the release notes: > > Release note for mTLS support cinder->glance > - https://review.opendev.org/c/openstack/cinder/+/783964 > > Release note about the cgroups v1 situation > - https://review.opendev.org/c/openstack/cinder/+/784179 > > Add known issue note about RBD encrypted volumes > - https://review.opendev.org/c/openstack/cinder/+/785235 > > Please review and leave comments as soon as you can. > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Apr 8 09:54:16 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 8 Apr 2021 11:54:16 +0200 Subject: [Release-job-failures] Release of openstack/eslint-config-openstack for ref refs/tags/4.1.0 failed In-Reply-To: References: Message-ID: FYI Looks similar to that story: - http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021002.html - http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021217.html I proposed a patch to move to nodejs10 all our projects that depend on nodejs: https://review.opendev.org/c/openstack/project-config/+/785353 When this patch will be merged I think that this job could be reenqueued. Le jeu. 8 avr. 2021 à 11:10, a écrit : > Build failed. 
> > - release-openstack-javascript > https://zuul.opendev.org/t/openstack/build/4062ea0df4e74565b9f8b443e550c0fd > : RETRY_LIMIT in 3m 53s > - announce-release https://zuul.opendev.org/t/openstack/build/None : > SKIPPED > - openstack-upload-github-mirror > https://zuul.opendev.org/t/openstack/build/2e9b658e70f340818cafefa88c5044e2 > : SUCCESS in 41s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Thu Apr 8 10:19:41 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Thu, 8 Apr 2021 12:19:41 +0200 Subject: [baremetal-sig][ironic] Tue Apr 13, 2021, 2pm UTC: Secure RBAC in Ironic Message-ID: <56c36688-95d2-4c35-f4ec-b4a20d884bb8@cern.ch> Dear all, The Bare Metal SIG will meet next week Tue Apr 13, 2021, at 2pm UTC on zoom. There will be two main points on the agenda: - A "topic-of-the-day" presentation by Julia Kreger (TheJulia) on     'Secure RBAC in Ironic'   and a - PTG pre-discussion on a potential integration of Ironic with   Kea DHCP. As usual, all details on https://etherpad.opendev.org/p/bare-metal-sig Everyone is welcome! Cheers,  Arne From luke.camilleri at zylacomputing.com Thu Apr 8 11:06:22 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Thu, 8 Apr 2021 13:06:22 +0200 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Hi Bharat, in fact I had noticed that property when creating the image in OS and made some more research about this. I now have 2 images (atomic and coreos) and have set the different flags in the image creation process. The documentation from Victoria to latest has also changed to this: Victoria (Kubernetes cluster creation) - Create a cluster template for a Kubernetes cluster using the |fedora-atomic-latest| image latest - Create a cluster template for a Kubernetes cluster using the |fedora-coreos-latest| image So in the end it seems that the CoreOS image is now being suggested for the Kubernetes cluster creation. The bootstrapping process seems to be handled by ignition which handles the ssh keys (I need to find out in more detail how the ignition mechanism works to better understand this process) Thanks On 08/04/2021 08:19, Bharat Kunwar wrote: > As in, do you have that label set in the image property? 
> > Sent from my iPhone > >> On 8 Apr 2021, at 07:05, Bharat Kunwar wrote: >> >>  Is your os_distro=fedora-coreos or fedora-atomic? >> >> Sent from my iPhone >> >>> On 7 Apr 2021, at 22:12, Luke Camilleri >>> wrote: >>> >>>  >>> >>> Hi Bharat, I am on Victoria so that should satisfy the requirement: >>> >>> # rpm -qa | grep -i heat >>> openstack-heat-api-cfn-15.0.0-1.el8.noarch >>> openstack-heat-api-15.0.0-1.el8.noarch >>> python3-heatclient-2.2.1-2.el8.noarch >>> openstack-heat-common-15.0.0-1.el8.noarch >>> openstack-heat-engine-15.0.0-1.el8.noarch >>> openstack-heat-ui-4.0.0-1.el8.noarch >>> >>> So from what I can see during the stack's step at >>> OS::Heat::SoftwareConfig is the step that gets the data right? >>> >>> agent_config: >>>     type: OS::Heat::SoftwareConfig >>>     properties: >>>       group: ungrouped >>>       config: >>>         list_join: >>>           - "\n" >>>           - >>>             - str_replace: >>>                 template: {get_file: user_data.json} >>>                 params: >>>                   __HOSTNAME__: {get_param: name} >>>                   __SSH_KEY_VALUE__: {get_param: ssh_public_key} >>>                   __OPENSTACK_CA__: {get_param: openstack_ca} >>>                   __CONTAINER_INFRA_PREFIX__: >>> >>> >>> In the stack I can see that the step below which corresponds to the >>> agent_config above and has just been initialized: >>> >>> kube_cluster_config >>> >>> >>> OS::Heat::SoftwareConfig 46 minutes Init Complete >>> >>> My question here would be: >>> >>> 1- is the file the user_data >>> >>> 2- at which step is this data aplied to the instance as from the >>> fedora docs ( >>> https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview >>> ) this step seems to be at the initial stages of the boot process >>> >>> Thanks in advance for any assistance >>> >>> On 07/04/2021 22:54, Bharat Kunwar wrote: >>>> The ssh key gets injected via ignition which is why it’s not >>>> present in the HOT template. You need minimum train release of Heat >>>> for this to work however. >>>> >>>> Sent from my iPhone >>>> >>>>> On 7 Apr 2021, at 21:45, Luke Camilleri >>>>> wrote: >>>>> >>>>>  >>>>> >>>>> Hello Ammad and thanks for your assistance. I followed the guide >>>>> and it has all the details and steps except for one thing, the ssh >>>>> key is not being passed over to the instance, if I deploy an >>>>> instance from that image and pass the ssh key it works fine but if >>>>> I use the image as part of the HOT it lists the key as "-" >>>>> >>>>> Did you have this issue by any chance? Never thought I would be >>>>> asking this question as it is a basic thing but I find it very >>>>> strange that this is not working. I tried to pass the ssh key in >>>>> either the template or in the cluster creation command but for >>>>> both options the Key Name metadata option for the instance remains >>>>> "None" when the instance is deployed. 
>>>>> >>>>> I then went on and checked the yaml file the resource uses that >>>>> loads/gets the parameters >>>>> /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml >>>>> has the below yaml configurations: >>>>> >>>>> kube-master: >>>>>     type: OS::Nova::Server >>>>>     condition: image_based >>>>>     properties: >>>>>       name: {get_param: name} >>>>>       image: {get_param: server_image} >>>>>       flavor: {get_param: master_flavor} >>>>> MISSING ----->   key_name: {get_param: ssh_key_name} >>>>>       user_data_format: SOFTWARE_CONFIG >>>>>       software_config_transport: POLL_SERVER_HEAT >>>>>       user_data: {get_resource: agent_config} >>>>>       networks: >>>>>         - port: {get_resource: kube_master_eth0} >>>>>       scheduler_hints: { group: { get_param: nodes_server_group_id }} >>>>>       availability_zone: {get_param: availability_zone} >>>>> >>>>> kube-master-bfv: >>>>>     type: OS::Nova::Server >>>>>     condition: volume_based >>>>>     properties: >>>>>       name: {get_param: name} >>>>>       flavor: {get_param: master_flavor} >>>>> MISSING ----->   key_name: {get_param: ssh_key_name} >>>>>       user_data_format: SOFTWARE_CONFIG >>>>>       software_config_transport: POLL_SERVER_HEAT >>>>>       user_data: {get_resource: agent_config} >>>>>       networks: >>>>>         - port: {get_resource: kube_master_eth0} >>>>>       scheduler_hints: { group: { get_param: nodes_server_group_id }} >>>>>       availability_zone: {get_param: availability_zone} >>>>>       block_device_mapping_v2: >>>>>         - boot_index: 0 >>>>>           volume_id: {get_resource: kube_node_volume} >>>>> >>>>> If i add the lines which show as missing, then everything works >>>>> well and the key is actually injected in the kubemaster. Did >>>>> anyone had this issue? >>>>> >>>>> On 07/04/2021 10:24, Ammad Syed wrote: >>>>>> Hi Luke, >>>>>> >>>>>> You may refer to below guide for magnum installation and its >>>>>> template >>>>>> >>>>>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 >>>>>> >>>>>> >>>>>> It worked pretty well for me. >>>>>> >>>>>> - Ammad >>>>>> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri >>>>>> >>>>> > wrote: >>>>>> >>>>>> Thanks for your quick reply. Do you have a download link for >>>>>> that image as I cannot find an archive for the 32 release? >>>>>> >>>>>> As for the image upload into openstack you still use the >>>>>> fedora-atomic property right to be available for coe deployments? >>>>>> >>>>>> On 07/04/2021 00:03, feilong wrote: >>>>>>> >>>>>>> Hi Luke, >>>>>>> >>>>>>> The Fedora Atomic driver has been deprecated a while since >>>>>>> the Fedora Atomic has been deprecated by upstream. For now, >>>>>>> I would suggest using Fedora CoreOS 32.20201104.3.0 >>>>>>> >>>>>>> The latest version of Fedora CoreOS is 33.xxx, but there are >>>>>>> something when booting based my testing, see >>>>>>> https://github.com/coreos/fedora-coreos-tracker/issues/735 >>>>>>> >>>>>>> >>>>>>> Please feel free to let me know if you have any question >>>>>>> about using Magnum. We're using stable/victoria on our >>>>>>> public cloud and it works very well. I can share our public >>>>>>> templates if you want. Cheers. 
>>>>>>> >>>>>>> >>>>>>> >>>>>>> On 7/04/21 9:51 am, Luke Camilleri wrote: >>>>>>>> >>>>>>>> We have insatlled magnum following the installation guide >>>>>>>> here >>>>>>>> https://docs.openstack.org/magnum/victoria/install/install-rdo.html >>>>>>>> >>>>>>>> and the process was quite smooth but we have been having >>>>>>>> some issues with the deployment of the clusters. >>>>>>>> >>>>>>>> The image being used as per the documentation is >>>>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>>>>>>> >>>>>>>> >>>>>>>> Our first issue was that podman was being used even if we >>>>>>>> specified the use_podman=false (since the image above did >>>>>>>> not include podman) but this was resulting in a timeout and >>>>>>>> the cluster would fail to deploy. We have then installed >>>>>>>> podman in the image and the cluster progressed a bit further >>>>>>>> >>>>>>>> /+ echo 'WARNING Attempt 60: Trying to install kubectl. >>>>>>>> Sleeping 5s'// >>>>>>>> //+ sleep 5s// >>>>>>>> //+ ssh -F /srv/magnum/.ssh/config root at localhost >>>>>>>> '/usr/bin/podman run     --entrypoint /bin/bash     --name >>>>>>>> install-kubectl     --net host     --privileged --rm     >>>>>>>> --user root --volume /srv/magnum/bin:/host/srv/magnum/bin >>>>>>>> k8s.gcr.io/hyperkube:v1.15.7 >>>>>>>> -c '\''cp >>>>>>>> /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''// >>>>>>>> //bash: /usr/bin/podman: No such file or directory// >>>>>>>> //ERROR Unable to install kubectl. Abort.// >>>>>>>> //+ i=61// >>>>>>>> //+ '[' 61 -gt 60 ']'// >>>>>>>> //+ echo 'ERROR Unable to install kubectl. Abort.'// >>>>>>>> //+ exit 1/ >>>>>>>> >>>>>>>> The cluster is now failing here at "kube_cluster_deploy" >>>>>>>> and when checking the logs on the master node we noticed >>>>>>>> the following in the log files: >>>>>>>> >>>>>>>> /Starting to run kube-apiserver-to-kubelet-role// >>>>>>>> //Waiting for Kubernetes API...// >>>>>>>> //+ echo 'Waiting for Kubernetes API...'// >>>>>>>> //++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> // >>>>>>>> //+ '[' ok = '' ']'// >>>>>>>> //+ sleep 5/ >>>>>>>> >>>>>>>> This is because the kubernetes API server is not installed >>>>>>>> either. I have noticed some scripts that should handle the >>>>>>>> installation but I would like to know if anyone here has >>>>>>>> had similar issues with a clean Victoria installation. >>>>>>>> >>>>>>>> Also should we have to install any packages in the fedora >>>>>>>> atomic image file or should the installation requirements >>>>>>>> be part of the stack? >>>>>>>> >>>>>>>> Thanks in advance for any asistance >>>>>>>> >>>>>>> -- >>>>>>> Cheers & Best regards, >>>>>>> Feilong Wang (王飞龙) >>>>>>> ------------------------------------------------------ >>>>>>> Senior Cloud Software Engineer >>>>>>> Tel: +64-48032246 >>>>>>> Email:flwang at catalyst.net.nz >>>>>>> Catalyst IT Limited >>>>>>> Level 6, Catalyst House,150 Willis Street, Wellington >>>>>>> ------------------------------------------------------ >>>>>> >>>>>> -- >>>>>> Regards, >>>>>> >>>>>> >>>>>> Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... 
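To make the os_distro point from this thread concrete: the property is set when the image is uploaded, and the cluster template then only references that image — as far as I understand, Magnum chooses the Fedora CoreOS driver based on that os_distro value, which is why an image without it (or one still labelled fedora-atomic) goes down the wrong code path. A sketch, where the image file, template name, flavors and flannel are only placeholders:

openstack image create fedora-coreos-32 \
  --disk-format qcow2 --container-format bare \
  --property os_distro='fedora-coreos' \
  --file fedora-coreos-32.20201104.3.0-openstack.x86_64.qcow2

openstack coe cluster template create k8s-fcos-template \
  --image fedora-coreos-32 \
  --external-network public \
  --master-flavor m1.medium --flavor m1.medium \
  --docker-volume-size 20 \
  --network-driver flannel \
  --coe kubernetes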
URL: From helena at openstack.org Thu Apr 8 13:50:06 2021 From: helena at openstack.org (helena at openstack.org) Date: Thu, 8 Apr 2021 09:50:06 -0400 (EDT) Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <1617889806.416816408@apps.rackspace.com> Hi Brian and Goutham, Thank you for participating! Yes, it will be a live session. As for length, we are aiming for 5-10 minutes per presenter (this number kind of depends on how many people we have signup to present, so I can give y'all a better idea tomorrow what the length should be). Cheers, Helena -----Original Message----- From: "Brian Rosmaita" Sent: Wednesday, April 7, 2021 8:20pm To: openstack-discuss at lists.openstack.org Subject: Re: [ptl] Wallaby Release Community Meeting On 4/7/21 7:09 PM, Goutham Pacha Ravi wrote: > On Wed, Apr 7, 2021 at 12:46 PM helena at openstack.org > wrote: >> >> Hello ptls, >> >> >> >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. >> >> >> >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. >> >> >> >> Let me know if you have any other questions! > > Hi Helena, > > Thanks for the information. I'd like to sign up on behalf of the > Manila project team. > Is this a live presentation unlike last time where we pre-recorded ~10 > minute updates? > > Thanks, > Goutham I'd like to sign up on behalf of Cinder. Same questions as Goutham, though: will it be "live", and what are your expectations about length of presentation? cheers, brian > > >> >> >> >> Thank you for your participation, >> >> Helena >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Thu Apr 8 14:17:22 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Thu, 08 Apr 2021 16:17:22 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: On Wed, Apr 7, 2021 at 14:37, helena at openstack.org wrote: > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, > April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live > session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project > at the Wallaby community meeting, please let me know by this Friday, > April 9th. Slides will be due next Tuesday, April 13th, and please > find a template attached you may use if you wish. > Please sign me up, I will give a short update from Nova perspective. Cheers, gibi > > > Let me know if you have any other questions! 
> > > > Thank you for your participation, > > Helena > > > From ykarel at redhat.com Thu Apr 8 14:17:14 2021 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 8 Apr 2021 19:47:14 +0530 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Hi Ruslanas, For the issue see https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, The puppet-neutron issue in above was specific to victoria but since there is new release for ussuri recently, it also hit there too. Thanks and Regards Yatin Karel On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis wrote: > > Hi all, > > While deploying undercloud, always fails on puppet-container-neutron configuration, it fails with missing ml2 ovs_driver plugin... downloading them using: > openstack tripleo container image prepare default --output-env-file containers-prepare-parameters.yaml > > grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log http://paste.openstack.org/show/804180/ > > builddir/install-undercloud.log ( contains info about container-puppet-neutron ) > http://paste.openstack.org/show/804181/ > > undercloud.conf: > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > > dnf list installed > http://paste.openstack.org/show/804182/ > > -- > Ruslanas Gžibovskis > +370 6030 7030 From helena at openstack.org Thu Apr 8 14:23:58 2021 From: helena at openstack.org (helena at openstack.org) Date: Thu, 8 Apr 2021 10:23:58 -0400 (EDT) Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <1617891838.939328095@apps.rackspace.com> Perfect! Thank you for participating :) Cheers, Helena -----Original Message----- From: "Balazs Gibizer" Sent: Thursday, April 8, 2021 10:17am To: helena at openstack.org Cc: "OpenStack Discuss" Subject: Re: [ptl] Wallaby Release Community Meeting On Wed, Apr 7, 2021 at 14:37, helena at openstack.org wrote: > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, > April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live > session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project > at the Wallaby community meeting, please let me know by this Friday, > April 9th. Slides will be due next Tuesday, April 13th, and please > find a template attached you may use if you wish. > Please sign me up, I will give a short update from Nova perspective. Cheers, gibi > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Apr 8 14:29:06 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 8 Apr 2021 16:29:06 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617891838.939328095@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> <1617891838.939328095@apps.rackspace.com> Message-ID: Please count me in for Masakari. 
Kind regards, -yoctozepto From helena at openstack.org Thu Apr 8 14:34:50 2021 From: helena at openstack.org (helena at openstack.org) Date: Thu, 8 Apr 2021 10:34:50 -0400 (EDT) Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> <1617891838.939328095@apps.rackspace.com> Message-ID: <1617892490.730925429@apps.rackspace.com> Awesome! Thank you Cheers, Helena -----Original Message----- From: "Radosław Piliszek" Sent: Thursday, April 8, 2021 10:29am To: "helena at openstack.org" Cc: "OpenStack Discuss" , "Ashlee Ferguson" , "Erin Disney" Subject: Re: [ptl] Wallaby Release Community Meeting Please count me in for Masakari. Kind regards, -yoctozepto -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Apr 8 14:43:40 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 8 Apr 2021 07:43:40 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> References: <1617820623.770226846@apps.rackspace.com> <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> Message-ID: Hey Allison, Metrics would be awesome and I'm just looking for the key high level adoption information as that is good to put into the presentation. -Julia On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote: > > Hi Julia, > > I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. > > Thanks! > Allison > > > > > > On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: > > Related, Is there 2020 user survey data available? > > On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org > wrote: > > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > > > From juliaashleykreger at gmail.com Thu Apr 8 14:45:52 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 8 Apr 2021 07:45:52 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: Hi Helena, I would be happy to participate on behalf of Ironic. -Julia On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org wrote: > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! 
> > > > Thank you for your participation, > > Helena > > From helena at openstack.org Thu Apr 8 14:48:20 2021 From: helena at openstack.org (helena at openstack.org) Date: Thu, 8 Apr 2021 10:48:20 -0400 (EDT) Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <1617893300.94195859@apps.rackspace.com> Awesome, thank you! Cheers, Helena -----Original Message----- From: "Julia Kreger" Sent: Thursday, April 8, 2021 10:45am To: "helena at openstack.org" Cc: "OpenStack Discuss" Subject: Re: [ptl] Wallaby Release Community Meeting Hi Helena, I would be happy to participate on behalf of Ironic. -Julia On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org wrote: > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Thu Apr 8 14:48:15 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Thu, 8 Apr 2021 23:48:15 +0900 Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core In-Reply-To: References: Message-ID: Hello, Thank you, all who shared your feedback ! Because we have only positive responses and I got +2 from Emilien locally, I'll invite Alan to the core team for these two projects based on my proposal. I'll request new groups specific to puppet-cinder and puppet-glance in a few days and add him to these groups once prepared. Thank you, Alan, for your nice work so far, and I'm looking forward to your further contributions ! Thank you, Takashi On Wed, Mar 31, 2021 at 6:24 PM Takashi Kajinami wrote: > Hello, > > > I'd like to propose Alan Bishop (abishop) for the core team of > puppet-cinder > and puppet-glance. > Alan has been actively involved in these 2 modules for a few years > and has implemented some nice features like multiple backend support in > glance, > cinder s3 backup driver and etc, which expanded adoption of > puppet-openstack. > He has also provided good reviews on patches for these 2 repos based > on his understanding about our code, puppet and serverspec. > > He is an active contributor to cinder and has deep knowledge about it. > In addition He is also a core review in TripleO, which consumes our puppet > modules, > and mainly covers storage components like cinder and glance, so he is > familiar > with the way how these two components are deployed and configured. > > I believe adding him to our board helps us improve our review of these two > modules. > > I'll wait for one week to hear any feedback from other core reviewers. > > Thank you, > Takashi > > -- ---------- Takashi Kajinami Principal Software Maintenance Engineer Customer Experience and Engagement Red Hat email: tkajinam at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliver.wenz at dhbw-mannheim.de Thu Apr 8 15:22:01 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Thu, 8 Apr 2021 17:22:01 +0200 (CEST) Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <512255613.133816.1617895321756@ox.dhbw-mannheim.de> Hi Dmitriy, > I'm wondering if you see also stack trace in keystone logs? Running 'journalctl' on the keystone container, I don't see any tracebacks. Or is there a specific service I should check? Kind regards, Oliver From ruslanas at lpic.lt Thu Apr 8 16:11:09 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 8 Apr 2021 18:11:09 +0200 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Hi Yatin, I have spotted that version of puppet-tripleo, but even after downgrade I had/have same issue. should I downgrade even more? :) OR You know when fixed version might get in for production centos ussuri release repo? As you know now that it is affected also :) On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: > Hi Ruslanas, > > For the issue see > https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, > The puppet-neutron issue in above was specific to victoria but since > there is new release for ussuri recently, it also hit there too. > > > Thanks and Regards > Yatin Karel > > On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > While deploying undercloud, always fails on puppet-container-neutron > configuration, it fails with missing ml2 ovs_driver plugin... downloading > them using: > > openstack tripleo container image prepare default --output-env-file > containers-prepare-parameters.yaml > > > > grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log > http://paste.openstack.org/show/804180/ > > > > builddir/install-undercloud.log ( contains info about > container-puppet-neutron ) > > http://paste.openstack.org/show/804181/ > > > > undercloud.conf: > > > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > > > > dnf list installed > > http://paste.openstack.org/show/804182/ > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonanderson at uchicago.edu Thu Apr 8 16:20:37 2021 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Thu, 8 Apr 2021 16:20:37 +0000 Subject: [zun][kuryr][neutron] Missing vxlan ports in br-tun for Zun containers? Message-ID: Hello stackers, I’m interested in using zun to launch containers and assign floating IPs via neutron to those containers. I am deploying zun, kuryr-libnetwork, and neutron with kolla-ansible on the Train release. I’ve configured neutron with one physical network and I’d like to use a VXLAN overlay for tenant networks. What works: - I can launch containers on a neutron tenant network, they start successfully, they get an IP and can reach each other if they’re co-located on a single host. - I can create all my neutron networks, routers, subnets, without (obvious) errors. - I can update security groups on the container and see the iptables rules updated appropriately. - I can directly create Docker networks using the kuryr driver/type. 
What doesn’t work: - I can’t see any vxlan ports on the br-tun OVS bridge - I can’t access the exposed container ports from the control/network node via the router netns - Because of that, I can’t assign floating IPs because NAT effectively won’t work to reach the containers The fact that there are no ports on br-tun is supicious, but I’m not sure how this is supposed to work. I don’t see anything weird in neutron-openvswitch-agent logs but those logs are quite noisy and I’m not sure what to look for. Has anybody deployed such a setup / are there limitations I should know about? Thank you! Jason Anderson DevOps Lead, Chameleon --- Department of Computer Science, University of Chicago Mathematics and Computer Science, Argonne National Laboratory jasonanderson at uchicago.edu From marios at redhat.com Thu Apr 8 16:20:44 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 8 Apr 2021 19:20:44 +0300 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: On Wed, Apr 7, 2021 at 7:55 PM John Fulton wrote: > > On Wed, Apr 7, 2021 at 12:27 PM Marios Andreou wrote: >> >> Hello TripleO o/ >> >> Thanks again to everybody who has volunteered to lead a session for >> the coming Xena TripleO project teams gathering. >> >> I've had a go at the agenda [1] trying to keep it to max 4 or 5 >> sessions per day with some breaks. >> >> Please review the slot assigned for your session at [1]. If that time >> is not ok then please let me know as soon as possible and indicate if >> you want it later or earlier or on any other day. > > > On Monday I see: > > 1. STORAGE: 1430-1510 (ceph) > 2. DF: 1510-1550 (ephemeral heat) > 3. DF/Networking: 1600-1700 (ports v2 "no heat") > > If Harald and James are OK with it, could it be changed to the following? > > A. DF: 1430-1510 (ephemeral heat) > B. DF/Networking: 1510-1550 (ports v2 "no heat") > C. STORAGE: 1600-1700 (ceph) > > I ask because a portion of C depends on B, so it would be helpful to have that context first. If the presenters have conflicts however, we don't need this change. > ACK thanks John that totally makes sense... as just discussed on irc [1] I've updated the schedule to reflect your proposal. I haven't heard back from slagle yet but cc'ing him here and if there are any issues we can work them out thanks [1] http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2021-04-08.log.html#t2021-04-08T15:47:12 > Thanks, > John > > >> >> If you've decided >> the session no longer makes sense then also please tell me and we can >> move things around accordingly to finish earlier. >> >> I'd like to finalise the schedule by next Monday 12 April which is a >> week before PTG. We can and likely will make changes after this date >> but last minute changes are best avoided to allow folks to schedule >> their PTG attendance across projects. >> >> Thanks everybody for your help! 
Looking forward to interesting >> presentations and discussions as always >> >> regards, marios >> >> [1] https://etherpad.opendev.org/p/tripleo-ptg-xena >> >> From james.slagle at gmail.com Thu Apr 8 16:32:16 2021 From: james.slagle at gmail.com (James Slagle) Date: Thu, 8 Apr 2021 12:32:16 -0400 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: On Thu, Apr 8, 2021 at 12:24 PM Marios Andreou wrote: > On Wed, Apr 7, 2021 at 7:55 PM John Fulton wrote: > > > > On Wed, Apr 7, 2021 at 12:27 PM Marios Andreou > wrote: > >> > >> Hello TripleO o/ > >> > >> Thanks again to everybody who has volunteered to lead a session for > >> the coming Xena TripleO project teams gathering. > >> > >> I've had a go at the agenda [1] trying to keep it to max 4 or 5 > >> sessions per day with some breaks. > >> > >> Please review the slot assigned for your session at [1]. If that time > >> is not ok then please let me know as soon as possible and indicate if > >> you want it later or earlier or on any other day. > > > > > > On Monday I see: > > > > 1. STORAGE: 1430-1510 (ceph) > > 2. DF: 1510-1550 (ephemeral heat) > > 3. DF/Networking: 1600-1700 (ports v2 "no heat") > > > > If Harald and James are OK with it, could it be changed to the following? > > > > A. DF: 1430-1510 (ephemeral heat) > > B. DF/Networking: 1510-1550 (ports v2 "no heat") > > C. STORAGE: 1600-1700 (ceph) > > > > I ask because a portion of C depends on B, so it would be helpful to > have that context first. If the presenters have conflicts however, we don't > need this change. > > > > ACK thanks John that totally makes sense... as just discussed on irc > [1] I've updated the schedule to reflect your proposal. > > I haven't heard back from slagle yet but cc'ing him here and if there > are any issues we can work them out > The change wfm, thanks. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Apr 8 16:50:37 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Apr 2021 16:50:37 +0000 Subject: [Release-job-failures] Release of openstack/eslint-config-openstack for ref refs/tags/4.1.0 failed In-Reply-To: References: Message-ID: <20210408165036.6tkcfwwix5ms3ig4@yuggoth.org> On 2021-04-08 11:54:16 +0200 (+0200), Herve Beraud wrote: [...] > I proposed a patch to move to nodejs10 all our projects that depend on > nodejs: > > https://review.opendev.org/c/openstack/project-config/+/785353 > > When this patch will be merged I think that this job could be reenqueued. I reenqueued the tag, but release-openstack-javascript failed on a different problem. NPM complains that there's already a eslint-config-openstack 4.0.1 published which can't be overwritten, but the tag is for 4.1.0... someone should probably update the version parameter in eslint-config-openstack's package.json file, which means it'll need another release tagged anyway (4.1.1?). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jasonanderson at uchicago.edu Thu Apr 8 17:00:19 2021 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Thu, 8 Apr 2021 17:00:19 +0000 Subject: [zun][kuryr][neutron] Missing vxlan ports in br-tun for Zun containers? In-Reply-To: References: Message-ID: <0A70CB1A-35AA-4782-8BF3-496080E47341@uchicago.edu> As usual, “rubber ducking” the openstack-discuss list yielded fruit. 
It turns out that I didn’t have the l2population mechanism driver enabled. I thought this was optional for some reason. It looks like enabling this and restarting the neutorn-openvswitch-agent has fixed connectivity! /Jason > On Apr 8, 2021, at 11:20 AM, Jason Anderson wrote: > > Hello stackers, > > I’m interested in using zun to launch containers and assign floating IPs via neutron to those containers. I am deploying zun, kuryr-libnetwork, and neutron with kolla-ansible on the Train release. I’ve configured neutron with one physical network and I’d like to use a VXLAN overlay for tenant networks. > > What works: > - I can launch containers on a neutron tenant network, they start successfully, they get an IP and can reach each other if they’re co-located on a single host. > - I can create all my neutron networks, routers, subnets, without (obvious) errors. > - I can update security groups on the container and see the iptables rules updated appropriately. > - I can directly create Docker networks using the kuryr driver/type. > > What doesn’t work: > - I can’t see any vxlan ports on the br-tun OVS bridge > - I can’t access the exposed container ports from the control/network node via the router netns > - Because of that, I can’t assign floating IPs because NAT effectively won’t work to reach the containers > > The fact that there are no ports on br-tun is supicious, but I’m not sure how this is supposed to work. I don’t see anything weird in neutron-openvswitch-agent logs but those logs are quite noisy and I’m not sure what to look for. > > Has anybody deployed such a setup / are there limitations I should know about? > > Thank you! > > > Jason Anderson > > DevOps Lead, Chameleon > > --- > > Department of Computer Science, University of Chicago > Mathematics and Computer Science, Argonne National Laboratory > jasonanderson at uchicago.edu > From ildiko.vancsa at gmail.com Thu Apr 8 17:18:54 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 8 Apr 2021 19:18:54 +0200 Subject: [edge][cinder][manila][swift][tripleo] Storage at the edge discussions at the PTG Message-ID: <0A7B1EBD-0715-43ED-B388-A2011D437DD0@gmail.com> Hi, I’m reaching out to draw your attention to the Edge Computing Group sessions on the PTG in less than two weeks. We are still formalizing our agenda, but we have storage identified as one of the topics that the working group would like to discuss. It would be great to have the session also as a continuation to earlier discussions that we had on previous PTGs with relevant OpenStack project contributors. We have a few cross-community sessions scheduled already, but we still have some flexibility in our agenda to schedule this topic so the most people who are interested in participating can join. 
Our available options are: * Monday (April 19) between 1400 UTC and 1500 UTC * Tuesday (April) between 1400 UTC and 1600 UTC __Please let me know if you or your project would like to participate and if you have a time slot difference from the above.__ Thanks and Best Regards, Ildikó (IRC ildikov on Freenode) From johfulto at redhat.com Thu Apr 8 17:39:45 2021 From: johfulto at redhat.com (John Fulton) Date: Thu, 8 Apr 2021 13:39:45 -0400 Subject: [edge][cinder][manila][swift][tripleo] Storage at the edge discussions at the PTG In-Reply-To: <0A7B1EBD-0715-43ED-B388-A2011D437DD0@gmail.com> References: <0A7B1EBD-0715-43ED-B388-A2011D437DD0@gmail.com> Message-ID: On Thu, Apr 8, 2021 at 1:21 PM Ildiko Vancsa wrote: > Hi, > > I’m reaching out to draw your attention to the Edge Computing Group > sessions on the PTG in less than two weeks. > > We are still formalizing our agenda, but we have storage identified as one > of the topics that the working group would like to discuss. It would be > great to have the session also as a continuation to earlier discussions > that we had on previous PTGs with relevant OpenStack project contributors. > > We have a few cross-community sessions scheduled already, but we still > have some flexibility in our agenda to schedule this topic so the most > people who are interested in participating can join. Our available options > are: > > * Monday (April 19) between 1400 UTC and 1500 UTC > * Tuesday (April) between 1400 UTC and 1600 UTC > I'm not available Monday but could join Tuesday. I'd be curious to hear what others are doing with Storage on the Edge and could share some info on how TripleO does it. John > > __Please let me know if you or your project would like to participate and > if you have a time slot difference from the above.__ > > Thanks and Best Regards, > Ildikó > (IRC ildikov on Freenode) > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Thu Apr 8 20:04:04 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 8 Apr 2021 20:04:04 +0000 Subject: [kolla] RHEL based container image Message-ID: Hi, Given [1], RHEL based container is supported on RHEL 8 by Kolla. Where can I get RHEL based container images? I see CentOS and Ubuntu based images on docker hub, but can't find RHEL based images. [1] https://docs.openstack.org/kolla-ansible/ussuri/user/support-matrix.html Thanks! Tony From stephane.chalansonnet at acoss.fr Thu Apr 8 20:28:30 2021 From: stephane.chalansonnet at acoss.fr (=?utf-8?B?Q0hBTEFOU09OTkVUIFN0w6lwaGFuZSAoQWNvc3Mp?=) Date: Thu, 8 Apr 2021 20:28:30 +0000 Subject: [kolla] RHEL based container image (Tony Liu) Message-ID: Hello, You need an active subscription RHOSP for doing that , but Kolla was not supported by Redhat unfortunely ... 
Stéphane Chalansonnet
From sbaker at redhat.com Thu Apr 8 22:17:21 2021 From: sbaker at redhat.com (Steve Baker) Date: Fri, 9 Apr 2021 10:17:21 +1200 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com> My Tuesday Baremetal 1510-1550 slot is ok, but it would be better for me if it was earlier in the day. I'll probably make more sense at 1am than 3am :) Could I maybe swap with NETWORKING: 1300-1340? On 8/04/21 4:24 am, Marios Andreou wrote: > Hello TripleO o/ > > Thanks again to everybody who has volunteered to lead a session for > the coming Xena TripleO project teams gathering. > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > sessions per day with some breaks. > > Please review the slot assigned for your session at [1]. If that time > is not ok then please let me know as soon as possible and indicate if > you want it later or earlier or on any other day. If you've decided > the session no longer makes sense then also please tell me and we can > move things around accordingly to finish earlier. > > I'd like to finalise the schedule by next Monday 12 April which is a > week before PTG. We can and likely will make changes after this date > but last minute changes are best avoided to allow folks to schedule > their PTG attendance across projects. > > Thanks everybody for your help! Looking forward to interesting > presentations and discussions as always > > regards, marios > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Apr 8 23:07:27 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 8 Apr 2021 16:07:27 -0700 Subject: [first contact] [SIG] PTG Planning! Message-ID: Hello! I didn't think we would need a ton of time and I tried to pick a time to balance everyone's timezones so we have an hour at 22 UTC on Monday in the Austin room.
We have an etherpad that was autogenerated: https://etherpad.opendev.org/p/apr2021-ptg-first-contact Please add topics if you have them and your name if you plan to join us! -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Apr 9 00:36:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Apr 2021 00:36:17 +0000 Subject: [all][elections][tc] TC Vacancy Special Election Voting Kickoff Message-ID: <20210409003616.shwf353mgwlxjmwy@yuggoth.org> TC Vacancy Special Election Nomination period is now over. The four already elected TC members for this term are listed as candidates in the special election, but will not appear on any resulting poll as they have already been officially elected. Only new candidates who are not the four elected TC members for this term will appear on a subsequent poll for the TC vacancy special election. The poll for the TC Vacancy Special Election is now open and will remain open until Apr 15, 2021 23:45 UTC. We are selecting 1 additional TC member, please rank all candidates in your order of preference. You are eligible to vote if you are a Foundation individual member[1] that also has committed to one of the official project teams' deliverable repositories[2] over the Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC timeframe (Victoria to Wallaby) or if you are one of the extra-atcs.[3] Please note that in order to confirm contributors are foundation members, the preferred address in Gerrit must also be included in the addresses for the corresponding member profile. What to do if you don't see the email and have a commit in at least one of the official deliverables[2]: * check the trash or spam folder of your gerrit Preferred Email address[4], in case it went into trash or spam * wait a bit and check again, in case your email server is a bit slow * find the sha of at least one commit from an official deliverable repo[2] and email the election officials[1]. If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot. Our democratic process is important to the health of OpenStack, please exercise your right to vote. Candidate statements/platforms can be found linked to Candidate names[6]. Happy voting! Thank you, [1] https://www.openstack.org/community/members/ [2] https://opendev.org/openstack/governance/src/commit/892c4f3a851428cf41bab57c6c283e82f1df06d8/reference/projects.yaml [3] Look for the extra-atcs element in [2] [4] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your preferred email. That is where the ballot has been sent. [5] https://governance.openstack.org/election/#election-officials [6] https://governance.openstack.org/election/#xena-tc-candidates -- Jeremy Stanley on behalf of the OpenStack Technical Elections Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tonyliu0592 at hotmail.com Fri Apr 9 01:03:45 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 9 Apr 2021 01:03:45 +0000 Subject: [kolla] RHEL based container image (Tony Liu) In-Reply-To: References: Message-ID: I have RHEL subscription and I want to know if it's possible to use Kolla deploy OpenStack. It's supposed to be yes based on the doc. I just want to know where I can get container images. The container image on RedHat is only for TripleO. 
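For context, the knob in question is the base-distro selection in kolla-ansible's globals.yml; a rough sketch, assuming the usual option names and with illustrative values only:

    kolla_base_distro: "rhel"      # what this question is about; the images published on Docker Hub are centos/ubuntu based
    openstack_release: "ussuri"
    docker_namespace: "kolla"      # Docker Hub namespace holding the published images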
Thanks! Tony > -----Original Message----- > From: CHALANSONNET Stéphane (Acoss) > Sent: Thursday, April 8, 2021 1:29 PM > To: openstack-discuss at lists.openstack.org > Subject: RE: [kolla] RHEL based container image (Tony Liu) > > Hello, > > You need an active subscription RHOSP for doing that , but Kolla was not > supported by Redhat unfortunely ... > > Stéphane Chalansonnet
From iwienand at redhat.com Fri Apr 9 04:01:11 2021 From: iwienand at redhat.com (Ian Wienand) Date: Fri, 9 Apr 2021 14:01:11 +1000 Subject: [Multi-arch SIG] success to run full tempest tests on Arm64 env. What's next? In-Reply-To: References: Message-ID: On Tue, Apr 06, 2021 at 03:43:29PM +0800, Rico Lin wrote: > The job `devstack-platform-arm64` runs around 2.22 hrs to 3.04 hrs, which > is near two times slower than on x86 environment.
It's not a solid number > as the performance might change a lot with different cloud environments and > different hardware. I guess right now we only have one ARM64 cloud so it won't vary that much :) But we're working on it ... I'd like to use this for nodepool / diskimage-builder end-to-end testing, where we bring up a devstack cloud, build images with dib, upload them to the devstack cloud with nodepool and boot them. But I found that there was no nested virtualisation and the binary translation mode was impractically slow; like I walked away for almost an hour and the serial console was putting out a letter every few seconds like a teletype from 1977 :) $ qemu-system-aarch64 -M virt -m 2048 -drive if=none,file=./test.qcow2,media=disk,id=hd0 -device virtio-blk-device,drive=hd0 -net none -pflash flash0.img -pflash flash1.img Maybe I have something wrong there? I couldn't find a lot of info on how to boot. I expected slow, but not that slow. Is binary translation practical? Is booting cirros images, etc. big part of this much longer runtime? -i From noonedeadpunk at ya.ru Fri Apr 9 05:24:53 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Fri, 09 Apr 2021 08:24:53 +0300 Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: <512255613.133816.1617895321756@ox.dhbw-mannheim.de> References: <512255613.133816.1617895321756@ox.dhbw-mannheim.de> Message-ID: <500221617945664@mail.yandex.ru> An HTML attachment was scrubbed... URL: From cgoncalves at redhat.com Fri Apr 9 06:27:55 2021 From: cgoncalves at redhat.com (Carlos Goncalves) Date: Fri, 9 Apr 2021 08:27:55 +0200 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com> References: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com> Message-ID: On Fri, Apr 9, 2021 at 12:17 AM Steve Baker wrote: > My Tuesday Baremetal 1510-1550 slot is ok, but it would be better for me > if it was earlier in the day. I'll probably make more sense at 1am than 3am > :) > > Could I maybe swap with NETWORKING: 1300-1340? > Fine with me. Michele, Dan? > On 8/04/21 4:24 am, Marios Andreou wrote: > > Hello TripleO o/ > > Thanks again to everybody who has volunteered to lead a session for > the coming Xena TripleO project teams gathering. > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > sessions per day with some breaks. > > Please review the slot assigned for your session at [1]. If that time > is not ok then please let me know as soon as possible and indicate if > you want it later or earlier or on any other day. If you've decided > the session no longer makes sense then also please tell me and we can > move things around accordingly to finish earlier. > > I'd like to finalise the schedule by next Monday 12 April which is a > week before PTG. We can and likely will make changes after this date > but last minute changes are best avoided to allow folks to schedule > their PTG attendance across projects. > > Thanks everybody for your help! Looking forward to interesting > presentations and discussions as always > > regards, marios > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michele at acksyn.org Fri Apr 9 06:44:49 2021 From: michele at acksyn.org (Michele Baldessari) Date: Fri, 9 Apr 2021 08:44:49 +0200 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com> Message-ID: On Fri, Apr 09, 2021 at 08:27:55AM +0200, Carlos Goncalves wrote: > On Fri, Apr 9, 2021 at 12:17 AM Steve Baker wrote: > > > My Tuesday Baremetal 1510-1550 slot is ok, but it would be better for me > > if it was earlier in the day. I'll probably make more sense at 1am than 3am > > :) > > > > Could I maybe swap with NETWORKING: 1300-1340? > > > Fine with me. > Michele, Dan? Totally fine by me > > On 8/04/21 4:24 am, Marios Andreou wrote: > > > > Hello TripleO o/ > > > > Thanks again to everybody who has volunteered to lead a session for > > the coming Xena TripleO project teams gathering. > > > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > > sessions per day with some breaks. > > > > Please review the slot assigned for your session at [1]. If that time > > is not ok then please let me know as soon as possible and indicate if > > you want it later or earlier or on any other day. If you've decided > > the session no longer makes sense then also please tell me and we can > > move things around accordingly to finish earlier. > > > > I'd like to finalise the schedule by next Monday 12 April which is a > > week before PTG. We can and likely will make changes after this date > > but last minute changes are best avoided to allow folks to schedule > > their PTG attendance across projects. > > > > Thanks everybody for your help! Looking forward to interesting > > presentations and discussions as always > > > > regards, marios > > > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > > -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From skaplons at redhat.com Fri Apr 9 06:53:42 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 09 Apr 2021 08:53:42 +0200 Subject: [zun][kuryr][neutron] Missing vxlan ports in br-tun for Zun containers? In-Reply-To: <0A70CB1A-35AA-4782-8BF3-496080E47341@uchicago.edu> References: <0A70CB1A-35AA-4782-8BF3-496080E47341@uchicago.edu> Message-ID: <22354979.Dg0L681ARF@p1> Hi, Dnia czwartek, 8 kwietnia 2021 19:00:19 CEST Jason Anderson pisze: > As usual, “rubber ducking” the openstack-discuss list yielded fruit. It turns out that I didn’t have the l2population mechanism driver enabled. I thought this was optional for some reason. It looks like enabling this and restarting the neutorn-openvswitch-agent has fixed connectivity! L2pop should be optional. It's required only when DVR is used. But if You don't want to use it You should disable it on both agent and server's side. In such case neutron-openvswitcht-agent should establish vxlan tunnels to all other nodes just after start of the agent, during first rpc_loop iteration: https://github.com/openstack/neutron/blob/bdd661d21898d573ef39448316860aa4c692b834/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L2604 > > /Jason > > > On Apr 8, 2021, at 11:20 AM, Jason Anderson wrote: > > > > Hello stackers, > > > > I’m interested in using zun to launch containers and assign floating IPs via neutron to those containers. I am deploying zun, kuryr-libnetwork, and neutron with kolla-ansible on the Train release. I’ve configured neutron with one physical network and I’d like to use a VXLAN overlay for tenant networks. 
> > > > What works: > > - I can launch containers on a neutron tenant network, they start successfully, they get an IP and can reach each other if they’re co-located on a single host. > > - I can create all my neutron networks, routers, subnets, without (obvious) errors. > > - I can update security groups on the container and see the iptables rules updated appropriately. > > - I can directly create Docker networks using the kuryr driver/type. > > > > What doesn’t work: > > - I can’t see any vxlan ports on the br-tun OVS bridge > > - I can’t access the exposed container ports from the control/network node via the router netns > > - Because of that, I can’t assign floating IPs because NAT effectively won’t work to reach the containers > > > > The fact that there are no ports on br-tun is supicious, but I’m not sure how this is supposed to work. I don’t see anything weird in neutron-openvswitch-agent logs but those logs are quite noisy and I’m not sure what to look for. > > > > Has anybody deployed such a setup / are there limitations I should know about? > > > > Thank you! > > > > > > Jason Anderson > > > > DevOps Lead, Chameleon > > > > --- > > > > Department of Computer Science, University of Chicago > > Mathematics and Computer Science, Argonne National Laboratory > > jasonanderson at uchicago.edu > > > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Apr 9 06:57:33 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 09 Apr 2021 08:57:33 +0200 Subject: [neutron][all] Tempest jobs running on rocky and queens branches are broken Message-ID: <4338832.CYQXJBBLPY@p1> Hi, I noticed it mostly in the neutron jobs but it seems that it's true also for other projects for jobs which still runs on Ubuntu 16.04. I Neutron case those are all jobs on stable/rocky and stable/queens branches. Due to [1] those jobs will end up with POST_FAILURE. So please don't recheck Your patches if You have such errors until that bug will be fixed. I think that gmann has or is working on fix for that. [1] https://bugs.launchpad.net/devstack/+bug/1923042 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From cjeanner at redhat.com Fri Apr 9 07:09:36 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 9 Apr 2021 09:09:36 +0200 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: <751d3ecf-b977-0557-de5f-7390e327db1d@redhat.com> Hey so far so good, my 2 slots are OK Cheers, C. On 4/7/21 6:24 PM, Marios Andreou wrote: > Hello TripleO o/ > > Thanks again to everybody who has volunteered to lead a session for > the coming Xena TripleO project teams gathering. > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > sessions per day with some breaks. > > Please review the slot assigned for your session at [1]. If that time > is not ok then please let me know as soon as possible and indicate if > you want it later or earlier or on any other day. If you've decided > the session no longer makes sense then also please tell me and we can > move things around accordingly to finish earlier. 
> > I'd like to finalise the schedule by next Monday 12 April which is a > week before PTG. We can and likely will make changes after this date > but last minute changes are best avoided to allow folks to schedule > their PTG attendance across projects. > > Thanks everybody for your help! Looking forward to interesting > presentations and discussions as always > > regards, marios > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From marios at redhat.com Fri Apr 9 07:10:11 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 9 Apr 2021 10:10:11 +0300 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com> Message-ID: On Fri, Apr 9, 2021 at 9:46 AM Michele Baldessari wrote: > > On Fri, Apr 09, 2021 at 08:27:55AM +0200, Carlos Goncalves wrote: > > On Fri, Apr 9, 2021 at 12:17 AM Steve Baker wrote: > > > > > My Tuesday Baremetal 1510-1550 slot is ok, but it would be better for me > > > if it was earlier in the day. I'll probably make more sense at 1am than 3am > > > :) > > > ouch sorry Steve and thank you for participating despite the bad time-difference for you! Yes we can make this change see below > > > Could I maybe swap with NETWORKING: 1300-1340? > > > > > Fine with me. > > Michele, Dan? > > Totally fine by me Great thanks folks - this works well actually since Dan S. already indicated (in another reply to me) that your current slot (1300-1340 UTC) is too early (like 5 am) so moving it to the later slot should work better for him too. I have just updated the schedule so on Tuesday 20 we have Baremetal sbaker @ 1300-1340 and then the networking/bgp/frr folks at 1510-1550 thank you! regards, marios > > > > On 8/04/21 4:24 am, Marios Andreou wrote: > > > > > > Hello TripleO o/ > > > > > > Thanks again to everybody who has volunteered to lead a session for > > > the coming Xena TripleO project teams gathering. > > > > > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > > > sessions per day with some breaks. > > > > > > Please review the slot assigned for your session at [1]. If that time > > > is not ok then please let me know as soon as possible and indicate if > > > you want it later or earlier or on any other day. If you've decided > > > the session no longer makes sense then also please tell me and we can > > > move things around accordingly to finish earlier. > > > > > > I'd like to finalise the schedule by next Monday 12 April which is a > > > week before PTG. We can and likely will make changes after this date > > > but last minute changes are best avoided to allow folks to schedule > > > their PTG attendance across projects. > > > > > > Thanks everybody for your help! 
Looking forward to interesting > > > presentations and discussions as always > > > > > > regards, marios > > > > > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > > > > > > -- > Michele Baldessari > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > From hberaud at redhat.com Fri Apr 9 07:48:56 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 9 Apr 2021 09:48:56 +0200 Subject: [Release-job-failures] Release of openstack/eslint-config-openstack for ref refs/tags/4.1.0 failed In-Reply-To: <20210408165036.6tkcfwwix5ms3ig4@yuggoth.org> References: <20210408165036.6tkcfwwix5ms3ig4@yuggoth.org> Message-ID: Thanks Jeremy for your update. I'll try to discuss with the project team to see if a bugfix (4.1.1) version fits well for them. Anyway, the previous proposed fix seems to have helped us. We didn't face the max retry issue anymore, indeed, during the latest execution we faced a "post failure" so our job went further. http://lists.openstack.org/pipermail/release-job-failures/2021-April/001528.html Le jeu. 8 avr. 2021 à 18:53, Jeremy Stanley a écrit : > On 2021-04-08 11:54:16 +0200 (+0200), Herve Beraud wrote: > [...] > > I proposed a patch to move to nodejs10 all our projects that depend on > > nodejs: > > > > https://review.opendev.org/c/openstack/project-config/+/785353 > > > > When this patch will be merged I think that this job could be reenqueued. > > I reenqueued the tag, but release-openstack-javascript failed on a > different problem. NPM complains that there's already a > eslint-config-openstack 4.0.1 published which can't be overwritten, > but the tag is for 4.1.0... someone should probably update the > version parameter in eslint-config-openstack's package.json file, > which means it'll need another release tagged anyway (4.1.1?). > -- > Jeremy Stanley > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Fri Apr 9 07:52:17 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Fri, 9 Apr 2021 09:52:17 +0200 Subject: [kolla] RHEL based container image In-Reply-To: References: Message-ID: W dniu 08.04.2021 o 22:04, Tony Liu pisze: > Given [1], RHEL based container is supported on RHEL 8 by Kolla. > Where can I get RHEL based container images? I see CentOS and Ubuntu > based images on docker hub, but can't find RHEL based images. > > I have RHEL subscription and I want to know if it's possible to use > Kolla deploy OpenStack. It's supposed to be yes based on the doc. I > just want to know where I can get container images. The container > image on RedHat is only for TripleO. 
We (as a project) do not build RHEL based container images. During PTG we will discuss dropping it from code [1]. Please use Wallaby CentOS images instead. They are using CentOS Stream 8 so the only difference you would get is what container image was used as a base. 1. https://review.opendev.org/c/openstack/kolla/+/785569 From mark at stackhpc.com Fri Apr 9 07:52:26 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 9 Apr 2021 08:52:26 +0100 Subject: [kolla] RHEL based container image (Tony Liu) In-Reply-To: References: Message-ID: On Fri, 9 Apr 2021 at 02:04, Tony Liu wrote: > > I have RHEL subscription and I want to know if it's possible > to use Kolla deploy OpenStack. It's supposed to be yes based > on the doc. I just want to know where I can get container > images. The container image on RedHat is only for TripleO. Hi Tony, RHEL support is one of those things that was added a long time ago, but is not tested in CI. It is therefore likely to break at any point, especially now that Tripleo does not use Kolla images. I know that RH were pushing the UBI images, and I don't think we've actively done anything to move to those. We've added the future of RHEL support as a discussion topic for the PTG [1]. If you are interested, I recommend that you attend, or at least add some notes to the Etherpad. Thanks, Mark [1] https://etherpad.opendev.org/p/kolla-xena-ptg > > > Thanks! > Tony > > -----Original Message----- > > From: CHALANSONNET Stéphane (Acoss) > > Sent: Thursday, April 8, 2021 1:29 PM > > To: openstack-discuss at lists.openstack.org > > Subject: RE: [kolla] RHEL based container image (Tony Liu) > > > > Hello, > > > > You need an active subscription RHOSP for doing that , but Kolla was not > > supported by Redhat unfortunely ... > > > > Stéphane Chalansonnet > > > > > > -----Message d'origine----- > > De : openstack-discuss-request at lists.openstack.org > request at lists.openstack.org> > > Envoyé : jeudi 8 avril 2021 22:04 > > À : openstack-discuss at lists.openstack.org > > Objet : openstack-discuss Digest, Vol 30, Issue 56 > > > > Send openstack-discuss mailing list submissions to > > openstack-discuss at lists.openstack.org > > > > To subscribe or unsubscribe via the World Wide Web, visit > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack- > > discuss > > or, via email, send a message with subject or body 'help' to > > openstack-discuss-request at lists.openstack.org > > > > You can reach the person managing the list at > > openstack-discuss-owner at lists.openstack.org > > > > When replying, please edit your Subject line so it is more specific than > > "Re: Contents of openstack-discuss digest..." > > > > > > Today's Topics: > > > > 1. Re: [Release-job-failures] Release of > > openstack/eslint-config-openstack for ref refs/tags/4.1.0 failed > > (Jeremy Stanley) > > 2. Re: [zun][kuryr][neutron] Missing vxlan ports in br-tun for > > Zun containers? (Jason Anderson) > > 3. [edge][cinder][manila][swift][tripleo] Storage at the edge > > discussions at the PTG (Ildiko Vancsa) > > 4. Re: [edge][cinder][manila][swift][tripleo] Storage at the > > edge discussions at the PTG (John Fulton) > > 5. 
[kolla] RHEL based container image (Tony Liu)
From hberaud at redhat.com Fri Apr 9 08:49:40 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 9 Apr 2021 10:49:40 +0200 Subject: [telemetry][cyborg][heat][monasca][tacker][keystone][release] Last minute RC to land fixes Message-ID: Hello teams listed above, We identified fixes and significant changes in your repos so we proposed last minute RC to allow you to release them before the final release. Your teams patches are available here: https://review.opendev.org/q/topic:%22wallaby-final-rc%22 Deadline is today, please validate them ASAP to have a chance to see these fixes released. Patches without response from PTLs/liaisons will be abandoned. After this point final release for RC projects will be started.
Notice that RC changes should be on stable/wallaby and not on master, all projects are now branched so your master branches are now for Xena purpose. Thank you for your understanding. -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From janders at redhat.com Fri Apr 9 09:14:19 2021 From: janders at redhat.com (Jacob Anders) Date: Fri, 9 Apr 2021 19:14:19 +1000 Subject: [ironic] APAC-Europe SPUC time? In-Reply-To: References: Message-ID: Hi Dmitry, Thanks for your email and apologies for slow reply. Keeping the APAC SPUC at 10am UTC would work well for me. The only concern is it may fall in the lunch time slot in Europe but that might actually be a good thing - we can do lunch-dinner sessions and talk food if we want to :) @Riccardo what do you reckon? Cheers, Jacob On Thu, Apr 8, 2021 at 12:01 AM Dmitry Tantsur wrote: > Hi folks! > > The initial SPUC datetime was for 10am UTC, which was 11am for us in > central Europe, now is supposed to be 12pm. On one hand, I find it more > convenient to have SPUC at 11am still, on the other - I have German classes > at this time for a few months starting mid-April. > > What do you think? Should we keep it in UTC, i.e. 12pm for us in Europe? > Will that work for you Jacob? > > Dmitry > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Apr 9 09:40:29 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 9 Apr 2021 11:40:29 +0200 Subject: [neutron] Neutron, nftables support and other fantastic beasts Message-ID: Hello Neutrinos: During Wallaby I've been working on enabling "nftables" support in Neutron. The goal was to use the new Netfilter framework replacing the legacy tools ("iptables", "ip6tables", "arptables" and "ebtables"). Because each namespace has its own Netfilter process, isolated from other namespaces, the migration process could be segmented in several tasks: dnat, fip, router, dhcp, metadata, Linux Bridge FW and OVS hybrid FW (I think I'm not missing anything here). When swapping to the new "nftables" framework, we can use the legacy API tools provided. Those tools provide a smooth transition to the new tooling (we found some differences that are now solved). That means we can keep the current code while using "nftables". 
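A small aside to the paragraph above, offered as an illustrative sketch rather than part of the original message: on distributions shipping iptables 1.8 or newer, "iptables -V" reports whether the installed binary is the legacy variant or the nf_tables-backed compatibility layer, which gives a quick way to confirm which of the two supported modes a given host is actually running.

    import subprocess

    def iptables_backend(binary='iptables'):
        """Return 'legacy', 'nftables-compat' or 'unknown' for the given binary."""
        version = subprocess.run([binary, '-V'], capture_output=True,
                                 text=True, check=True).stdout
        if 'nf_tables' in version:
            return 'nftables-compat'
        if 'legacy' in version:
            return 'legacy'
        # iptables older than 1.8 prints no backend tag at all.
        return 'unknown'

    # The ip6tables/arptables/ebtables wrappers shipped alongside the
    # nft-backed iptables generally report the backend the same way.
    print(iptables_backend())
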
Please, read [3] before reading the next paragraph, explaining the three "Netfilter" available framework alternatives. I started creating a "nft" (the "nftables" native binary) parser [1] to implement a NFtablesManager class, same as IPtablesManager. But soon I found that the transition to the new API is not that easy. This is not only a matter of creating the equivalent rule in the "nft" API but considering how those rules are handled in "nftables". Other problems found when using the new "nft" API: - The "--checksum-fill" command used in OVN metadata and DHCP namespace has no equivalent in "nft". That means old DHCP servers incorrectly calculating the packet checksum or DKDP environments won't work correctly. - "ipset" tool, used to group IP addresses and reduce the LB FW rule size, can be converted into a "map" [3]. The problem is this is only understood by the new API, not the "nftables" binaries using the legacy API. In a nutshell, what is the current status? We support (a) legacy tools and (b) "nftables" binaries with legacy API. This is the list of patches enabling the second option: - https://review.opendev.org/c/openstack/neutron/+/784913: this problem was affecting LB FW when "ipset" was disabled (merged). - https://review.opendev.org/c/openstack/neutron/+/785177: reorder the "ebtables" rules and prevent execution error 4 with empty chains. - https://review.opendev.org/c/openstack/neutron/+/785144: this patch, on top of the other two, creates two new neutron-tempest-plugin CI jobs, based on "linuxbridge" and "openvswitch-iptables_hybrid", to test the execution with the new binaries. - https://review.opendev.org/c/openstack/neutron/+/775413: this patch tests what is implemented in the previous one but testing those jobs in the "check" queue (it is a DNM patch just for testing). About the third option, to support the native "nft" API, I don't know if now we have the resources (time) and the need for that. This could be discussed again in the next PTG and in this mail too. Regards. [1]https://review.opendev.org/c/openstack/neutron/+/759874 [2] https://review.opendev.org/c/openstack/neutron/+/785137/3/doc/source/admin/deploy-lb.rst [3] https://review.opendev.org/c/openstack/neutron/+/775413/10/neutron/agent/linux/ipset_manager.py -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Fri Apr 9 10:05:54 2021 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Fri, 9 Apr 2021 10:05:54 +0000 Subject: =?utf-8?B?562U5aSNOiBbdGVsZW1ldHJ5XVtjeWJvcmddW2hlYXRdW21vbmFzY2FdW3Rh?= =?utf-8?B?Y2tlcl1ba2V5c3RvbmVdW3JlbGVhc2VdIExhc3QgbWludXRlIFJDIHRvIGxh?= =?utf-8?Q?nd_fixes?= In-Reply-To: References: Message-ID: Thanks Herve Beraud, +1 for this patch. brinzhang Inspur Electronic Information Industry Co.,Ltd. 发件人: Herve Beraud [mailto:hberaud at redhat.com] 发送时间: 2021年4月9日 16:50 收件人: openstack-discuss 主题: [telemetry][cyborg][heat][monasca][tacker][keystone][release] Last minute RC to land fixes Hello teams listed above, We identified fixes and significant changes in your repos so we proposed last minute RC to allow you to release them before the final release. Your teams patches are available here: https://review.opendev.org/q/topic:%22wallaby-final-rc%22 Deadline is today, please validate them ASAP to have a chance to see these fixes released. Patches without response from PTLs/liaisons will be abandoned. After this point final release for RC projects will be started. 
Notice that RC changes should be on stable/wallaby and not on master, all projects are now branched so your master branches are now for Xena purpose. Thank you for your understanding. -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Apr 9 12:28:11 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 9 Apr 2021 14:28:11 +0200 Subject: [QA][Release-job-failures] Release of openstack/eslint-config-openstack for ref refs/tags/4.1.0 failed In-Reply-To: References: <20210408165036.6tkcfwwix5ms3ig4@yuggoth.org> Message-ID: Adding the QA team (they own eslint-config-openstack) to highlight this topic. Le ven. 9 avr. 2021 à 09:48, Herve Beraud a écrit : > Thanks Jeremy for your update. > > I'll try to discuss with the project team to see if a bugfix (4.1.1) > version fits well for them. > > Anyway, the previous proposed fix seems to have helped us. We didn't face > the max retry issue anymore, indeed, during the latest execution we faced a > "post failure" so our job went further. > > > http://lists.openstack.org/pipermail/release-job-failures/2021-April/001528.html > > Le jeu. 8 avr. 2021 à 18:53, Jeremy Stanley a écrit : > >> On 2021-04-08 11:54:16 +0200 (+0200), Herve Beraud wrote: >> [...] >> > I proposed a patch to move to nodejs10 all our projects that depend on >> > nodejs: >> > >> > https://review.opendev.org/c/openstack/project-config/+/785353 >> > >> > When this patch will be merged I think that this job could be >> reenqueued. >> >> I reenqueued the tag, but release-openstack-javascript failed on a >> different problem. NPM complains that there's already a >> eslint-config-openstack 4.0.1 published which can't be overwritten, >> but the tag is for 4.1.0... someone should probably update the >> version parameter in eslint-config-openstack's package.json file, >> which means it'll need another release tagged anyway (4.1.1?). 
>> -- >> Jeremy Stanley >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Apr 9 13:27:29 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Apr 2021 08:27:29 -0500 Subject: [neutron][all] Tempest jobs running on rocky and queens branches are broken In-Reply-To: <4338832.CYQXJBBLPY@p1> References: <4338832.CYQXJBBLPY@p1> Message-ID: <178b6d0ef1c.12a602b48166191.5024862720479411475@ghanshyammann.com> ---- On Fri, 09 Apr 2021 01:57:33 -0500 Slawek Kaplonski wrote ---- > Hi, > > I noticed it mostly in the neutron jobs but it seems that it's true also for other projects for jobs which still runs on Ubuntu 16.04. > I Neutron case those are all jobs on stable/rocky and stable/queens branches. > > Due to [1] those jobs will end up with POST_FAILURE. So please don't recheck Your patches if You have such errors until that bug will be fixed. > I think that gmann has or is working on fix for that. Yeah, making stackviz not to fail job is up, please wait until those land. 
- https://review.opendev.org/q/Ifee04f28ecee52e74803f1623aba5cfe5ee5ec90 -gmann > > [1] https://bugs.launchpad.net/devstack/+bug/1923042 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From pramchan at yahoo.com Fri Apr 9 14:37:55 2021 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 9 Apr 2021 14:37:55 +0000 (UTC) Subject: [Interop][Refstack] this Friday meeting In-Reply-To: <1443143388.3156237.1617300716714@mail.yahoo.com> References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> <1443143388.3156237.1617300716714@mail.yahoo.com> Message-ID: <633882697.270095.1617979075678@mail.yahoo.com> Hi all, Have a vaccine apptmt at 9.40 PDT. Depending on the schedule may get late or may miss. Wanted to get some feedback  on testing results but still have 1 more week and next one will depend on where I am and try catch up if I miss on etherpad as what is the results and where we are Wallaby on way  next  week and Xena  release planned for Ocober https://releases.openstack.org/wallaby/schedule.html#w-final What's new in Wallaby and what Tempest testing get impacted in vote and add-ons. ThanksPrakashFor InteropWG Sent from Yahoo Mail on Android On Thu, Apr 1, 2021 at 11:11 AM, prakash RAMCHANDRAN wrote: Looks like we can skip this Friday call and sure Arkady - lets cancel it. If you have something urgent we can talk offline - Thanks Prakash On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan wrote: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks,Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Fri Apr 9 14:43:33 2021 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 9 Apr 2021 14:43:33 +0000 (UTC) Subject: [Interop][Refstack] this Friday meeting In-Reply-To: <633882697.270095.1617979075678@mail.yahoo.com> References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> <1443143388.3156237.1617300716714@mail.yahoo.com> <633882697.270095.1617979075678@mail.yahoo.com> Message-ID: <481410903.269918.1617979413931@mail.yahoo.com> Typo:Wallaby not Vote. Tha's a different topic related to TC voting and who are contesting for TC? Plus PTG plans on 19th and beyond,  coverage and was planning to attend besides Interop, possibly Airship and Triple-O changes wrt Ironic. It's impact on Zun and COEs in OpenStack K over O. ThxPrakash Sent from Yahoo Mail on Android On Fri, Apr 9, 2021 at 7:37 AM, prakash RAMCHANDRAN wrote: Hi all, Have a vaccine apptmt at 9.40 PDT. Depending on the schedule may get late or may miss. 
Wanted to get some feedback  on testing results but still have 1 more week and next one will depend on where I am and try catch up if I miss on etherpad as what is the results and where we are Wallaby on way  next  week and Xena  release planned for Ocober https://releases.openstack.org/wallaby/schedule.html#w-final What's new in Wallaby and what Tempest testing get impacted in vote and add-ons. ThanksPrakashFor InteropWG Sent from Yahoo Mail on Android On Thu, Apr 1, 2021 at 11:11 AM, prakash RAMCHANDRAN wrote: Looks like we can skip this Friday call and sure Arkady - lets cancel it. If you have something urgent we can talk offline - Thanks Prakash On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan wrote: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks,Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Apr 9 15:59:58 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Apr 2021 10:59:58 -0500 Subject: [dev][cinder][keystone] Properly consuming system-scope in cinder In-Reply-To: References: <20210129172347.7wi3cv3gnneb46dj@localhost> Message-ID: <178b75c896c.10c83cd90176713.774777712214605451@ghanshyammann.com> ---- On Thu, 18 Feb 2021 17:53:06 -0600 Lance Bragstad wrote ---- > Brian and I had a discussion about all of this yesterday and we revisited the idea of a project-less URL template. This would allow us to revisit system-scope support for Wallaby under the assumption the client handles project IDs properly for system-scoped requests and cinder relaxes its project ID validation for system-scoped contexts. > > It's possible to get a cinder endpoint in the service catalog if you create a separate endpoint without project ID templating in the URL. I hacked this together in devstack [0] using a couple of changes to python-cinderclient [1] and cinder's API [2]. After that, I was able to list all volumes in a deployment as a system-administrator (using a system-scoped admin token) [3]. > The only hiccup I hit was that I was supplying two endpoints for the volumev3 service. If the endpoint without project ID templating appears first in the catalog for project-scoped tokens, then requests to cinder will fail because the project ID isn't in the URL. Remember, the only cinder endpoint in the catalog for system-scoped tokens was the one without templating, so this issue doesn't appear there. Also, we would need a separate patch to the tempest volume client before we could add any system-scope testing there. > > Thoughts? To solve the issue of which service catalog Tempest service clients should query, We can register the new endpoint with new name. 
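As a rough illustration of that two-entry idea (an illustrative sketch only: the service names, URLs and region below are placeholders, and whether devstack/Tempest adopt this exact naming is precisely what is being proposed here), registering both catalog entries through the openstacksdk identity proxy could look roughly like:

    import openstack

    conn = openstack.connect(cloud='devstack-admin')  # hypothetical clouds.yaml entry

    # Old-style entry keeps the project_id templating in the URL.
    legacy = conn.identity.create_service(
        name='cinderv3_legacy', type='volumev3_legacy',
        description='Cinder v3, project_id templated URL')

    # New-style entry drops project_id from the URL, so it also shows up
    # in system-scoped service catalogs.
    modern = conn.identity.create_service(
        name='cinderv3', type='volumev3',
        description='Cinder v3, no project_id in URL')

    endpoints = [
        (legacy, 'http://controller/volume/v3/$(project_id)s'),
        (modern, 'http://controller/volume/v3'),
    ]
    for service, url in endpoints:
        for interface in ('public', 'internal', 'admin'):
            conn.identity.create_endpoint(service_id=service.id,
                                          interface=interface,
                                          url=url, region_id='RegionOne')
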
In nova case when we moved the URL without project-id, we did move the old endpoint (with project_id) with name 'compute_legacy' and added new endpoint (without project-id)to 'compute' which is the default service catalog in Tempest to query[2]. Same way we can do for cinder too, the new endpoint with the name 'volumev3' (default catalog for cinder in Tempest) and old one can be moved to 'volumev3_legacy'. And to keep testing the old endpoint, we can add a separate job to query on old endpoint and rest of everything default to new endpoint. [1] https://github.com/openstack/devstack/blob/e53142ed0d314f07d974a104005be2120056d629/lib/nova#L357-L363 [2] https://github.com/openstack/tempest/blob/fa0a40b8bbc4f7e93a976f5575f8ad7c1890e0f4/tempest/config.py#L331 -gmann > > [0] https://review.opendev.org/c/openstack/devstack/+/776520[1] https://review.opendev.org/c/openstack/python-cinderclient/+/776469[2] https://review.opendev.org/c/openstack/cinder/+/776468[3] http://paste.openstack.org/show/802786/ > > > > On Wed, Feb 17, 2021 at 12:11 PM Lance Bragstad wrote: > Circling back on this topic. > I marked all the patches that incorporate system-scope support as WIP [0]. I think we can come back to these after we have a chance to decouple project IDs from cinder's API in Xena. I imagine that's going to be a pretty big change so we can push those reviews to the back burner for now. > > In the meantime, I reproposed all patches that touch the ADMIN_OR_OWNER rule and updated them to use the member and reader roles [1]. I also removed any system-scope policies from those patches. The surface area of these changes is a lot less than what we were originally expecting to get done for Wallaby. These changes should at least allow operators to use the member and reader roles on projects consistently with cinder when Wallaby goes out the door. > > To recap, this would mean anyone with the admin role on a project is still considered a system administrator in cinder (we can try and fix this in Xena). Operators can now use the member role to denote owners and give users the reader role on a project and those users shouldn't be able to make writable changes within cinder. > > [0] https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac+label:Workflow%253C0[1] https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac+label:Workflow%253E-1 > On Fri, Jan 29, 2021 at 11:24 AM Gorka Eguileor wrote: > On 28/01, Lance Bragstad wrote: > > Hey folks, > > > > As I'm sure some of the cinder folks are aware, I'm updating cinder > > policies to include support for some default personas keystone ships with. > > Some of those personas use system-scope (e.g., system-reader and > > system-admin) and I've already proposed a series of patches that describe > > what those changes look like from a policy perspective [0]. > > > > The question now is how we test those changes. To help guide that decision, > > I worked on three different testing approaches. The first was to continue > > testing policy using unit tests in cinder with mocked context objects. The > > second was to use DDT with keystonemiddleware mocked to remove a dependency > > on keystone. The third also used DDT, but included changes to update > > NoAuthMiddleware so that it wasn't as opinionated about authentication or > > authorization. I brought each approach in the cinder meeting this week > > where we discussed a fourth approach, doing everything in tempest. 
I > > summarized all of this in an etherpad [1] > > > > Up to yesterday morning, the only approach I hadn't tinkered with manually > > was tempest. I spent some time today figuring that out, resulting in a > > patch to cinderlib [2] to enable a protection test job, and > > cinder_tempest_plugin [3] that adds the plumbing and some example tests. > > > > In the process of implementing support for tempest testing, I noticed that > > service catalogs for system-scoped tokens don't contain cinder endpoints > > [4]. This is because the cinder endpoint contains endpoint templating in > > the URL [5], which keystone will substitute with the project ID of the > > token, if and only if the catalog is built for a project-scoped token. > > System and domain-scoped tokens do not have a reasonable project ID to use > > in this case, so the templating is skipped, resulting in a cinder service > > in the catalog without endpoints [6]. > > > > This cascades in the client, specifically tempest's volume client, because > > it can't find a suitable endpoint for request to the volume service [7]. > > > > Initially, my testing approaches were to provide examples for cinder > > developers to assess the viability of each approach before committing to a > > protection testing strategy. But, the tempest approach highlighted a larger > > issue for how we integrate system-scope support into cinder because of the > > assumption there will always be a project ID in the path (for the majority > > of the cinder API). I can think of two ways to approach the problem, but > > I'm hoping others have more. > > > > Hi Lance, > > Sorry to hear that the Cinder is giving you such trouble. > > > First, we remove project IDs from cinder's API path. > > > > This would be similar to how nova (and I assume other services) moved away > > from project-specific URLs (e.g., /v3/%{project_id}s/volumes would become > > /v3/volumes). This would obviously require refactoring to remove any > > assumptions cinder has about project IDs being supplied on the request > > path. But, this would force all authorization information to come from the > > context object. Once a deployer removes the endpoint URL templating, the > > endpoints will populate in the cinder entry of the service catalog. Brian's > > been helping me understand this and we're unsure if this is something we > > could even do with a microversion. I think nova did it moving from /v2/ to > > /v2.0/, which was technically classified as a major bump? This feels like a > > moon shot. > > > > In my opinion such a change should not be treated as a microversion and > would require us to go into v4, which is not something that is feasible > in the short term. > > > > Second, we update cinder's clients, including tempest, to put the project > > ID on the URL. > > > > After we update the clients to append the project ID for cinder endpoints, > > we should be able to remove the URL templating in keystone, allowing cinder > > endpoints to appear in system-scoped service catalogs (just like the first > > approach). Clients can use the base URL from the catalog and append the > > I'm not familiar with keystone catalog entries, so maybe I'm saying > something stupid, but couldn't we have multiple entries? A > project-specific URL and another one for the project and system scoped > requests? 
> > I know it sounds kind of hackish, but if we add them in the right order, > first the project one and then the new one, it would probably be > backward compatible, as older clients would get the first endpoint and > new clients would be able to select the right one. > > > admin project ID before putting the request on the wire. Even though the > > request has a project ID in the path, cinder would ignore it for > > system-specific APIs. This is already true for users with an admin role on > > a project because cinder will allow you to get volumes in one project if > > you have a token scoped to another with the admin role [8]. One potential > > side-effect is that cinder clients would need *a* project ID to build a > > request, potentially requiring another roundtrip to keystone. > > What would happen in this additional roundtrip? Would we be converting > provided project's name into its UUID? > > If that's the case then it wouldn't happen when UUIDs are being > provided, so for cases where this extra request means a performance > problem they could just provide the UUID. > > > > > Thoughts? > > Truth is that I would love to see the Cinder API move into URLs without > the project id as well as move out everything from contrib, but that > doesn't seem like a realistic piece of work we can bite right now. > > So I think your second proposal is the way to go. > > Thanks for all the work you are putting into this. > > Cheers, > Gorka. > > > > > > [0] https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac > > [1] https://etherpad.opendev.org/p/cinder-secure-rbac-protection-testing > > [2] https://review.opendev.org/c/openstack/cinderlib/+/772770 > > [3] https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/772915 > > [4] http://paste.openstack.org/show/802117/ > > [5] http://paste.openstack.org/show/802097/ > > [6] > > https://opendev.org/openstack/keystone/src/commit/c239cc66615b41a0c09e031b3e268c82678bac12/keystone/catalog/backends/sql.py > > [7] http://paste.openstack.org/show/802092/ > > [8] http://paste.openstack.org/show/802118/ > > From marios at redhat.com Fri Apr 9 16:02:26 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 9 Apr 2021 19:02:26 +0300 Subject: [TripleO] stable/wallaby branching Message-ID: Hello TripleO, quick update on the plan for stable/wallaby branching. The goal is to release tripleo stable/wallaby just after PTG i.e. last week of April. The tripleo-ci team have spent the previous sprint preparing and we now have the integration and component pipelines in place [1][2]. As of today we should also have the upstream check/gate multinode branchful jobs. We are planning to use this current sprint to resolve issues and ensure we have the CI coverage in place so we can safely release all the tripleo things. As we usually do, we are going to first branch python-tripleoclient and tripleo-common so we can exercise and sanity check the CI jobs. The stable/wallaby for client and common will appear after we merge [3]. *** PLEASE AVOID *** posting patches to stable/wallaby python-tripleoclient or tripleo-common until the CI team has completed our testing. Basically until we are ready to create a stable/wallaby for all the tripleo things (which will be announced in due course). 
Obviously as always please speak up if you disagree with any of the above or if something doesn't make sense or if you have any concerns about the proposed timings regards, marios [1] https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-integration-stable1 [2] https://review.rdoproject.org/zuul/builds?pipeline=openstack-component-tripleo [3] https://review.opendev.org/c/openstack/releases/+/785670 From marios at redhat.com Fri Apr 9 16:18:24 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 9 Apr 2021 19:18:24 +0300 Subject: [TripleO] next irc meeting Tuesday Apr 13 @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 13 April at 1400 UTC in #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to highlight at https://etherpad.opendev.org/p/tripleo-meeting-items This can be recently completed things, ongoing review requests, blocking issues, or anything else tripleo you want to share. Our last meeting was on Mar 30 - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-03-30-14.00.html Hope you can make it on Tuesday, regards, marios From tonyliu0592 at hotmail.com Fri Apr 9 16:45:45 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 9 Apr 2021 16:45:45 +0000 Subject: [kolla] RHEL based container image In-Reply-To: References: Message-ID: Thank you Mark and Marcin for clarification! Will stay with Kolla and CentOS Stream. Tony > -----Original Message----- > From: Mark Goddard > Sent: Friday, April 9, 2021 12:57 AM > To: Marcin Juszkiewicz > Cc: openstack-discuss > Subject: Re: [kolla] RHEL based container image > > On Fri, 9 Apr 2021 at 08:52, Marcin Juszkiewicz > wrote: > > > > W dniu 08.04.2021 o 22:04, Tony Liu pisze: > > > > > Given [1], RHEL based container is supported on RHEL 8 by Kolla. > > > Where can I get RHEL based container images? I see CentOS and Ubuntu > > > based images on docker hub, but can't find RHEL based images. > > > > > > I have RHEL subscription and I want to know if it's possible to use > > > Kolla deploy OpenStack. It's supposed to be yes based on the doc. I > > > just want to know where I can get container images. The container > > > image on RedHat is only for TripleO. > > > > We (as a project) do not build RHEL based container images. During PTG > > we will discuss dropping it from code [1]. > > > > Please use Wallaby CentOS images instead. They are using CentOS Stream > > 8 so the only difference you would get is what container image was > > used as a base. > > > > 1. https://review.opendev.org/c/openstack/kolla/+/785569 > > > > For those not at the coal face... Wallaby isn't released yet - please > use Victoria or earlier! From mark at stackhpc.com Fri Apr 9 16:48:29 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 9 Apr 2021 17:48:29 +0100 Subject: [kolla] PTL on holiday next week Message-ID: Hi, I'll be on holiday next week. Please keep in mind the current feature freeze and aim for stabilisation of the code and preparation for RC1 & branching. Let's aim to branch in the week beginning 19th April. Please also remember it's the PTG in the same week. Remember to add topics to the PTG Etherpad [1] as they come up in discussion or your thoughts. 
[1] https://etherpad.opendev.org/p/kolla-xena-ptg Thanks, Mark From mark at stackhpc.com Fri Apr 9 16:50:13 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 9 Apr 2021 17:50:13 +0100 Subject: [kolla] PTG Message-ID: Hi, Just a reminder that it's the Kolla Xena PTG from 19th - 21st April. Anyone is welcome to attend, but please add your name to the Etherpad [1], and follow the instructions to sign up. If there is something you would like to discuss, please add it to the list of topics. Thanks, Mark [1] https://etherpad.opendev.org/p/kolla-xena-ptg From allison at openstack.org Fri Apr 9 19:06:52 2021 From: allison at openstack.org (Allison Price) Date: Fri, 9 Apr 2021 14:06:52 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> Message-ID: Hi Julia, It looks like for Ironic, 19% of production environments are running it in production, 15% running it in testing, and 22% are interested. It’s a little down from 2019, but it was also a smaller sample size (2019: 331; 2020: 209). I am hoping to get a bigger turnout this year (tell all your friends!) so that we can get a better picture. Let me know if there is any other data you would like pulled. Thanks! Allison > On Apr 8, 2021, at 9:43 AM, Julia Kreger wrote: > > Hey Allison, > > Metrics would be awesome and I'm just looking for the key high level > adoption information as that is good to put into the presentation. > > -Julia > > On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote: >> >> Hi Julia, >> >> I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. >> >> Thanks! >> Allison >> >> >> >> >> >> On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: >> >> Related, Is there 2020 user survey data available? >> >> On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org >> wrote: >> >> >> Hello ptls, >> >> >> >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. >> >> >> >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. >> >> >> >> Let me know if you have any other questions! >> >> >> >> Thank you for your participation, >> >> Helena >> >> >> >> > From juliaashleykreger at gmail.com Fri Apr 9 19:28:00 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 9 Apr 2021 12:28:00 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> Message-ID: Thanks Allison! Even telling friends doesn't really help, since we would be self-skewing. I guess part of the conundrum is it is easy for people not to really be fully aware of the extent of their usage and the mix of various projects under the hood. They know they get a star ship and it has warp engines, but they may not know the factory that turned out the starship. Only the geekiest might know those details. Anyway, I've been down this path before w/r/t the user survey. C'est la vie. Back to work! 
-Julia On Fri, Apr 9, 2021 at 12:07 PM Allison Price wrote: > > Hi Julia, > > It looks like for Ironic, 19% of production environments are running it in production, 15% running it in testing, and 22% are interested. It’s a little down from 2019, but it was also a smaller sample size (2019: 331; 2020: 209). I am hoping to get a bigger turnout this year (tell all your friends!) so that we can get a better picture. > > Let me know if there is any other data you would like pulled. > > Thanks! > Allison > > > > On Apr 8, 2021, at 9:43 AM, Julia Kreger wrote: > > > > Hey Allison, > > > > Metrics would be awesome and I'm just looking for the key high level > > adoption information as that is good to put into the presentation. > > > > -Julia > > > > On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote: > >> > >> Hi Julia, > >> > >> I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. > >> > >> Thanks! > >> Allison > >> > >> > >> > >> > >> > >> On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: > >> > >> Related, Is there 2020 user survey data available? > >> > >> On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org > >> wrote: > >> > >> > >> Hello ptls, > >> > >> > >> > >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > >> > >> > >> > >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > >> > >> > >> > >> Let me know if you have any other questions! > >> > >> > >> > >> Thank you for your participation, > >> > >> Helena > >> > >> > >> > >> > > > From gmann at ghanshyammann.com Fri Apr 9 19:32:25 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Apr 2021 14:32:25 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 9th April, 21 Message-ID: <178b81f0933.10e4b896f183324.6966564323891095362@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. If you feel this email is lengthy and can take time to read, I tried to categorize the topic for an easy read and should not take more than 5 min of your time. 1. What we completed this week: ========================= Project updates: ------------------- ** Keystone is switched to DPL model[1]. ** Mistral is switched to DPL model[2]. ** Made devstack-plugin-(amqp1|kafka) branchless[3] ** Deprecated project/deliverables: 1. networking-midonet[4] 2. monasca-transform[5] 3. monasca-analytics[6] 4. monasca-ceilometer[7] 5. monasca-log-api[7] Other updates: ------------------ ** PTL assignment for Xena cycle leaderless projects: We have finished the leader assignments for the leaderless project for Xena cycle[8]. Total 8 projects were leaderless in Xena election. PTL assigned to 6 projects, and 2 projects (Keystone and Mistral) adopted DPL model. ** Radosław Piliszek(yoctozepto) is vice-chair of TC for Xena cycle. ** Prepared the Community newsletter: "OpenStack project news" for this month[9]. 2. 
TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-04-08-15.00.log.html * We will have next week's meeting on April 15th, Thursday 15:00 UTC. 3. Activities In progress: ================== Open Reviews ----------------- * No open reviews this week[10]. This is good progress by TC this week. Gate performance and heavy job configs ------------------------------------------------ * dansmith sent the progress on ML[11], and there is a good improvement on gate utilization. Thanks to dansmith for keep monitoring it and collecting the data. Election for one Vacant TC seat ------------------------------------- Voting is started for one open seat for TC and open until April 15, 2021 23:45 UTC. You might have got the email with the voting link; if not please read the instruction in the email from fungi[12]. PTG ----- TC is planning to meet in PTG for Thursday 2 hrs and Friday 4 hrs, details are in etherpad[13], feel free to add topic you would like to discuss with TC in PTG. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[13]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [14] 3. Office hours: The Technical Committee offers two office hours per week in #openstack-tc [15]: * Tuesday at 0100 UTC * Wednesday at 1500 UTC 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://review.opendev.org/c/openstack/governance/+/784102 [2] https://review.opendev.org/c/openstack/governance/+/782195 [3] https://review.opendev.org/c/openstack/governance/+/784544 [4 ]https://review.opendev.org/c/openstack/governance/+/783799 [5] https://review.opendev.org/c/openstack/governance/+/783624 [6] https://review.opendev.org/c/openstack/governance/+/783659 [7] https://review.opendev.org/c/openstack/governance/+/783657 [8] https://etherpad.opendev.org/p/xena-leaderless [9] https://etherpad.opendev.org/p/newsletter-openstack-news [10] https://review.opendev.org/q/project:openstack/governance+status:open [11] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021534.html [12] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021718.html [13] https://etherpad.opendev.org/p/tc-xena-ptg [14] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [15] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [16] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From manteshpatil347 at gmail.com Fri Apr 9 03:18:07 2021 From: manteshpatil347 at gmail.com (Mantesh Patil) Date: Fri, 9 Apr 2021 08:48:07 +0530 Subject: [Group-based-policy] not able to create the policies Message-ID: Hi, I am deploying devstack Ussuri with GBP (stable/ussuri) on ubuntu 18.04 LTS(Updated the packages), after installation I am able to create the policies as mentioned in the wiki . But after creation, I am not able to list the policies. I am using the following command to list the policies and it is giving a warning "/usr/local/lib/python3.6/dist-packages/keystoneauth1/adapter.py:235: UserWarning: Using keystoneclient sessions has been deprecated. Please update your software to use keystoneauth1. warnings.warn('Using keystoneclient sessions has been deprecated. 
'" Command1: source admin-openrc.sh *Command2: gbp group-create web * and also getting the following error while creating the group [image: image.png] Command3: *gbp group-list -c name -c tenant_id -f value* Please give the information that how can I get a list of group policies using CLI and new authentication. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 37031 bytes Desc: not available URL: From zhaochenzhou at live.cn Fri Apr 9 03:42:23 2021 From: zhaochenzhou at live.cn (=?utf-8?B?6LW16ZmI5rSy?=) Date: Fri, 9 Apr 2021 11:42:23 +0800 Subject: migration maybe a iaas service for openstack Message-ID: An HTML attachment was scrubbed... URL: From zhaochenzhou at live.cn Fri Apr 9 03:55:21 2021 From: zhaochenzhou at live.cn (=?utf-8?B?6LW16ZmI5rSy?=) Date: Fri, 9 Apr 2021 11:55:21 +0800 Subject: =?utf-8?Q?Between_openstack_and_offline_IDC,_a_large_intranet_an?= =?utf-8?Q?d_a_large_second-tier_network_in_the_world_improve_the_feasibil?= =?utf-8?Q?ity_of_data_migration_and_disaster_recovery.?= Message-ID: An HTML attachment was scrubbed... URL: From rpittau at redhat.com Fri Apr 9 10:24:36 2021 From: rpittau at redhat.com (Riccardo Pittau) Date: Fri, 9 Apr 2021 12:24:36 +0200 Subject: [ironic] APAC-Europe SPUC time? In-Reply-To: References: Message-ID: 10am UTC works for me too, always in to talk about food! :) Thanks, Riccardo On Fri, Apr 9, 2021 at 11:14 AM Jacob Anders wrote: > Hi Dmitry, > > Thanks for your email and apologies for slow reply. > > Keeping the APAC SPUC at 10am UTC would work well for me. > > The only concern is it may fall in the lunch time slot in Europe but that > might actually be a good thing - we can do lunch-dinner sessions and talk > food if we want to :) @Riccardo what do you reckon? > > Cheers, > Jacob > > On Thu, Apr 8, 2021 at 12:01 AM Dmitry Tantsur > wrote: > >> Hi folks! >> >> The initial SPUC datetime was for 10am UTC, which was 11am for us in >> central Europe, now is supposed to be 12pm. On one hand, I find it more >> convenient to have SPUC at 11am still, on the other - I have German classes >> at this time for a few months starting mid-April. >> >> What do you think? Should we keep it in UTC, i.e. 12pm for us in Europe? >> Will that work for you Jacob? >> >> Dmitry >> >> -- >> Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, >> Commercial register: Amtsgericht Muenchen, HRB 153243, >> Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael >> O'Neill >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmeng at uvic.ca Fri Apr 9 21:04:41 2021 From: dmeng at uvic.ca (dmeng) Date: Fri, 09 Apr 2021 14:04:41 -0700 Subject: [sdk]: compute service create_server method, how to create multiple servers Message-ID: Hello there, Hope this email finds you well. We are currently using the openstacksdk for developing our product, and have a question about the openstacksdk compute service create_server() method. We are wondering if the "max_count" attribute is supported by the create_server() method? We tried to create multiple servers by setting the max_count value, but only one server has been created. While when we use the python-novaclient package, novaclient.servers.create() method has the attribute max_count which allows creating multiple servers at once if set the value. 
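A hedged sketch of one way to pass min_count/max_count through the SDK, not an authoritative answer: whether conn.compute.create_server() itself accepts these fields depends on the SDK release, so the example below goes through the compute proxy's plain REST interface (the proxy is a keystoneauth Adapter, so .post() is available) and sends the documented Nova "create server" body directly. The cloud name and the image/flavor/network IDs are placeholders.

    import openstack

    conn = openstack.connect(cloud='mycloud')  # assumes a clouds.yaml entry

    body = {
        "server": {
            "name": "worker",
            "imageRef": "IMAGE_UUID",                # placeholder
            "flavorRef": "FLAVOR_UUID",              # placeholder
            "networks": [{"uuid": "NETWORK_UUID"}],  # placeholder
            "min_count": 3,
            "max_count": 3,
        }
    }
    resp = conn.compute.post('/servers', json=body)
    resp.raise_for_status()
    # Nova returns a single server record here; the remaining instances are
    # created asynchronously and can be listed with conn.compute.servers().
    print(resp.json())
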
So we would like to know is there a similar attribute like "max_count" that could allow us to create multiple servers at once in openstacksdk? Thanks and have a great day! Catherine -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Sat Apr 10 11:25:32 2021 From: hberaud at redhat.com (Herve Beraud) Date: Sat, 10 Apr 2021 13:25:32 +0200 Subject: [release] Release countdown for week R-0 Apr 12 - Apr 16 Message-ID: Development Focus ----------------- We will be releasing the coordinated OpenStack Wallaby release next week, on Wednesday, 14 April, 2021. Thanks to everyone involved in the Wallaby cycle! We are now in pre-release freeze, so no new deliverable will be created until final release, unless a release-critical regression is spotted. Otherwise, teams attending the virtual PTG should start to plan what they will be discussing there, by creating and filling team etherpads. You can access the list of PTG etherpads at: http://ptg.openstack.org/etherpads.html General Information ------------------- On release day, the release team will produce final versions of deliverables following the cycle-with-rc release model, by re-tagging the commit used for the last RC. A patch doing just that will be proposed. PTLs and release liaisons should watch for that final release patch from the release team. While not required, we would appreciate having an ack from each team before we approve it on the 14th, so that their approval is included in the metadata that goes onto the signed tag. Upcoming Deadlines & Dates -------------------------- Final Wallaby release: 14 April, 2021 Xena virtual PTG: 19 - 23 April, 2021 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Apr 10 16:42:56 2021 From: zigo at debian.org (Thomas Goirand) Date: Sat, 10 Apr 2021 18:42:56 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: Hi, Thanks a lot for the initiative, I very much enjoy the updates each cycle. However... On 4/7/21 8:37 PM, helena at openstack.org wrote: > Hello ptls, > > The community meeting for the Wallaby release will be next Thursday, > April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live > session via Zoom as well as live-streamed to YouTube. I'm sorry, but as nothing changes, I have to state it again. https://www.zdnet.com/article/critical-zoom-vulnerability-triggers-remote-code-execution-without-user-input/ And that's not the first time. 
There's free software alternatives (like Jitsi) and the tooling to deploy them are also available [1]. It has been proven to work very well and scale nicely with thousands of viewers. I regret that I'm the only person protesting about Zoom... Cheers, Thomas Goirand (zigo) [1] https://debconf-video-team.pages.debian.net/docs/ From Arkady.Kanevsky at dell.com Sun Apr 11 00:33:46 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 00:33:46 +0000 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> Message-ID: Dell Customer Communication - Confidential Suggest that for next survey we also ask which protocol(s) customer using in Ironic. -----Original Message----- From: Julia Kreger Sent: Friday, April 9, 2021 2:28 PM To: Allison Price Cc: helena at openstack.org; OpenStack Discuss Subject: Re: [ptl] Wallaby Release Community Meeting [EXTERNAL EMAIL] Thanks Allison! Even telling friends doesn't really help, since we would be self-skewing. I guess part of the conundrum is it is easy for people not to really be fully aware of the extent of their usage and the mix of various projects under the hood. They know they get a star ship and it has warp engines, but they may not know the factory that turned out the starship. Only the geekiest might know those details. Anyway, I've been down this path before w/r/t the user survey. C'est la vie. Back to work! -Julia On Fri, Apr 9, 2021 at 12:07 PM Allison Price wrote: > > Hi Julia, > > It looks like for Ironic, 19% of production environments are running it in production, 15% running it in testing, and 22% are interested. It’s a little down from 2019, but it was also a smaller sample size (2019: 331; 2020: 209). I am hoping to get a bigger turnout this year (tell all your friends!) so that we can get a better picture. > > Let me know if there is any other data you would like pulled. > > Thanks! > Allison > > > > On Apr 8, 2021, at 9:43 AM, Julia Kreger wrote: > > > > Hey Allison, > > > > Metrics would be awesome and I'm just looking for the key high level > > adoption information as that is good to put into the presentation. > > > > -Julia > > > > On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote: > >> > >> Hi Julia, > >> > >> I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. > >> > >> Thanks! > >> Allison > >> > >> > >> > >> > >> > >> On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: > >> > >> Related, Is there 2020 user survey data available? > >> > >> On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org > >> wrote: > >> > >> > >> Hello ptls, > >> > >> > >> > >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > >> > >> > >> > >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > >> > >> > >> > >> Let me know if you have any other questions! 
> >> > >> > >> > >> Thank you for your participation, > >> > >> Helena > >> > >> > >> > >> > > > From Arkady.Kanevsky at dell.com Sun Apr 11 20:23:06 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:23:06 +0000 Subject: [Cinder][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Brian, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for cinder tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:25:13 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:25:13 +0000 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Adding comminuty From: Kanevsky, Arkady Sent: Sunday, April 11, 2021 3:25 PM To: 'johnsomor at gmail.com' Subject: [Designate][Interop] request for 15-30 min on Xena PTG for Interop John, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Dsignate tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:26:50 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:26:50 +0000 Subject: [Glance][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for glance tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:27:58 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:27:58 +0000 Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Brian, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Heat tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:30:24 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:30:24 +0000 Subject: [Keystone][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Kristi, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Keystone tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:31:28 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:31:28 +0000 Subject: [Manila][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Goutham, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Manila tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:32:55 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:32:55 +0000 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Brian, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for neutron tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:34:09 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:34:09 +0000 Subject: [Nova][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Balazs, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Nova tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:37:19 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:37:19 +0000 Subject: [Swift][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Tim, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Swift tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Sun Apr 11 23:27:55 2021 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 11 Apr 2021 18:27:55 -0500 Subject: [neutron] bug deputy report Abril 5th to 11th Message-ID: Hi, Here is this week's bugs deputy report: Critical ====== https://bugs.launchpad.net/neutron/+bug/1922563 [UT] py38 CI job failing frequently with TIMED_OUT. In progress. Proposed patch https://review.opendev.org/c/openstack/neutron/+/784771 High ==== https://bugs.launchpad.net/neutron/+bug/1922684 Functional dhcp agent tests fails to spawn metadata proxy. In progress. Proposed fix https://review.opendev.org/c/openstack/neutron/+/784903 https://bugs.launchpad.net/neutron/+bug/1923198 custom kill scripts don't works after migration to privsep. In progress. Proposed fix https://review.opendev.org/c/openstack/neutron/+/785638 https://bugs.launchpad.net/neutron/+bug/1923201 neutron-centos-8-tripleo-standalone in periodic queue runs Neutron from Victroria release. In progress. Proposed fix https://review.opendev.org/c/openstack/neutron/+/785660 Medium ====== https://bugs.launchpad.net/neutron/+bug/1922653 [L3][Port forwarding] multiple floating_ip:port to same internal fixed_ip:port (N-to-1 rule support). Waiting for owner, although it seems Liu Yulong might work on it https://bugs.launchpad.net/neutron/+bug/1922824 [ovn] external port always be scheduled on a single gateway. Needs owner https://bugs.launchpad.net/neutron/+bug/1922892 "ebtables-nft" returns error 4 when a new chain is created. . Needs owner https://bugs.launchpad.net/neutron/+bug/1922919 [FT] BaseOVSTestCase retrieving the wrong min BW queue/qos. In progress. Proposed patch https://review.opendev.org/c/openstack/neutron/+/785158 https://bugs.launchpad.net/neutron/+bug/1922934 [OVN] LSP register race condition with two controllers. In progress. Owner ralonsoh https://bugs.launchpad.net/neutron/+bug/1922923 OVS port issue. Liu Yulong suggested a solution. Awaiting update from submitter https://bugs.launchpad.net/neutron/+bug/1923083 python 3.9 failures. Confirmed. haleyb suggested a work around, which submitter reported as successful. Seems haleyb will work on a fix https://bugs.launchpad.net/neutron/+bug/1923161 DHCP notification could be optimized. In progress. Proposed patch: https://review.opendev.org/c/openstack/neutron/+/785581 RFE === https://bugs.launchpad.net/neutron/+bug/1922716 [RFE] BFD for BGP Dynamic Routing -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kazumasa.nomura.rx at hitachi.com Mon Apr 12 04:52:24 2021 From: kazumasa.nomura.rx at hitachi.com (=?iso-2022-jp?B?GyRCTG5CPE9CQDUbKEIgLyBOT01VUkEbJEIhJBsoQktBWlVNQVNB?=) Date: Mon, 12 Apr 2021 04:52:24 +0000 Subject: [cinder] How to post multiple patches. Message-ID: Hi everyone, Hitachi has developed the out-of-tree driver as Cinder driver. But we want to deprecate the out-of-tree driver and support only the in-tree driver. We need to submit about ten more patches(*1) for full features which the out-of-tree driver has such as Consistency Group and Volume Replication. In that case, we have two options: 1. Submit two or three patches at once. In other words, submit two or three patches to Xena, then submit another two or three patches after previous patches were merged, and so on. This may give reviewers the feeling of endless. 2. Submit all patches at once to Xena. This will give reviewers the information how many patches remains from the beginning, but many pathes may bother them. Does anyone have an opinion as to which option is better? Thanks, Kazumasa Nomura E-mail: kazumasa.nomura.rx at hitachi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Apr 12 06:21:09 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 12 Apr 2021 08:21:09 +0200 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: <2752308.ClrQMDxLba@p1> Hi, Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > Brian, > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for neutron tempest or > tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. I just added it to our etherpad https://etherpad.opendev.org/p/neutron-xena-ptg I will be working on schedule of the sessions later this week and I will let You know what timeslot this session with Interop WG will be. Please let me know if You have any preferences. We have our sessions scheduled: Monday 1300 - 1600 UTC Tuesday 1300 - 1600 UTC Thursday 1300 - 1600 UTC Friday 1300 - 1600 UTC Our time slots which are already booked are: - Monday 15:00 - 16:00 UTC - Thursday 14:00 - 15:30 UTC - Friday 14:00 - 15:00 UTC > > Thanks, > Arkady > > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From balazs.gibizer at est.tech Mon Apr 12 07:11:45 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Mon, 12 Apr 2021 09:11:45 +0200 Subject: [Nova][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Hi Arkady, What about Wednesday 14:00 - 15:00 UTC? We don't have to fill a whole hour of course. Cheers, gibi On Sun, Apr 11, 2021 at 20:34, "Kanevsky, Arkady" wrote: > Balazs, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > min on PTG meeting to go over Interop testing and any changes for > Nova tempest or tempest configuration in Wallaby cycle or changes > planned for Xena. 
> > Once on agenda one of the Interop WG person will attend and lead the > discussion. > > > > Thanks, > > Arkady > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > From ildiko.vancsa at gmail.com Mon Apr 12 10:50:32 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 12 Apr 2021 12:50:32 +0200 Subject: [edge][cinder][manila][swift][tripleo] Storage at the edge discussions at the PTG In-Reply-To: References: <0A7B1EBD-0715-43ED-B388-A2011D437DD0@gmail.com> Message-ID: <7361B0B1-10FB-4533-A168-2571BFFE39C7@gmail.com> > On Apr 8, 2021, at 19:39, John Fulton wrote: > > On Thu, Apr 8, 2021 at 1:21 PM Ildiko Vancsa wrote: > Hi, > > I’m reaching out to draw your attention to the Edge Computing Group sessions on the PTG in less than two weeks. > > We are still formalizing our agenda, but we have storage identified as one of the topics that the working group would like to discuss. It would be great to have the session also as a continuation to earlier discussions that we had on previous PTGs with relevant OpenStack project contributors. > > We have a few cross-community sessions scheduled already, but we still have some flexibility in our agenda to schedule this topic so the most people who are interested in participating can join. Our available options are: > > * Monday (April 19) between 1400 UTC and 1500 UTC > * Tuesday (April) between 1400 UTC and 1600 UTC > > I'm not available Monday but could join Tuesday. I'd be curious to hear what others are doing with Storage on the Edge and could share some info on how TripleO does it. Sounds good! We currently have storage scheduled for Tuesday. It may move within the 2-hour slot we have but I think we can consider the day fixed. If you or anyone has a time slot preference for the storage edge discussion next Tuesday please respond to this thread or reach out to me ASAP. Thanks, Ildikó (IRC: ildikov on Freenode) > > John > > > __Please let me know if you or your project would like to participate and if you have a time slot difference from the above.__ > > Thanks and Best Regards, > Ildikó > (IRC ildikov on Freenode) > > > From hjensas at redhat.com Mon Apr 12 11:03:08 2021 From: hjensas at redhat.com (Harald Jensas) Date: Mon, 12 Apr 2021 13:03:08 +0200 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: On 4/8/21 6:32 PM, James Slagle wrote: > > > On Thu, Apr 8, 2021 at 12:24 PM Marios Andreou > wrote: > > On Wed, Apr 7, 2021 at 7:55 PM John Fulton > wrote: > > > > On Wed, Apr 7, 2021 at 12:27 PM Marios Andreou > wrote: > >> > >> Hello TripleO o/ > >> > >> Thanks again to everybody who has volunteered to lead a session for > >> the coming Xena TripleO project teams gathering. > >> > >> I've had a go at the agenda [1] trying to keep it to max 4 or 5 > >> sessions per day with some breaks. > >> > >> Please review the slot assigned for your session at [1]. If that > time > >> is not ok then please let me know as soon as possible and > indicate if > >> you want it later or earlier or on any other day. > > > > > > On Monday I see: > > > > 1. STORAGE: 1430-1510 (ceph) > > 2. DF: 1510-1550 (ephemeral heat) > > 3. DF/Networking: 1600-1700 (ports v2 "no heat") > > > > If Harald and James are OK with it, could it be changed to the > following? > > > > A. DF: 1430-1510 (ephemeral heat) > > B. DF/Networking: 1510-1550 (ports v2 "no heat") > > C. 
STORAGE: 1600-1700 (ceph) > > > > I ask because a portion of C depends on B, so it would be helpful > to have that context first. If the presenters have conflicts > however, we don't need this change. > > > > ACK thanks John that totally makes sense... as just discussed on irc > [1] I've updated the schedule to reflect your proposal. > > I haven't heard back from slagle yet but cc'ing him here and if there > are any issues we can work them out > > > The change wfm, thanks. > Works for me too. -- Harald From toky0ghoul at yandex.com Mon Apr 12 12:01:23 2021 From: toky0ghoul at yandex.com (toky0) Date: Mon, 12 Apr 2021 12:01:23 +0000 Subject: MAAS dhcpd issue Message-ID: Hi, Just started a maas+juju deployment on bare metal. I’m facing some errors while PXE booting[1]. Any leads ? Regards, Sami [1] Mar 22 09:57:13 maas kernel: [ 1885.666813] audit: type=1400 audit(1616407033.279:95): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1131 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:57:22 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:0c:bb via eno1: network vlan-5002: no free leases Mar 22 09:57:37 maas systemd[1]: systemd-timedated.service: Succeeded. Mar 22 09:57:43 maas kernel: [ 1915.677372] audit: type=1400 audit(1616407063.291:96): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1132 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:58:13 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:13 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:15 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:15 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:17 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:17 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:21 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:21 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:59:13 maas kernel: [ 2005.666324] audit: type=1400 audit(1616407153.280:97): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:59:43 maas kernel: [ 2035.667450] audit: type=1400 audit(1616407183.280:98): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 From C-Albert.Braden at charter.com Mon Apr 12 12:54:40 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Mon, 12 Apr 2021 12:54:40 +0000 Subject: [EXTERNAL] MAAS dhcpd issue In-Reply-To: References: Message-ID: Apparmor is causing the errors. This Stack Exchange post explains how to read the error message: https://unix.stackexchange.com/questions/116591/why-am-i-getting-apparmor-error-messages-in-the-syslog-about-ntp-and-ldap -----Original Message----- From: toky0 Sent: Monday, April 12, 2021 8:01 AM To: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] MAAS dhcpd issue CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi, Just started a maas+juju deployment on bare metal. I’m facing some errors while PXE booting[1]. Any leads ? 
Regards, Sami [1] Mar 22 09:57:13 maas kernel: [ 1885.666813] audit: type=1400 audit(1616407033.279:95): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1131 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:57:22 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:0c:bb via eno1: network vlan-5002: no free leases Mar 22 09:57:37 maas systemd[1]: systemd-timedated.service: Succeeded. Mar 22 09:57:43 maas kernel: [ 1915.677372] audit: type=1400 audit(1616407063.291:96): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1132 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:58:13 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:13 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:15 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:15 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:17 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:17 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:21 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:21 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:59:13 maas kernel: [ 2005.666324] audit: type=1400 audit(1616407153.280:97): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:59:43 maas kernel: [ 2035.667450] audit: type=1400 audit(1616407183.280:98): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From bkslash at poczta.onet.pl Mon Apr 12 13:24:06 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Mon, 12 Apr 2021 15:24:06 +0200 Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer Message-ID: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> Hi, Im trying to get metrics from octavia’s load balancer, but can’t get any (gnocchi metric list | grep loadbalancer not returning anything). How should I configure ceilometer to get metrics from octavia? Ceilometer asks neutron for load balancer metrics, and neutron responses „resource cannot be found” (and that is obvious, because Neutron LBaaS service is deprecated). How to force neutron to get these resources from Octavia? 
I’ve tried to use
[service_providers]
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
in neutron.conf, but it doesn’t work either…

Best regards
Adam

From radoslaw.piliszek at gmail.com Mon Apr 12 13:24:08 2021
From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=)
Date: Mon, 12 Apr 2021 15:24:08 +0200
Subject: [EXTERNAL] MAAS dhcpd issue
In-Reply-To:
References:
Message-ID:

Well, to me it looks like the apparmor errors are irrelevant.
From this log, I would assume that the dhcp client (PXE) has issues
with the DHCPOFFER that it is receiving (or perhaps it cannot see it)
as it sends DHCPDISCOVER again and again.
Can you squeeze anything from the faulty PXE node?
Did you manage to PXE boot any other machine?

-yoctozepto

On Mon, Apr 12, 2021 at 2:55 PM Braden, Albert wrote:
>
> Apparmor is causing the errors. This Stack Exchange post explains how to read the error message:
>
> https://unix.stackexchange.com/questions/116591/why-am-i-getting-apparmor-error-messages-in-the-syslog-about-ntp-and-ldap
>
> -----Original Message-----
> From: toky0
> Sent: Monday, April 12, 2021 8:01 AM
> To: openstack-discuss at lists.openstack.org
> Subject: [EXTERNAL] MAAS dhcpd issue
>
> CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance.
>
> Hi,
>
> Just started a maas+juju deployment on bare metal. I’m facing some errors while PXE booting[1].
> Any leads ?
>
> Regards,
> Sami
>
> [1]
> Mar 22 09:57:13 maas kernel: [ 1885.666813] audit: type=1400 audit(1616407033.279:95): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1131 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
> Mar 22 09:57:22 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:0c:bb via eno1: network vlan-5002: no free leases
> Mar 22 09:57:37 maas systemd[1]: systemd-timedated.service: Succeeded.
> Mar 22 09:57:43 maas kernel: [ 1915.677372] audit: type=1400 audit(1616407063.291:96): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1132 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 > Mar 22 09:58:13 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:13 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:15 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:15 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:17 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:17 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:21 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:21 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 > Mar 22 09:59:13 maas kernel: [ 2005.666324] audit: type=1400 audit(1616407153.280:97): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 > Mar 22 09:59:43 maas kernel: [ 2035.667450] audit: type=1400 audit(1616407183.280:98): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 > > > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. 
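Following up on the MAAS thread above: one quick way to confirm whether the DHCPOFFERs ever reach the wire is to capture the DHCP exchange on the MAAS-facing interface while the node PXE boots. A hypothetical scapy sketch (assumptions: scapy is installed via "pip install scapy", the script runs as root on the MAAS host, and the interface really is eno1 as in the log):

#!/usr/bin/env python3
# Hypothetical helper: print every DHCP message type seen on the MAAS-facing
# interface, to check whether the OFFERs for the looping node actually go out.
from scapy.all import DHCP, Ether, sniff

DHCP_TYPES = {1: "DISCOVER", 2: "OFFER", 3: "REQUEST", 5: "ACK", 6: "NAK"}

def show_dhcp(pkt):
    if DHCP in pkt:
        # DHCP options come back as a list of (name, value) tuples plus padding.
        opts = dict(o for o in pkt[DHCP].options if isinstance(o, tuple))
        mtype = DHCP_TYPES.get(opts.get("message-type"), opts.get("message-type"))
        print(pkt[Ether].src, "->", pkt[Ether].dst, "DHCP", mtype)

# The BPF filter limits the capture to BOOTP/DHCP traffic; the interface name is
# an assumption taken from the dhcpd log above.
sniff(iface="eno1", filter="udp and (port 67 or port 68)", prn=show_dhcp, store=False)

If the OFFERs show up in the capture but the node keeps cycling DISCOVER, the problem is more likely on the PXE firmware or VLAN side than in dhcpd itself.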
From oliver.wenz at dhbw-mannheim.de Mon Apr 12 13:28:15 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Mon, 12 Apr 2021 15:28:15 +0200 (CEST) Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <1208016377.154990.1618234095430@ox.dhbw-mannheim.de> Hi Dmitriy, I checked nginx logs on the keystone container and there was no obvious error: ontainer-6f64e2e1 nginx: 192.168.110.211 - - [12/Apr/2021:13:25:56 +0000] "HEAD / HTTP/1.0" 300 0 "-" "osa-haproxy-healthcheck" Apr 12 13:26:07 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:07 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 316 "-" "openstack_auth keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:07 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:07 +0000] "POST /v3/auth/tokens HTTP/1.1" 401 109 "-" "openstack_auth keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:07 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:07 +0000] "GET /v3/users/e4e88b0e800e4a79905a586738c32bf1/projects HTTP/1.1" 200 768 "-" "python-keystoneclient" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "openstack_auth keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "GET / HTTP/1.1" 300 270 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "GET / HTTP/1.1" 300 270 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.211 - - [12/Apr/2021:13:26:08 +0000] "HEAD / HTTP/1.0" 300 0 "-" "osa-haproxy-healthcheck" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.100 - - [12/Apr/2021:13:26:08 +0000] "GET /v3/auth/tokens HTTP/1.1" 200 6779 "-" "python-keystoneclient" Apr 12 13:26:18 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.28 - - [12/Apr/2021:13:26:18 +0000] "GET /v3/auth/tokens HTTP/1.1" 200 6779 "-" "python-keystoneclient" Apr 12 13:26:18 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.100 - - [12/Apr/2021:13:26:18 +0000] "GET /v3/auth/tokens HTTP/1.1" 200 6779 
"-" "python-keystoneclient" Apr 12 13:26:19 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:19 +0000] "GET /v3/users/e4e88b0e800e4a79905a586738c32bf1/projects HTTP/1.1" 200 768 "-" "python-keystoneclient" Apr 12 13:26:20 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:20 +0000] "GET /v3 HTTP/1.1" 200 255 "-" "openstack_dashboard keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:20 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.211 - - [12/Apr/2021:13:26:20 +0000] "HEAD / HTTP/1.0" 300 0 "-" "osa-haproxy-healthcheck" Apr 12 13:26:21 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:21 +0000] "GET /v3/users/e4e88b0e800e4a79905a586738c32bf1/projects HTTP/1.1" 200 768 "-" "python-keystoneclient" Apr 12 13:26:24 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:24 +0000] "GET / HTTP/1.1" 300 270 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:24 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:24 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:25 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.100 - - [12/Apr/2021:13:26:25 +0000] "GET /v3/auth/tokens HTTP/1.1" 200 6779 "-" "python-keystoneclient" Apr 12 13:26:27 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:27 +0000] "GET / HTTP/1.1" 300 270 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:28 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:28 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" I also noticed, that glance logs show the keystone_authtoken message even when I successfully create a snapshot (e.g. for a cirros instance). So could this be a nova problem afterall? I'm confused why there's a NotImplementedError in the nova logs: http://paste.openstack.org/show/804398/ Kind regards, Oliver > ------------------------------ > > Message: 2 > Date: Fri, 09 Apr 2021 08:24:53 +0300 > From: Dmitriy Rabotyagov > To: "openstack-discuss at lists.openstack.org" > > Subject: Re: [glance][openstack-ansible] Snapshots disappear during > saving > Message-ID: <500221617945664 at mail.yandex.ru> > Content-Type: text/plain; charset="utf-8" > > An HTML attachment was scrubbed... > URL: > > From mnaser at vexxhost.com Mon Apr 12 13:33:23 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 12 Apr 2021 09:33:23 -0400 Subject: [sdk]: compute service create_server method, how to create multiple servers In-Reply-To: References: Message-ID: Hi Catherine, Have a look at min_count option :) Thanks Mohammed On Fri, Apr 9, 2021 at 5:12 PM dmeng wrote: > > Hello there, > > Hope this email finds you well. 
> > We are currently using the openstacksdk for developing our product, and have a question about the openstacksdk compute service create_server() method. We are wondering if the "max_count" attribute is supported by the create_server() method? We tried to create multiple servers by setting the max_count value, but only one server has been created. > > While when we use the python-novaclient package, novaclient.servers.create() method has the attribute max_count which allows creating multiple servers at once if set the value. So we would like to know is there a similar attribute like "max_count" that could allow us to create multiple servers at once in openstacksdk? > > Thanks and have a great day! > > Catherine -- Mohammed Naser VEXXHOST, Inc. From smooney at redhat.com Mon Apr 12 13:46:59 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 12 Apr 2021 14:46:59 +0100 Subject: [sdk]: compute service create_server method, how to create multiple servers In-Reply-To: References: Message-ID: <213fb67d-e907-401f-ca75-6555004d1261@redhat.com> On 12/04/2021 14:33, Mohammed Naser wrote: > Hi Catherine, > > Have a look at min_count option :) min_count is what you want yes altough nova generally discuorages use fo our server multi create feature and advise peopel to make multiple independent boot calls instead. the cageate to that is if you are using server groups and affinity or anti affinity. in that case multi create makes sense to use but if you can boot them serially in seperate boot requsts that is generally better. if you ask for min_count 4 and 3 boot successfully and the 4th fails we will remove all 4 instances and set them to error that is not necessary the behaviour your want so you should really orchestrate this your self and not realy on the basic support in nova if you want anything more complex. > > Thanks > Mohammed > > On Fri, Apr 9, 2021 at 5:12 PM dmeng wrote: >> Hello there, >> >> Hope this email finds you well. >> >> We are currently using the openstacksdk for developing our product, and have a question about the openstacksdk compute service create_server() method. We are wondering if the "max_count" attribute is supported by the create_server() method? We tried to create multiple servers by setting the max_count value, but only one server has been created. >> >> While when we use the python-novaclient package, novaclient.servers.create() method has the attribute max_count which allows creating multiple servers at once if set the value. So we would like to know is there a similar attribute like "max_count" that could allow us to create multiple servers at once in openstacksdk? >> >> Thanks and have a great day! >> >> Catherine > > From rosmaita.fossdev at gmail.com Mon Apr 12 15:53:43 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Apr 2021 11:53:43 -0400 Subject: [Cinder][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: On 4/11/21 4:23 PM, Kanevsky, Arkady wrote: > Brian, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min > on PTG meeting to go over Interop testing and any changes for cinder > tempest or tempest configuration in Wallaby cycle or changes planned for > Xena. Hi Arkady, I've virtually penciled you in for 1430-1500 on Tuesday 20 April. > Once on agenda one of the Interop WG person will attend and lead the > discussion. Sounds good. 
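Back on the [sdk] create_server thread above, a minimal multi-create sketch through openstacksdk might look like the following. This is only an illustration: it assumes the installed SDK version passes min_count/max_count through to Nova (per Mohammed's pointer), and the cloud name, image, flavor and network UUID are placeholders, not values from the thread.

#!/usr/bin/env python3
# Hypothetical sketch: ask Nova for several identical servers in one request.
import openstack

conn = openstack.connect(cloud="mycloud")        # clouds.yaml entry (placeholder)

image = conn.compute.find_image("ubuntu-20.04")  # placeholder image name
flavor = conn.compute.find_flavor("m1.small")    # placeholder flavor name

server = conn.compute.create_server(
    name="worker",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": "NETWORK-UUID"}],         # placeholder network UUID
    min_count=3,                                 # boot at least 3 or fail the request
    max_count=3,                                 # and at most 3
)
# Nova returns a single server record for a multi-create request; the siblings
# can be listed afterwards, e.g. with conn.compute.servers().

As Sean notes, separate boot requests orchestrated by the caller are usually the safer pattern, since a partial multi-create failure puts all of the requested instances into ERROR.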
I've scheduled 30 min instead of 15 because it would be helpful for the cinder team to hear a quick synopsis of the current goals of the Interop WG and what the aim of the project is before we discuss the specifics of W and X. cheers, brian > > Thanks, > > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 > From johnsomor at gmail.com Mon Apr 12 15:57:24 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 12 Apr 2021 08:57:24 -0700 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Hi Arkady, I have added Interop to the Designate topics list (https://etherpad.opendev.org/p/xena-ptg-designate) and will schedule a slot this week when I put a rough agenda together. Thanks, Michael On Sun, Apr 11, 2021 at 1:36 PM Kanevsky, Arkady wrote: > > Adding comminuty > > > > From: Kanevsky, Arkady > Sent: Sunday, April 11, 2021 3:25 PM > To: 'johnsomor at gmail.com' > Subject: [Designate][Interop] request for 15-30 min on Xena PTG for Interop > > > > John, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Dsignate tempest or tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on agenda one of the Interop WG person will attend and lead the discussion. > > > > Thanks, > > > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > From rosmaita.fossdev at gmail.com Mon Apr 12 16:18:38 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Apr 2021 12:18:38 -0400 Subject: [cinder] How to post multiple patches. In-Reply-To: References: Message-ID: <42af7b9a-73ce-4780-b787-c36901f5cc1a@gmail.com> On 4/12/21 12:52 AM, 野村和正 / NOMURA,KAZUMASA wrote: > Hi everyone, > > Hitachi has developed the out-of-tree driver as Cinder driver. But we > want to deprecate the out-of-tree driver and support only the in-tree > driver. > > We need to submit about ten more patches(*1) for full features which the > out-of-tree driver has such as Consistency Group and Volume Replication. > > In that case, we have two options: > > 1. Submit two or three patches at once. In other words, submit two or > three patches to Xena, then submit another two or three patches after > previous patches were merged, and so on. This may give reviewers the > feeling of endless. > > 2. Submit all patches at once to Xena. This will give reviewers the > information how many patches remains from the beginning, but many pathes > may bother them. > > Does anyone have an opinion as to which option is better? My opinion is that option #1 is better, because as the initial patches are reviewed, issues will come up in review that you will be able to apply proactively to later patches on your own without reviewers having to bring them up, which will result in a better experience for all concerned. Also, we can have an idea of how many patches to expect (without your filing them all at once) if you file blueprints in Launchpad for each feature. Please name them 'hitachi-consistency-group-support', 'hitachi-volume-replication', etc., so it's easy to see what driver they're for. The blueprint doesn't need much detail; it's primarily for tracking purposes. 
You can see some examples here: https://blueprints.launchpad.net/cinder/wallaby cheers, brian > > Thanks, > > Kazumasa Nomura > > E-mail: kazumasa.nomura.rx at hitachi.com > > From rosmaita.fossdev at gmail.com Mon Apr 12 16:34:25 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Apr 2021 12:34:25 -0400 Subject: [ops][cinder][nova] os-brick upcoming releases In-Reply-To: References: Message-ID: <7547ddfe-1182-8f72-fd3a-d13563fceb19@gmail.com> Just wanted to update this message: the os-brick releases mentioned below have occurred. On 3/30/21 11:24 AM, Brian Rosmaita wrote: > Hello operators, > > You may have heard about a potential data-loss bug [0] that was recently > discovered.  It has been fixed in the upcoming wallaby release and we > are planning to backport to all stable branches and do new os-brick > releases from the releasable stable branches. > > In the meantime, the bug occurs if the multipath configuration option on > a compute is changed while volumes are attached to instances on that > compute.  The possible data loss may occur when the volumes are detached > (migration, volume-detach, etc.).  Thus, before the new os-brick > releases are available, the issue can nonetheless be averted by not > making such a configuration change under those circumstances. > > The new os-brick releases will be: > - victoria: 4.0.3 Tagged on 2021-04-01 17:20:07 +0000 > - ussuri: 3.0.6 Tagged on 2021-04-06 13:50:04 +0000 > - train: 2.10.6 Tagged on 2021-04-08 10:49:41 +0000 > The stein, rocky, and queens branches are in Extended Maintenance mode > and are no longer released from, but critical fixes are backported to > them when possible, though it may take a while before these are merged. > > > [0] https://launchpad.net/bugs/1921381 From fungi at yuggoth.org Mon Apr 12 16:36:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 12 Apr 2021 16:36:35 +0000 Subject: [oslo][security-sig] Please revisit your open vulnerability report In-Reply-To: <53ba75c8-dd82-c470-e564-d4dedfb5090a@nemebean.com> References: <20210218144904.xeek6zwlyntm24u5@yuggoth.org> <3c103743-898f-79e1-04cc-2f97a52fece3@nemebean.com> <20210218170318.kysdpzibsqnferj5@yuggoth.org> <203cbbfd-9ca8-3f0c-83e4-6d57588103cf@nemebean.com> <20210218191305.5psn6p3kp6tlexoq@yuggoth.org> <53ba75c8-dd82-c470-e564-d4dedfb5090a@nemebean.com> Message-ID: <20210412163635.nq45un4fw25m5cwr@yuggoth.org> On 2021-03-26 16:52:52 -0500 (-0500), Ben Nemec wrote: [...] > I have added the openstack-vuln-mgmt team to most of the Oslo > projects. Great, happy to help there. > I apparently don't have permission to change settings in > oslo.policy, This is maintained by oslo-policy-core which has Adam as its owner and only administrator, so he's currently the only one who can add more members to that group though any one of the group members could help us by switching the oslo.core maintainer to some other group owned by openstack-admins if Adam can't be reached to make openstack-admins the owner of oslo-policy-core. > oslo.windows, Similarly, maintainer is oslo-windows-drivers which has Claudiu as its owner and only administrator, but the project maintainer could optionally be adjusted to another group by Alessandro if Claudiu can't be reached. > and taskflow, Maintained by the taskflow-dev group for which Joshua is the owner and only administrator, but there are a lot of group members one of whom could switch the project maintainer for you. > so I will need help with that. 
After going through all of the > projects, my guess is that the individual people who have access > to the private security bugs are the ones who created the project > in the first place. I guess that's fine, but there's an argument > to be made that some of those should be cleaned up too. In all three cases, I expect the people who have access to these are no longer active in OpenStack, so yes getting them fixed would be a "good idea." > I also noticed that oslo-coresec is not listed in most of the > projects. Is there any sort of global setting that should give > coresec memebers access to private security bugs, or do I need to > add that to each project? You'd have to add it separately to each of them, yes. Though for any with VMT oversight, we suggest you not do that and instead let one of the vulnerability coordinators subscribe your security reviewer group after we've confirmed the report isn't misdirected at the wrong project, in order to minimize unnecessary initial spread of sensitive information. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Mon Apr 12 17:00:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Apr 2021 13:00:51 -0400 Subject: [infra][cinder][neutron][qa] neutron-grenade failures on stable/train Message-ID: The neutron-grenade job on stable/train has been mostly failing since 8 April: https://zuul.opendev.org/t/openstack/builds?job_name=neutron-grenade&branch=stable%2Ftrain I spot-checked a few and it looks like the culprit is "Could not open requirements file: [Errno 2] No such file or directory: '/opt/stack/requirements/upper-constraints.txt'". See: - https://zuul.opendev.org/t/openstack/build/421f7d57bc234119963edf3e9101ca43/log/logs/grenade.sh.txt#28855 - https://zuul.opendev.org/t/openstack/build/915395d34fa34e99a6ec544ad1d3141b/log/logs/grenade.sh.txt#28849 - https://zuul.opendev.org/t/openstack/build/9eaacc244047408d9e2d9ed09529ff3f/log/logs/grenade.sh.txt#28855 The last 2 neutron-grenade jobs on openstack/devstack have passed, but there don't look to have been any changes in the devstack repo stable/train since 10 March, so I'm not sure if that was luck or if the QA team has made a change to get the job working. Any ideas? thanks, brian From ltoscano at redhat.com Mon Apr 12 17:05:26 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Mon, 12 Apr 2021 19:05:26 +0200 Subject: [infra][cinder][neutron][qa] neutron-grenade failures on stable/train In-Reply-To: References: Message-ID: <4330364.cEBGB3zze1@whitebase.usersys.redhat.com> On Monday, 12 April 2021 19:00:51 CEST Brian Rosmaita wrote: > The neutron-grenade job on stable/train has been mostly failing since 8 > April: > > https://zuul.opendev.org/t/openstack/builds?job_name=neutron-grenade&branch= > stable%2Ftrain > > I spot-checked a few and it looks like the culprit is "Could not open > requirements file: [Errno 2] No such file or directory: > '/opt/stack/requirements/upper-constraints.txt'". 
See: > - > https://zuul.opendev.org/t/openstack/build/421f7d57bc234119963edf3e9101ca43/ > log/logs/grenade.sh.txt#28855 - > https://zuul.opendev.org/t/openstack/build/915395d34fa34e99a6ec544ad1d3141b/ > log/logs/grenade.sh.txt#28849 - > https://zuul.opendev.org/t/openstack/build/9eaacc244047408d9e2d9ed09529ff3f/ > log/logs/grenade.sh.txt#28855 > > The last 2 neutron-grenade jobs on openstack/devstack have passed, but > there don't look to have been any changes in the devstack repo > stable/train since 10 March, so I'm not sure if that was luck or if the > QA team has made a change to get the job working. > > Any ideas? https://review.opendev.org/c/openstack/grenade/+/785831 and all its backports (actually in reverse order) are in the gate queue. Ciao -- Luigi From iurygregory at gmail.com Mon Apr 12 17:08:00 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 12 Apr 2021 19:08:00 +0200 Subject: [ironic] No Review Jam Tomorrow Message-ID: Hello ironicers! During the upstream meeting today we decided to skip the Review Jam that will happen tomorrow, since we don't have any topics that would require attention. We also skipped the review jam from today (we totally forgot about it). -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Apr 12 17:09:19 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 12 Apr 2021 12:09:19 -0500 Subject: [infra][cinder][neutron][qa] neutron-grenade failures on stable/train In-Reply-To: References: Message-ID: <178c70f1d0e.f997989d279125.7112266830106959000@ghanshyammann.com> ---- On Mon, 12 Apr 2021 12:00:51 -0500 Brian Rosmaita wrote ---- > The neutron-grenade job on stable/train has been mostly failing since 8 > April: > > https://zuul.opendev.org/t/openstack/builds?job_name=neutron-grenade&branch=stable%2Ftrain > > I spot-checked a few and it looks like the culprit is "Could not open > requirements file: [Errno 2] No such file or directory: > '/opt/stack/requirements/upper-constraints.txt'". See: > - > https://zuul.opendev.org/t/openstack/build/421f7d57bc234119963edf3e9101ca43/log/logs/grenade.sh.txt#28855 > - > https://zuul.opendev.org/t/openstack/build/915395d34fa34e99a6ec544ad1d3141b/log/logs/grenade.sh.txt#28849 > - > https://zuul.opendev.org/t/openstack/build/9eaacc244047408d9e2d9ed09529ff3f/log/logs/grenade.sh.txt#28855 > > The last 2 neutron-grenade jobs on openstack/devstack have passed, but > there don't look to have been any changes in the devstack repo > stable/train since 10 March, so I'm not sure if that was luck or if the > QA team has made a change to get the job working. There were multiple changes merged in devstack and grenade for tempest venv constraints at the same time and there were a few issues in sourcing the stackrc for checking the tempest venv constraints, let's wait for the below fixes to merged to get all cases green -https://review.opendev.org/q/If5f14654ab9aee2a140bbfb869b50d63cb289fdf -gmann > > Any ideas? 
> > > thanks, > brian > > From iurygregory at gmail.com Mon Apr 12 17:13:12 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 12 Apr 2021 19:13:12 +0200 Subject: [ironic] Meetings next week (19-23 April) Message-ID: Hello ironicers, Since next week is the PTG, we decided during our upstream meeting today to skip the following meetings: - Upstream Meeting (Monday) - Review Jams (Monday/Tuesday) - SPUC in the APAC time (Friday) because it overlaps with the PTG. Thank you =) -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Apr 12 19:40:59 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 12 Apr 2021 21:40:59 +0200 Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer In-Reply-To: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> References: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> Message-ID: On 4/12/21 3:24 PM, Adam Tomas wrote: > Hi, > Im trying to get metrics from octavia’s load balancer, but can’t get any (gnocchi metric list | grep loadbalancer not returning anything). How should I configure ceilometer to get metrics from octavia? Ceilometer asks neutron for load balancer metrics, and neutron responses „resource cannot be found” (and that is obvious, because Neutron LBaaS service is deprecated). How to force neutron to get these resources from Octavia? > I’ve tried to use > [service_providers] > service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default > in neutron.conf, but it doesn’t work either… > > Best regards > Adam > > Hi Adam, It's up to Ceilometer to report it. Do create the resource types, add this to /etc/ceilometer/gnocchi_resources.yaml (note: if you don't have such a file in /etc/ceilometer, copy it there from somewhere below /usr/lib/python3/dist-packages/ceilometer): - resource_type: loadbalancer metrics: network.services.lb.outgoing.bytes: network.services.lb.incoming.bytes: network.services.lb.pool: network.services.lb.listener: network.services.lb.member: network.services.lb.health_monitor: network.services.lb.loadbalancer: network.services.lb.total.connections: network.services.lb.active.connections: Then do a ceilometer db_sync to populate the Gnocchi resource types. I hope this helps, Cheers, Thomas Goirand (zigo) From allison at openstack.org Mon Apr 12 19:59:45 2021 From: allison at openstack.org (Allison Price) Date: Mon, 12 Apr 2021 14:59:45 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: Hi Thomas, Yes, we have been exploring free software alternatives like Jitsi. We are actually using Jitsi for our upcoming PTG for almost half of the project teams. Researching the best solution that has the most accessibility for the global community is an ongoing initiative and we are trying to identify and implement the tools that make sense for the different use cases based on our experience. As we host more community meetings and continue our search of other tools (including some test runs), the tool may change, but for now we are planning to move forward with Zoom so that we can stream to YouTube and record for community members in different time zones. 
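For the ceilometer/octavia answer above, here is the same gnocchi_resources.yaml snippet laid out with the nesting that file expects (the indentation is assumed from the stock gnocchi_resources.yaml shipped with ceilometer; the metric names are exactly the ones listed above):

- resource_type: loadbalancer
  metrics:
    network.services.lb.outgoing.bytes:
    network.services.lb.incoming.bytes:
    network.services.lb.pool:
    network.services.lb.listener:
    network.services.lb.member:
    network.services.lb.health_monitor:
    network.services.lb.loadbalancer:
    network.services.lb.total.connections:
    network.services.lb.active.connections:

As noted above, a db sync is still needed afterwards so the new resource type is created in Gnocchi.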
We will continue to keep the mailing list updated if we switch tools / move in a different direction. Appreciate your patience as we continue to navigate virtual meetings and events. Cheers, Allison > On Apr 10, 2021, at 11:42 AM, Thomas Goirand wrote: > > Hi, > > Thanks a lot for the initiative, I very much enjoy the updates each > cycle. However... > > On 4/7/21 8:37 PM, helena at openstack.org wrote: >> Hello ptls, >> >> The community meeting for the Wallaby release will be next Thursday, >> April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live >> session via Zoom as well as live-streamed to YouTube. > > I'm sorry, but as nothing changes, I have to state it again. > > https://www.zdnet.com/article/critical-zoom-vulnerability-triggers-remote-code-execution-without-user-input/ > > And that's not the first time. > > There's free software alternatives (like Jitsi) and the tooling to > deploy them are also available [1]. It has been proven to work very well > and scale nicely with thousands of viewers. > > I regret that I'm the only person protesting about Zoom... > > Cheers, > > Thomas Goirand (zigo) > > [1] https://debconf-video-team.pages.debian.net/docs/ > From zigo at debian.org Mon Apr 12 20:06:33 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 12 Apr 2021 22:06:33 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <6b0c7754-e237-10a4-d924-749aebead2dd@debian.org> Hi Allison, Thanks for the update. On 4/12/21 9:59 PM, Allison Price wrote: > Hi Thomas, > > for now we are planning to move forward with Zoom so that we can stream to YouTube and record for community members in different time zones. FYI, the Debian Video (for online Debconf /Mini-Debconf) can stream to the web or to VLC to a (very) large audience, the output of a Jitsi meeting. That proved to work perfectly for the last summer Debconf. Maybe you could dig into it? Of course, the video gets also recorded, so you may later upload it to Youtube / Peertube... Hoping this helps. > We will continue to keep the mailing list updated if we switch tools / move in a different direction. Appreciate your patience as we continue to navigate virtual meetings and events. > > Cheers, > Allison Cheers, Thomas Goirand (zigo) From allison at openstack.org Mon Apr 12 20:09:20 2021 From: allison at openstack.org (Allison Price) Date: Mon, 12 Apr 2021 15:09:20 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <6b0c7754-e237-10a4-d924-749aebead2dd@debian.org> References: <1617820623.770226846@apps.rackspace.com> <6b0c7754-e237-10a4-d924-749aebead2dd@debian.org> Message-ID: <9CBABC7C-18EC-4670-BEF3-B0C0FC391FBB@openstack.org> > On Apr 12, 2021, at 3:06 PM, Thomas Goirand wrote: > > Hi Allison, > > Thanks for the update. > > On 4/12/21 9:59 PM, Allison Price wrote: >> Hi Thomas, >> >> for now we are planning to move forward with Zoom so that we can stream to YouTube and record for community members in different time zones. > > FYI, the Debian Video (for online Debconf /Mini-Debconf) can stream to > the web or to VLC to a (very) large audience, the output of a Jitsi > meeting. That proved to work perfectly for the last summer Debconf. > Maybe you could dig into it? > > Of course, the video gets also recorded, so you may later upload it to > Youtube / Peertube... > > Hoping this helps. That is really helpful - I’ll share with the team and share any questions we may have along the way. 
> >> We will continue to keep the mailing list updated if we switch tools / move in a different direction. Appreciate your patience as we continue to navigate virtual meetings and events. >> >> Cheers, >> Allison > > Cheers, > > Thomas Goirand (zigo) > From victoria at vmartinezdelacruz.com Mon Apr 12 20:49:30 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Mon, 12 Apr 2021 22:49:30 +0200 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: <20210407194436.vbtmfwts3r7ighh3@barron.net> References: <20210407194436.vbtmfwts3r7ighh3@barron.net> Message-ID: +1! Happy to see these proposals! Thanks Vida and Liron for all your contributions! On Wed, Apr 7, 2021 at 9:46 PM Tom Barron wrote: > Ditto, including the big thanks. > > On 07/04/21 16:04 -0300, Carlos Silva wrote: > >Big +1! > > > >Thank you, Liron and Vida! :) > > > >Em qua., 7 de abr. de 2021 às 15:40, Goutham Pacha Ravi < > >gouthampravi at gmail.com> escreveu: > > > >> Hello Zorillas, > >> > >> Vida's been our bug czar since the Ussuri release and she's > >> conceptualized and executed our successful bug triage strategy. She > >> has also painstakingly organized several documentation and code bug > >> squash events and kept the pulse on multi-release efforts. She's > >> taught me a lot about project management and you can see tangible > >> results here, I suppose :) > >> > >> Liron's fixed a lot of test code bugs and covered some old and > >> important test gaps over the past few releases. He's driving > >> standardization of the tempest plugin and bringing in best practices > >> from tempest, refstack and elsewhere into our testing. It's always a > >> pleasure to work with Liron since he's happy to provide and welcome > >> feedback. > >> > >> More recently, Liron and Vida have enabled us to work with the > >> InteropWG and define refstack guidelines. They've also gotten us > >> closer to members from the QA community who they work with more > >> closely downstream. In short, they bring in different perspectives > >> while also espousing the team's core values. So I'd like to propose > >> their addition to the manila-tempest-plugin-core team. > >> > >> Please give me your +/- 1s for this proposal. > >> > >> Thanks, > >> Goutham > >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Apr 12 21:35:47 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 12 Apr 2021 16:35:47 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 15th at 1500 UTC Message-ID: <178c8031241.1163a8d15287294.6003502774058757783@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for April 15th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, April 14th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From jungleboyj at gmail.com Tue Apr 13 01:43:45 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 12 Apr 2021 20:43:45 -0500 Subject: [cinder] How to post multiple patches. 
In-Reply-To: <42af7b9a-73ce-4780-b787-c36901f5cc1a@gmail.com> References: <42af7b9a-73ce-4780-b787-c36901f5cc1a@gmail.com> Message-ID: <98285557-7db7-592a-c160-b6dad4971a1e@gmail.com> On 4/12/2021 11:18 AM, Brian Rosmaita wrote: > On 4/12/21 12:52 AM, 野村和正 / NOMURA,KAZUMASA wrote: >> Hi everyone, >> >> Hitachi has developed the out-of-tree driver as Cinder driver. But >> wewant to deprecate the out-of-tree driver and support only the >> in-treedriver. >> >> We need to submit about ten more patches(*1) for full features which >> theout-of-tree driver has such as Consistency Group and Volume >> Replication. >> >> In that case, we have two options: >> >> 1. Submit two or three patches at once. In other words, submit two >> orthree patches to Xena, then submit another two or three patches >> afterprevious patches were merged, and so on. This may give reviewers >> thefeeling of endless. >> >> 2. Submit all patches at once to Xena. This will give reviewers >> theinformation how many patches remains from the beginning, but many >> pathesmay bother them. >> >> Does anyone have an opinion as to which option is better? > > My opinion is that option #1 is better, because as the initial patches > are reviewed, issues will come up in review that you will be able to > apply proactively to later patches on your own without reviewers > having to bring them up, which will result in a better experience for > all concerned. > > Also, we can have an idea of how many patches to expect (without your > filing them all at once) if you file blueprints in Launchpad for each > feature.  Please name them 'hitachi-consistency-group-support', > 'hitachi-volume-replication', etc., so it's easy to see what driver > they're for.  The blueprint doesn't need much detail; it's primarily > for tracking purposes. You can see some examples here: >   https://blueprints.launchpad.net/cinder/wallaby > > I concur with Brian.  I think doing a few at a time will be less likely to overwhelm the review team and it will help to prevent repeated comments in subsequent patches if you are able to proactively fix the subsequent patches before they are submitted. Thanks for seeking input on this! Jay > cheers, > brian > >> >> Thanks, >> >> Kazumasa Nomura >> >> E-mail: >> kazumasa.nomura.rx at hitachi.com >> > > From gouthampravi at gmail.com Tue Apr 13 05:21:48 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 12 Apr 2021 22:21:48 -0700 Subject: [Manila][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: On Sun, Apr 11, 2021 at 1:31 PM Kanevsky, Arkady wrote: > Goutham, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on > PTG meeting to go over Interop testing and any changes for Manila tempest > or tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on agenda one of the Interop WG person will attend and lead the > discussion. > Thank you for your email Arkady. We’ll add this to the agenda - I’ll work out the schedule in a couple of days, please stay tuned for a specific time/day slot. In the meanwhile, happy to accommodate a recommendation if you have one. > > Thanks, > > Arkady > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ykarel at redhat.com Tue Apr 13 05:57:52 2021 From: ykarel at redhat.com (Yatin Karel) Date: Tue, 13 Apr 2021 11:27:52 +0530 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Hi Ruslanas, On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis wrote: > > Hi Yatin, > > I have spotted that version of puppet-tripleo, but even after downgrade I had/have same issue. should I downgrade even more? :) OR You know when fixed version might get in for production centos ussuri release repo? > I have requested the tag release of puppet-neutron to clear this https://review.opendev.org/c/openstack/releases/+/786006. Once it's merged it can be included in centos ussuri release repo, RDO bots will take care of it. If you want to test before it's released you can pick puppet-neutron from RDO trunk repo[1]. [1] https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ > As you know now that it is affected also :) > > > > > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: >> >> Hi Ruslanas, >> >> For the issue see >> https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, >> The puppet-neutron issue in above was specific to victoria but since >> there is new release for ussuri recently, it also hit there too. >> >> >> Thanks and Regards >> Yatin Karel >> >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis wrote: >> > >> > Hi all, >> > >> > While deploying undercloud, always fails on puppet-container-neutron configuration, it fails with missing ml2 ovs_driver plugin... downloading them using: >> > openstack tripleo container image prepare default --output-env-file containers-prepare-parameters.yaml >> > >> > grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log http://paste.openstack.org/show/804180/ >> > >> > builddir/install-undercloud.log ( contains info about container-puppet-neutron ) >> > http://paste.openstack.org/show/804181/ >> > >> > undercloud.conf: >> > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >> > >> > dnf list installed >> > http://paste.openstack.org/show/804182/ >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 Thanks and Regards Yatin Karel From yasufum.o at gmail.com Tue Apr 13 06:06:26 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 13 Apr 2021 15:06:26 +0900 Subject: [tacker] irc meeting Message-ID: <188f15f1-4bc9-5c46-15f8-b73cfb33e353@gmail.com> Hi team, I'll be off today's irc meeting, so please skip, or someone host the meeting if any topic. Thanks, Yasufumi From zhangbailin at inspur.com Tue Apr 13 06:46:26 2021 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Tue, 13 Apr 2021 06:46:26 +0000 Subject: =?utf-8?B?562U5aSNOiBbbm92YV0gTm9taW5hdGUgc2Vhbi1rLW1vb25leSBmb3Igbm92?= =?utf-8?Q?a-specs-core?= In-Reply-To: References: Message-ID: <96fd5ea1d09a40288a26810a64bfdc3d@inspur.com> +1 from me, I saw it late, but I think it's worth +1. brinzhang Inspur Electronic Information Industry Co.,Ltd. -----邮件原件----- 发件人: Stephen Finucane [mailto:stephenfin at redhat.com] 发送时间: 2021年3月31日 0:46 收件人: openstack-discuss 主题: [nova] Nominate sean-k-mooney for nova-specs-core Hey, Sean has been working on nova for what seems like yonks now. Each cycle, they spend a significant amount of time reviewing proposed specs and contributing to discussions at the PTG. 
This is important work and their contributions provide everyone with a deep pool of knowledge on all things networking and hardware upon which to draw. I think the nova project would benefit from their addition to the specs core reviewer team and I therefore propose we add Sean to nova- specs-core. Assuming there are no objections, I'll work with gibi to add Sean to nova-specs- core next week. Cheers, Stephen From sbauza at redhat.com Tue Apr 13 07:09:23 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 13 Apr 2021 09:09:23 +0200 Subject: [nova] Nominate sean-k-mooney for nova-specs-core In-Reply-To: References: Message-ID: On Tue, Mar 30, 2021 at 6:51 PM Stephen Finucane wrote: > Hey, > > Sean has been working on nova for what seems like yonks now. Each cycle, > they > spend a significant amount of time reviewing proposed specs and > contributing to > discussions at the PTG. This is important work and their contributions > provide > everyone with a deep pool of knowledge on all things networking and > hardware > upon which to draw. I think the nova project would benefit from their > addition > to the specs core reviewer team and I therefore propose we add Sean to > nova- > specs-core. > > Assuming there are no objections, I'll work with gibi to add Sean to > nova-specs- > core next week. > > +1, sorry for the late approval, forgot to reply. Cheers, > Stephen > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Tue Apr 13 07:15:19 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Tue, 13 Apr 2021 09:15:19 +0200 Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer In-Reply-To: References: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> Message-ID: <83D97106-9C93-407E-8F5F-F93BABDAC0EB@poczta.onet.pl> Hi Thomas, thank you for the answer. I have this content in my gnocchi_resources.yaml - resource_type: loadbalancer metrics: network.services.lb.outgoing.bytes: network.services.lb.incoming.bytes: network.services.lb.pool: network.services.lb.listener: network.services.lb.member: network.services.lb.health_monitor: dynamic.network.services.lb.loadbalancer: network.services.lb.total.connections: network.services.lb.active.connections: But to be honest I didn’t do db_sync. I’m using kolla-ansible and I have all services in container, so I should run db_sync inside ceilometer-central container? It’s not automatically synced when a service/container is restarted? Best regards Adam > Wiadomość napisana przez Thomas Goirand w dniu 12.04.2021, o godz. 21:40: > > On 4/12/21 3:24 PM, Adam Tomas wrote: >> Hi, >> Im trying to get metrics from octavia’s load balancer, but can’t get any (gnocchi metric list | grep loadbalancer not returning anything). How should I configure ceilometer to get metrics from octavia? Ceilometer asks neutron for load balancer metrics, and neutron responses „resource cannot be found” (and that is obvious, because Neutron LBaaS service is deprecated). How to force neutron to get these resources from Octavia? >> I’ve tried to use >> [service_providers] >> service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default >> in neutron.conf, but it doesn’t work either… >> >> Best regards >> Adam >> >> > > Hi Adam, > > It's up to Ceilometer to report it. 
Do create the resource types, add > this to /etc/ceilometer/gnocchi_resources.yaml (note: if you don't have > such a file in /etc/ceilometer, copy it there from somewhere below > /usr/lib/python3/dist-packages/ceilometer): > > - resource_type: loadbalancer > metrics: > network.services.lb.outgoing.bytes: > network.services.lb.incoming.bytes: > network.services.lb.pool: > network.services.lb.listener: > network.services.lb.member: > network.services.lb.health_monitor: > network.services.lb.loadbalancer: > network.services.lb.total.connections: > network.services.lb.active.connections: > > Then do a ceilometer db_sync to populate the Gnocchi resource types. > > I hope this helps, > Cheers, > > Thomas Goirand (zigo) From dsneddon at redhat.com Tue Apr 13 09:05:05 2021 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 13 Apr 2021 02:05:05 -0700 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com> Message-ID: On Fri, Apr 9, 2021 at 12:10 AM Marios Andreou wrote: > On Fri, Apr 9, 2021 at 9:46 AM Michele Baldessari > wrote: > > > > On Fri, Apr 09, 2021 at 08:27:55AM +0200, Carlos Goncalves wrote: > > > On Fri, Apr 9, 2021 at 12:17 AM Steve Baker wrote: > > > > > > > My Tuesday Baremetal 1510-1550 slot is ok, but it would be better > for me > > > > if it was earlier in the day. I'll probably make more sense at 1am > than 3am > > > > :) > > > > > > > ouch sorry Steve and thank you for participating despite the bad > time-difference for you! Yes we can make this change see below > > > > > > Could I maybe swap with NETWORKING: 1300-1340? > > > > > > > Fine with me. > > > Michele, Dan? > > > > Totally fine by me > > Great thanks folks - this works well actually since Dan S. already > indicated (in another reply to me) that your current slot (1300-1340 > UTC) is too early (like 5 am) so moving it to the later slot should > work better for him too. > > I have just updated the schedule so on Tuesday 20 we have Baremetal > sbaker @ 1300-1340 and then the networking/bgp/frr folks at 1510-1550 > > thank you! > > regards, marios > > Thanks, I could do either, but 1510-1550 is better for me. -Dan > > > > > > > > On 8/04/21 4:24 am, Marios Andreou wrote: > > > > > > > > Hello TripleO o/ > > > > > > > > Thanks again to everybody who has volunteered to lead a session for > > > > the coming Xena TripleO project teams gathering. > > > > > > > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > > > > sessions per day with some breaks. > > > > > > > > Please review the slot assigned for your session at [1]. If that time > > > > is not ok then please let me know as soon as possible and indicate if > > > > you want it later or earlier or on any other day. If you've decided > > > > the session no longer makes sense then also please tell me and we can > > > > move things around accordingly to finish earlier. > > > > > > > > I'd like to finalise the schedule by next Monday 12 April which is a > > > > week before PTG. We can and likely will make changes after this date > > > > but last minute changes are best avoided to allow folks to schedule > > > > their PTG attendance across projects. > > > > > > > > Thanks everybody for your help! 
Looking forward to interesting > > > > presentations and discussions as always > > > > > > > > regards, marios > > > > > > > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > > > > > > > > > > -- > > Michele Baldessari > > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > > > -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Apr 13 10:02:58 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 13 Apr 2021 12:02:58 +0200 Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer In-Reply-To: <83D97106-9C93-407E-8F5F-F93BABDAC0EB@poczta.onet.pl> References: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> <83D97106-9C93-407E-8F5F-F93BABDAC0EB@poczta.onet.pl> Message-ID: <658f3ac3-65dd-5e93-5c44-604ad0e7c0ad@debian.org> On 4/13/21 9:15 AM, Adam Tomas wrote: > Hi Thomas, thank you for the answer. > I have this content in my gnocchi_resources.yaml > - resource_type: loadbalancer > metrics: > network.services.lb.outgoing.bytes: > network.services.lb.incoming.bytes: > network.services.lb.pool: > network.services.lb.listener: > network.services.lb.member: > network.services.lb.health_monitor: > dynamic.network.services.lb.loadbalancer: > network.services.lb.total.connections: > network.services.lb.active.connections: > > But to be honest I didn’t do db_sync. I’m using kolla-ansible and I have all services in container, so I should run db_sync inside ceilometer-central container? It’s not automatically synced when a service/container is restarted? > Best regards > Adam Hi Adam, I have zero knowledge with Kolla, and can't help you with it. I'm using my own installer, which is developed for Debian (and released in the soon coming Debian 11, aka Bullseye): https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer Cheers, Thomas Goirand (zigo) From bkslash at poczta.onet.pl Tue Apr 13 10:17:16 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Tue, 13 Apr 2021 12:17:16 +0200 Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer In-Reply-To: References: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> Message-ID: <7714915B-3B68-44CC-8270-412CDBAC3350@poczta.onet.pl> OK. I have some progress. I created meters.d and pollster.d (in pollsters.d I’ve created octavia.yaml with sample type gauge and unit load balancer) and now I can see some measures, but only if load balancer exists or not. Is there any way to force dynamic pollster to ask for url v2/lbaas/loadbalancers/[ID]/stats? Best regards Adam > Wiadomość napisana przez Thomas Goirand w dniu 12.04.2021, o godz. 21:40: > > On 4/12/21 3:24 PM, Adam Tomas wrote: >> Hi, >> Im trying to get metrics from octavia’s load balancer, but can’t get any (gnocchi metric list | grep loadbalancer not returning anything). How should I configure ceilometer to get metrics from octavia? Ceilometer asks neutron for load balancer metrics, and neutron responses „resource cannot be found” (and that is obvious, because Neutron LBaaS service is deprecated). How to force neutron to get these resources from Octavia? >> I’ve tried to use >> [service_providers] >> service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default >> in neutron.conf, but it doesn’t work either… >> >> Best regards >> Adam >> >> > > Hi Adam, > > It's up to Ceilometer to report it. 
Do create the resource types, add > this to /etc/ceilometer/gnocchi_resources.yaml (note: if you don't have > such a file in /etc/ceilometer, copy it there from somewhere below > /usr/lib/python3/dist-packages/ceilometer): > > - resource_type: loadbalancer > metrics: > network.services.lb.outgoing.bytes: > network.services.lb.incoming.bytes: > network.services.lb.pool: > network.services.lb.listener: > network.services.lb.member: > network.services.lb.health_monitor: > network.services.lb.loadbalancer: > network.services.lb.total.connections: > network.services.lb.active.connections: > > Then do a ceilometer db_sync to populate the Gnocchi resource types. > > I hope this helps, > Cheers, > > Thomas Goirand (zigo) From dtantsur at redhat.com Tue Apr 13 11:48:32 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 13 Apr 2021 13:48:32 +0200 Subject: [ironic] Meetings next week (19-23 April) In-Reply-To: References: Message-ID: On Mon, Apr 12, 2021 at 7:15 PM Iury Gregory wrote: > Hello ironicers, > > Since next week is the PTG, we decided during our upstream meeting today > to skip the following meetings: > - Upstream Meeting (Monday) > - Review Jams (Monday/Tuesday) > - SPUC in the APAC time (Friday) because it overlaps with the PTG. > A small correction: in the USA time. The APAC one does not seem to overlap with the PTG. > > Thank you =) > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Part of the ironic-core and puppet-manager-core team in OpenStack* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Tue Apr 13 12:42:13 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 13 Apr 2021 15:42:13 +0300 Subject: [openstack-ansible] OSA Meeting Poll In-Reply-To: <202851617797329@mail.yandex.ru> References: <170911617794404@mail.yandex.ru> <202851617797329@mail.yandex.ru> Message-ID: <1324211618317615@mail.yandex.ru> An HTML attachment was scrubbed... URL: From josephine.seifert at secustack.com Tue Apr 13 13:26:35 2021 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Tue, 13 Apr 2021 15:26:35 +0200 Subject: [OSSN-0089] Missing configuration option in Secure Live Migration guide leads to unencrypted traffic Message-ID: Missing configuration option in Secure Live Migration guide leads to unencrypted traffic -------------------------------------------------------------------------------------------------------------------- ### Summary ### The guide to enable secure live migration with QEMU-native TLS on nova compute nodes missed an important config option. Without this option a hard-coded part in nova is triggered which sets the default migration scheme to TCP instead of TLS. This leads to an unencrypted migration of the RAM without throwing any kind of error. ### Affected Services / Software ### Nova / Victoria, Ussuri, Train, Stein (might also be affected: Rocky, Queens, Pike, Ocata) ### Discussion ### In the OpenStack guide to set up secure live migration with QEMU-native TLS there are a few configuration options given, which have to be applied to nova compute nodes.
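For orientation, a minimal sketch of what the compute-node configuration looks like with the option added (illustrative only, not a verbatim copy of the guide; check the nova documentation of your release for the exact option set):

    [libvirt]
    # documented enable flag for QEMU-native TLS
    live_migration_with_native_tls = true
    # the option this notice is about; without it (or the deprecated
    # ``live_migration_uri``) nova falls back to the hard-coded tcp scheme
    live_migration_scheme = tls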
After following the instructions and setting up everything it seems to work as expected. But after checking that libvirt is able to use tls using tcpdump to listen on the port for tls while manually executing libvirt commands, the same check for live migration of an instance through openstack fails. Listening on the port for unencrypted tcp-traffic shows that OpenStack still uses the unencrypted TCP path instead of the TLS one for the migration. The reason for this is a patch from Ocata which adds the calculation of the live-migration-uri in code: https://review.opendev.org/c/openstack/nova/+/410817/ The config parameter ``live_migration_uri`` was deprecated in favor of ``live_migration_scheme`` and the default set to tcp. This leads to the problem that if none of these two config options are set, libvirt will always use the default tcp connection. To enable QEMU-native TLS to be used in nova one of them has to be set so that a TLS connection can be established. Currently the guide does not show that this is necessary and there was no other documentation indicating that these config options are important for the usage of QEMU-native TLS. As there is no documentation which recognizes this and it is hard to find this problem as the migration happens even without those config option set - not stating that it is still unencrypted, it might have been unrecognized in various deployments, which followed the guide. ### Recommended Actions ### For deployments using secure live migration with QEMU-native TLS: 1. Check the config of all nova compute nodes. The ``libvirt`` section needs to have either ``live_migration_uri`` (deprecated) or ``live_migration_scheme`` configured. 2. If neither of those config options are present, add ``live_migration_scheme = tls`` to enable the use of the tls connection. #### Patches #### The guide for secure live migration was updated to reflect the necessary configuration options and now has a note, which warns users that not setting all config options may lead into a seemingly working deployment, which still uses unencrypted traffic for the ram-migration. Master(Wallaby): https://review.opendev.org/c/openstack/nova/+/781030 Victoria: https://review.opendev.org/c/openstack/nova/+/781211 Ussuri: https://review.opendev.org/c/openstack/nova/+/782126 Train: https://review.opendev.org/c/openstack/nova/+/782430 Stein: https://review.opendev.org/c/openstack/nova/+/783199 ### Contacts / References ### Author: Josephine Seifert, secustack GmbH This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0089 Original LaunchPad Bug : https://bugs.launchpad.net/nova/+bug/1919357 Mailing List : [Security] tag on openstack-discuss at lists.openstack.org OpenStack Security Project : https://launchpad.net/~openstack-ossg From whayutin at redhat.com Tue Apr 13 14:22:57 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 13 Apr 2021 08:22:57 -0600 Subject: [tripleo][ci] long queues, ceph / scenario001 issue Message-ID: Greetings, FYI.. https://bugs.launchpad.net/tripleo/+bug/1923529 The ceph folks and CI are working together to successfully transition ceph from octopus to pacific. At the moment any scenario001 job will fail. To ease the transition, we're proposing [1] Please note: If you have a change in the gate that triggers scenario001 I will be rebasing or adding a depends-on w/ [1] to ensure the gate is not reset multiple times today. The gate queue will probably peak well over 26hr today.. so please lay off the workflows a bit. Your patience is appreciated! 
[1] https://review.opendev.org/c/openstack/tripleo-common/+/786053 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Tue Apr 13 14:29:54 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 13 Apr 2021 17:29:54 +0300 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Hi Yatin, Thank you for your work on this. Much appreciated! On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: > Hi Ruslanas, > > On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis > wrote: > > > > Hi Yatin, > > > > I have spotted that version of puppet-tripleo, but even after downgrade > I had/have same issue. should I downgrade even more? :) OR You know when > fixed version might get in for production centos ussuri release repo? > > > I have requested the tag release of puppet-neutron to clear this > https://review.opendev.org/c/openstack/releases/+/786006. Once it's > merged it can be included in centos ussuri release repo, RDO bots will > take care of it. If you want to test before it's released you can pick > puppet-neutron from RDO trunk repo[1]. > > [1] > https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ > > > As you know now that it is affected also :) > > > > > > > > > > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: > >> > >> Hi Ruslanas, > >> > >> For the issue see > >> > https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, > >> The puppet-neutron issue in above was specific to victoria but since > >> there is new release for ussuri recently, it also hit there too. > >> > >> > >> Thanks and Regards > >> Yatin Karel > >> > >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis > wrote: > >> > > >> > Hi all, > >> > > >> > While deploying undercloud, always fails on puppet-container-neutron > configuration, it fails with missing ml2 ovs_driver plugin... downloading > them using: > >> > openstack tripleo container image prepare default --output-env-file > containers-prepare-parameters.yaml > >> > > >> > grep -v Warning > /var/log/containers/stdouts/container-puppet-neutron.log > http://paste.openstack.org/show/804180/ > >> > > >> > builddir/install-undercloud.log ( contains info about > container-puppet-neutron ) > >> > http://paste.openstack.org/show/804181/ > >> > > >> > undercloud.conf: > >> > > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > >> > > >> > dnf list installed > >> > http://paste.openstack.org/show/804182/ > >> > > >> > -- > >> > Ruslanas Gžibovskis > >> > +370 6030 7030 > >> > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > Thanks and Regards > Yatin Karel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Tue Apr 13 14:30:44 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 13 Apr 2021 17:30:44 +0300 Subject: [openstack-ansible] OSA Meeting Poll In-Reply-To: <1324211618317615@mail.yandex.ru> References: <170911617794404@mail.yandex.ru> <202851617797329@mail.yandex.ru> <1324211618317615@mail.yandex.ru> Message-ID: <1304221618324125@mail.yandex.ru> Thanks everyone for participating. New selected OpenStack-Ansible meeting time is: 15:00 UTC, Tuesday. New time is applicable starting from today, Apr 13, 2021. 13.04.2021, 15:48, "Dmitriy Rabotyagov" : > Despite time for vote has passed, I will hold voting opened for several more hours. 
So it's final call to vote for the new meeting time for interested parties. > > Link to poll is: https://doodle.com/poll/m554dx4mrsideuzi > > 07.04.2021, 15:15, "Dmitriy Rabotyagov" : >> Sorry for the typo in the link, added extra slash in the end. >> >> Correct link is: https://doodle.com/poll/m554dx4mrsideuzi >> >> 07.04.2021, 14:31, "Dmitriy Rabotyagov" : >>>  Hi! >>> >>>  We haven't changed OSA meeting time for a while and stick with the current option (Tuesday, 16:00 UTC) for a while. >>> >>>  So we decided it's time to make a poll regarding preferred time for OSA meetings since list of the interested parties and circumstances might have changed since picking meeting time. >>> >>>  You can find the poll via link [1]. Poll is open till Monday, April 12 2021. Please, make sure you vote before this time. >>> >>>  [1] https://doodle.com/poll/m554dx4mrsideuzi/ >>> >>>  -- >>>  Kind Regards, >>>  Dmitriy Rabotyagov >> >> -- >> Kind Regards, >> Dmitriy Rabotyagov > > -- > Kind Regards, > Dmitriy Rabotyagov --  Kind Regards, Dmitriy Rabotyagov From kennelson11 at gmail.com Tue Apr 13 15:01:44 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 13 Apr 2021 08:01:44 -0700 Subject: [PTG] vPTG April 2021 PTGBot, Etherpads, & IRC Message-ID: Hello! We just wanted to take a second to point out a couple things as we all get ready for the PTG. Firstly, the PTGBot is up to date and ready to go-- as are the autogenerated etherpads! You can see the schedule page, etherpads, etc here[1]. If you/your team have already created an etherpad, please feel free to use the PTGBot[2] to override the default, auto-generated one. Secondly, but perhaps more importantly, with the migration to being more inclusive of projects outside of just openstack, we will be using the #openinfra-events IRC channel! The redirect is in place so you should automatically get sent to the right channel if you try to join the old one. And one more plug: Please register! Its free and important for getting the zoom information, etc. Thanks! -The Kendalls (diablo_rojo & wendallkaters) [1] PTG Website www.openstack.org/ptg [2] PTGbot Etherpad Override Command: https://github.com/openstack/ptgbot#etherpad [3] PTG Registration: https://april2021-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Tue Apr 13 15:56:37 2021 From: abishop at redhat.com (Alan Bishop) Date: Tue, 13 Apr 2021 08:56:37 -0700 Subject: [cinder] How to post multiple patches. In-Reply-To: <98285557-7db7-592a-c160-b6dad4971a1e@gmail.com> References: <42af7b9a-73ce-4780-b787-c36901f5cc1a@gmail.com> <98285557-7db7-592a-c160-b6dad4971a1e@gmail.com> Message-ID: On Mon, Apr 12, 2021 at 6:47 PM Jay Bryant wrote: > > On 4/12/2021 11:18 AM, Brian Rosmaita wrote: > > On 4/12/21 12:52 AM, 野村和正 / NOMURA,KAZUMASA wrote: > >> Hi everyone, > >> > >> Hitachi has developed the out-of-tree driver as Cinder driver. But > >> wewant to deprecate the out-of-tree driver and support only the > >> in-treedriver. > >> > >> We need to submit about ten more patches(*1) for full features which > >> theout-of-tree driver has such as Consistency Group and Volume > >> Replication. > >> > >> In that case, we have two options: > >> > >> 1. Submit two or three patches at once. In other words, submit two > >> orthree patches to Xena, then submit another two or three patches > >> afterprevious patches were merged, and so on. This may give reviewers > >> thefeeling of endless. 
> I just want to add that you are not limited to submitting a single batch of patches in a cycle. If you can get the first batch accepted in Xena, you are free to submit other batches in Xena. Just continue to bear in mind the date for freezing driver patches. The bottom line is the sooner you submit patches and work on resolving reviewer feedback, the sooner you can propose additional patches. Alan >> > >> 2. Submit all patches at once to Xena. This will give reviewers > >> theinformation how many patches remains from the beginning, but many > >> pathesmay bother them. > >> > >> Does anyone have an opinion as to which option is better? > > > > My opinion is that option #1 is better, because as the initial patches > > are reviewed, issues will come up in review that you will be able to > > apply proactively to later patches on your own without reviewers > > having to bring them up, which will result in a better experience for > > all concerned. > > > > Also, we can have an idea of how many patches to expect (without your > > filing them all at once) if you file blueprints in Launchpad for each > > feature. Please name them 'hitachi-consistency-group-support', > > 'hitachi-volume-replication', etc., so it's easy to see what driver > > they're for. The blueprint doesn't need much detail; it's primarily > > for tracking purposes. You can see some examples here: > > https://blueprints.launchpad.net/cinder/wallaby > > > > > I concur with Brian. I think doing a few at a time will be less likely > to overwhelm the review team and it will help to prevent repeated > comments in subsequent patches if you are able to proactively fix the > subsequent patches before they are submitted. > > Thanks for seeking input on this! > > Jay > > > cheers, > > brian > > > >> > >> Thanks, > >> > >> Kazumasa Nomura > >> > >> E-mail: > >> kazumasa.nomura.rx at hitachi.com > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Tue Apr 13 16:50:27 2021 From: helena at openstack.org (helena at openstack.org) Date: Tue, 13 Apr 2021 12:50:27 -0400 (EDT) Subject: Join Us Live for the Wallaby Release Community Meeting Message-ID: <1618332627.338524703@apps.rackspace.com> The Wallaby release is here (whoop! whoop!) and the Community Meeting for it will be hosted by Technical Committee members, Ghanshyam Mann and Kendall Nelson, this Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom [1] as well as live-streamed to YouTube [2]. The meeting will be kicked off with some exciting news from the OpenInfra Foundation and will be followed by updates from PTLs. The Community Meeting agenda is as follows: Wallaby Overview - Ghanshyam Mann & Kendall Nelson Cinder Update - Brian Rosmaita Neutron Update - Slawek Kaplonski Ironic Update - Julia Kreger Nova Update - Balazs Gibizer Cyborg Update - Xin-Ran Wang Masakari Update - Radosław Piliszek Manila Update - Goutham Pacha Ravi Live Q&A session - Ghanshyam Mann & Kendall Nelson PTLs that are presenting: Please make sure your slides are turned in to me EOD (Tuesday, April 13th). Let me know if you have any other questions. Cheers, Helena [1] [ https://zoom.us/j/94881181840?pwd=cmc2Wk1wYlcwNnVOTk9lYWQxVlRadz09 ]( https://zoom.us/j/94881181840?pwd=cmc2Wk1wYlcwNnVOTk9lYWQxVlRadz09 ) [2] [ https://www.youtube.com/channel/UCQ74G2gKXdpwZkXEsclzcrA ]( https://www.youtube.com/channel/UCQ74G2gKXdpwZkXEsclzcrA ) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cboylan at sapwetik.org Tue Apr 13 16:51:25 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 13 Apr 2021 09:51:25 -0700 Subject: [tact-sig][dev][infra][qa] Join OpenDev and the TaCT SIG at the PTG Message-ID: <1f425c10-d677-4935-8db7-247755b3d96d@www.fastmail.com> The PTG is next week, and OpenDev is participating alongside the OpenStack TaCT SIG. We are going to try something a bit different this time around, which is to treat the time as office hours rather than time for our own projects. We will be meeting on April 22 from 14:00 - 16:00 UTC and 22:00 - 00:00 UTC in https://meetpad.opendev.org/apr2021-ptg-opendev. Join us if you would like to: * Start contributing to either OpenDev or the TaCT sig. * Debug a particular job problem. * Learn how to write and review Zuul jobs and related configs. * Learn about specific services or how they are deployed. * And anything else related to OpenDev and our project infrastructure. Feel free to add your topics and suggest preferred times for those topics here: https://etherpad.opendev.org/p/apr2021-ptg-opendev. This etherpad corresponds to the document that will be auto loaded in our meetpad room above. I will also be around next week and will try to keep a flexible schedule. Feel free to reach out if you would like us to join discussions as they happen. See you there, Clark From ildiko.vancsa at gmail.com Tue Apr 13 17:48:54 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 13 Apr 2021 19:48:54 +0200 Subject: [edge][ptg] Edge WG sessions at the PTG Message-ID: <0A4AC929-2304-45BC-9A82-6B2BB5CD73BC@gmail.com> Hi, I’m reaching out to you to draw your attention to the Edge Computing Group’s agenda for the PTG next week. We are having a couple of cross-project and cross-community discussions next week as well as discussing topics around use cases and reference architectures which can be a good input for most OpenStack project teams as well. Our agenda is the following: * Monday (April 19) * 1400 UTC - Intro and Agenda bashing * 1415 UTC - Use cases * 1500 UTC - ETSI MEC cross-community session * Tuesday (April 20) * 1400 UTC - Storage discussion * 1430 UTC - Applications and underlying network transport * 1500 UTC - Reference architectures * Wednesday * 1300 UTC - StarlingX cross-project session * 1400 UTC - Akraino cross-community session * 1500 UTC - GSMA cross-community session For more detailed information about the above topics please see our etherpad: https://etherpad.opendev.org/p/ecg-ptg-april-2021 Please let me know if you have additional topics for the sessions next week or if you have questions to the items on the agenda. Thanks and Best Regards, Ildikó (IRC: ildikov on Freenode) From amy at demarco.com Tue Apr 13 19:15:51 2021 From: amy at demarco.com (Amy Marrich) Date: Tue, 13 Apr 2021 14:15:51 -0500 Subject: Diversity and Inclusion Social Hour Sponsored by RDO at the PTG Message-ID: On behalf of the RDO Community, please join us for an hour of Trivia on Thursday April 22 at 17:00 UTC. We will have trivia related to OpenStack and the other OIF projects as well as the cities we've held events. Time permitting we'll have some Pop Culture trivia as well. Prizes for the first 3 placings and registration is Free! https://eventyay.com/e/5f05de57 Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jimmy at openstack.org Tue Apr 13 19:26:54 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 13 Apr 2021 14:26:54 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: Message-ID: Just a quick follow up that the 2020 User Survey Analytics are up on the openstack.org site: https://www.openstack.org/analytics Cheers, Jimmy On Apr 9 2021, at 2:28 pm, Julia Kreger wrote: > Thanks Allison! > > Even telling friends doesn't really help, since we would be > self-skewing. I guess part of the conundrum is it is easy for people > not to really be fully aware of the extent of their usage and the mix > of various projects under the hood. They know they get a star ship and > it has warp engines, but they may not know the factory that turned out > the starship. Only the geekiest might know those details. Anyway, I've > been down this path before w/r/t the user survey. > > C'est la vie. Back to work! > -Julia > On Fri, Apr 9, 2021 at 12:07 PM Allison Price wrote: > > > > Hi Julia, > > > > It looks like for Ironic, 19% of production environments are running it in production, 15% running it in testing, and 22% are interested. It’s a little down from 2019, but it was also a smaller sample size (2019: 331; 2020: 209). I am hoping to get a bigger turnout this year (tell all your friends!) so that we can get a better picture. > > > > Let me know if there is any other data you would like pulled. > > > > Thanks! > > Allison > > > > > > > On Apr 8, 2021, at 9:43 AM, Julia Kreger wrote: > > > > > > Hey Allison, > > > > > > Metrics would be awesome and I'm just looking for the key high level > > > adoption information as that is good to put into the presentation. > > > > > > -Julia > > > > > > On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote: > > >> > > >> Hi Julia, > > >> > > >> I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. > > >> > > >> Thanks! > > >> Allison > > >> > > >> > > >> > > >> > > >> > > >> On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: > > >> > > >> Related, Is there 2020 user survey data available? > > >> > > >> On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org > > >> wrote: > > >> > > >> > > >> Hello ptls, > > >> > > >> > > >> > > >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > >> > > >> > > >> > > >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > >> > > >> > > >> > > >> Let me know if you have any other questions! > > >> > > >> > > >> > > >> Thank you for your participation, > > >> > > >> Helena > > >> > > >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zigo at debian.org Tue Apr 13 21:19:42 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 13 Apr 2021 23:19:42 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: Message-ID: <6e6f3449-b621-d327-d12e-d4e29e2920f4@debian.org> On 4/13/21 9:26 PM, Jimmy McArthur wrote: > Just a quick follow up that the 2020 User Survey Analytics are up on the > openstack.org  site: > > https://www.openstack.org/analytics > Cheers, > Jimmy Hi Jimmy, Could we get the possibility to choose Wallaby in the market place admin please? It's already working for me in Debian (I could spawn VMs) and I'd like to edit the part for Debian. Cheers, Thomas Goirand (zigo) From jimmy at openstack.org Tue Apr 13 21:27:25 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 13 Apr 2021 16:27:25 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <6e6f3449-b621-d327-d12e-d4e29e2920f4@debian.org> References: <6e6f3449-b621-d327-d12e-d4e29e2920f4@debian.org> Message-ID: <21AAF6D4-BEA9-44EC-BC5D-12FB2124D2D3@getmailspring.com> Hi Thomas - Working on that one :) Thanks for the heads up. I'll ping you as soon as available. Cheers, Jimmy On Apr 13 2021, at 4:19 pm, Thomas Goirand wrote: > On 4/13/21 9:26 PM, Jimmy McArthur wrote: > > Just a quick follow up that the 2020 User Survey Analytics are up on the > > openstack.org site: > > > > https://www.openstack.org/analytics > > Cheers, > > Jimmy > > Hi Jimmy, > Could we get the possibility to choose Wallaby in the market place admin > please? It's already working for me in Debian (I could spawn VMs) and > I'd like to edit the part for Debian. > > Cheers, > Thomas Goirand (zigo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Apr 13 21:59:17 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 13 Apr 2021 16:59:17 -0500 Subject: [infra][cinder][neutron][qa] neutron-grenade failures on stable/train In-Reply-To: <178c70f1d0e.f997989d279125.7112266830106959000@ghanshyammann.com> References: <178c70f1d0e.f997989d279125.7112266830106959000@ghanshyammann.com> Message-ID: <178cd3ef0ad.dba74c6e357934.970580417774215231@ghanshyammann.com> ---- On Mon, 12 Apr 2021 12:09:19 -0500 Ghanshyam Mann wrote ---- > ---- On Mon, 12 Apr 2021 12:00:51 -0500 Brian Rosmaita wrote ---- > > The neutron-grenade job on stable/train has been mostly failing since 8 > > April: > > > > https://zuul.opendev.org/t/openstack/builds?job_name=neutron-grenade&branch=stable%2Ftrain > > > > I spot-checked a few and it looks like the culprit is "Could not open > > requirements file: [Errno 2] No such file or directory: > > '/opt/stack/requirements/upper-constraints.txt'". See: > > - > > https://zuul.opendev.org/t/openstack/build/421f7d57bc234119963edf3e9101ca43/log/logs/grenade.sh.txt#28855 > > - > > https://zuul.opendev.org/t/openstack/build/915395d34fa34e99a6ec544ad1d3141b/log/logs/grenade.sh.txt#28849 > > - > > https://zuul.opendev.org/t/openstack/build/9eaacc244047408d9e2d9ed09529ff3f/log/logs/grenade.sh.txt#28855 > > > > The last 2 neutron-grenade jobs on openstack/devstack have passed, but > > there don't look to have been any changes in the devstack repo > > stable/train since 10 March, so I'm not sure if that was luck or if the > > QA team has made a change to get the job working. 
> > There were multiple changes merged in devstack and grenade for tempest venv constraints at the same time and > there were a few issues in sourcing the stackrc for checking the tempest venv constraints, let's > wait for the below fixes to be merged to get all cases green > > -https://review.opendev.org/q/If5f14654ab9aee2a140bbfb869b50d63cb289fdf All patches are merged now along with the change making stackviz non-failing [1]. All stable branches should be green now for this issue, please recheck. [1] https://review.opendev.org/q/Ifee04f28ecee52e74803f1623aba5cfe5ee5ec90 -gmann > > -gmann > > > > > Any ideas? > > > > > > thanks, > > brian > > > > > > From gmann at ghanshyammann.com Tue Apr 13 22:00:57 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 13 Apr 2021 17:00:57 -0500 Subject: [qa][heat][stable] grenade jobs with tempest plugins on stable/train broken In-Reply-To: <178a4b1e326.db78f8f289143.8139427571865552389@ghanshyammann.com> References: <178a4b1e326.db78f8f289143.8139427571865552389@ghanshyammann.com> Message-ID: <178cd4077a2.ca9b6368357948.1378438997562868912@ghanshyammann.com> Just updating the status here too. All fixes are merged on the devstack and grenade side, and those should make the stable branches green now. -gmann ---- On Mon, 05 Apr 2021 20:00:24 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > I capped stable/stein to use Tempest 26.0.0 which means grenade jobs that > run the tests from tempest plugins started using Tempest 26.0.0. But the constraints > used in the Tempest virtual env are mismatched between when the Tempest virtual env was created > and when tests are run from grenade or grenade plugin scripts. > > Due to these two different constraints used, tox recreates the tempest virtual env, which removes > all already installed tempest plugins and their deps, and it fails to run the smoke tests. > > This constraints mismatch issue occurred in stable/train and I standardized these for devstack based jobs > - https://review.opendev.org/q/topic:%2522standardize-tempest-tox-constraints%2522+status:merged > > But this issue is occurring for grenade jobs that do not run the tests via the run-tempest role (the run-tempest role > takes care of the constraints handling). Rabi observed this in the heat grenade jobs today. I have reported this as a bug > in LP[1] and am standardizing it from the master branch so that this kind of issue does not occur again when > any stable branch starts using the non-master Tempest. > > Please don't recheck if your grenade job is failing with the same issue and wait for the updates on this ML thread. > > [1] https://bugs.launchpad.net/grenade/+bug/1922597 > > -gmann > > > From jay.faulkner at verizonmedia.com Tue Apr 13 22:48:19 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Tue, 13 Apr 2021 15:48:19 -0700 Subject: [cinder] Requesting reviews for mTLS support in client Message-ID: Hi all, A few of us here at Verizon Media have been working to ensure all services can support authenticating with an mTLS certificate and key. We've had success with this, and if you've reviewed patches related to this, thank you! There's still an outstanding patch for python-cinderclient that has not gotten attention. It has +1s from contributors across the community, but hasn't gotten any core review attention. It's a trivial change, adding support for mTLS certificate passing for server version requests.
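For anyone who wants to exercise it, a rough sketch of how a certificate/key pair reaches the client through a keystoneauth session (endpoint, credentials and file paths below are illustrative, not taken from the patch):

    # hypothetical example; adjust the auth parameters to your cloud
    from keystoneauth1 import identity, session
    from cinderclient import client as cinder_client

    auth = identity.Password(
        auth_url="https://keystone.example.com/v3",
        username="demo", password="secret", project_name="demo",
        user_domain_id="default", project_domain_id="default",
    )
    # 'cert' accepts a single PEM bundle or a (certificate, key) tuple for mTLS
    sess = session.Session(auth=auth,
                           cert=("/etc/pki/tls/client.crt",
                                 "/etc/pki/tls/client.key"))
    cinder = cinder_client.Client("3", session=sess)
    print(cinder.volumes.list())

Per the description above, the patch simply makes sure the same certificate/key is also passed on the server version request.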
The bug is here: https://bugs.launchpad.net/python-cinderclient/+bug/1915996 and the code is here: https://review.opendev.org/c/openstack/python-cinderclient/+/776311. If there are any concerns about this change, please let us know on the gerrit change itself, or feel free to reach out to me on IRC (JayF in #openstack-ironic, among many others). We have been successfully running this code downstream for a while, and hope to share the added mTLS love with the rest of the community. Thanks, Jay Faulkner -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Wed Apr 14 05:04:17 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Wed, 14 Apr 2021 10:34:17 +0530 Subject: [tripleo][ci] long queues, ceph / scenario001 issue In-Reply-To: References: Message-ID: On Tue, Apr 13, 2021 at 7:57 PM Wesley Hayutin wrote: > Greetings, > > FYI.. > https://bugs.launchpad.net/tripleo/+bug/1923529 > > The ceph folks and CI are working together to successfully transition ceph > from octopus to pacific. > At the moment any scenario001 job will fail. > To ease the transition, we're proposing [1] > > Please note: > If you have a change in the gate that triggers scenario001 I will be > rebasing or adding a depends-on w/ [1] to ensure the gate is not reset > multiple times today. > The changes have lost all their votes by adding the depends-on. We could have abandoned/restored the changes like we used to do earlier(?), if they are already in the gate and there was no better way to clear the gate I guess. > The gate queue will probably peak well over 26hr today.. so please lay off > the workflows a bit. Your patience is appreciated! > > [1] https://review.opendev.org/c/openstack/tripleo-common/+/786053 > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed Apr 14 06:13:48 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 14 Apr 2021 11:43:48 +0530 Subject: [glance] Xena PTG schedule Message-ID: Hello All, Greetings!!! Xena PTG is around the corner and if you haven't already registered, please do so as soon as possible [1]. I have created a Virtual PTG planning etherpad [2] and also added day wise topics along with timings we are going to discuss. Kindly let me know if you have any concerns with allotted time slots. We also have some slots open on Tuesday, Thursday and Friday for unplanned discussions. So please feel free to add your topics if you still haven't added yet. As a reminder, these are the time slots for our discussion. Tuesday 20 April 2021 1400 UTC to 1700 UTC Wednesday 21 April 2021 1400 UTC to 1700 UTC Thursday 22 April 2021 1400 UTC to 1700 UTC Friday 23 April 2021 1400 UTC to 1700 UTC We will be using bluejeans for our discussion, kindly try to use it once before the actual discussion. The meeting URL is mentioned in etherpad [2] and will be the same throughout the PTG. [1] https://april2021-ptg.eventbrite.com/ [2] https://etherpad.opendev.org/p/xena-glance-ptg Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Wed Apr 14 08:42:35 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 14 Apr 2021 10:42:35 +0200 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: <2752308.ClrQMDxLba@p1> References: <2752308.ClrQMDxLba@p1> Message-ID: <4135616.GcyNBQpf4Z@p1> Hi Arkady, Dnia poniedziałek, 12 kwietnia 2021 08:21:09 CEST Slawek Kaplonski pisze: > Hi, > > Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > > Brian, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on > > PTG meeting to go over Interop testing and any changes for neutron tempest or > > > tempest configuration in Wallaby cycle or changes planned for Xena. Once on > > agenda one of the Interop WG person will attend and lead the discussion. > > I just added it to our etherpad https://etherpad.opendev.org/p/neutron-xena-ptg > I will be working on schedule of the sessions later this week and I will let > You know what timeslot this session with Interop WG will be. > Please let me know if You have any preferences. We have our sessions > scheduled: > > Monday 1300 - 1600 UTC > Tuesday 1300 - 1600 UTC > Thursday 1300 - 1600 UTC > Friday 1300 - 1600 UTC > > Our time slots which are already booked are: > - Monday 15:00 - 16:00 UTC > - Thursday 14:00 - 15:30 UTC > - Friday 14:00 - 15:00 UTC > > > Thanks, > > Arkady > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat I scheduled session with Interop WG for Friday 13:30 - 14:00 UTC. Please let me know if that isn't good time slot for You. Please also add topics which You want to discuss to our etherpad https:// etherpad.opendev.org/p/neutron-xena-ptg -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Wed Apr 14 08:48:35 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 14 Apr 2021 10:48:35 +0200 Subject: [neutron] Xena PTG schedule Message-ID: <6863474.G8OYYvop51@p1> Hi neutrinos, I just prepared agenda for our PTG sessions. It's available in our etherpad [1]. Please let me know if topics You are interested in are in not good time slots for You. I will try to move things around if possible. Also, if You have any other topic to discuss, please let me know too so I can include it in the agenda. [1] https://etherpad.opendev.org/p/neutron-xena-ptg -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From fungi at yuggoth.org Wed Apr 14 12:40:12 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Apr 2021 12:40:12 +0000 Subject: [all][elections][tc] TC Vacancy Special Election voting ends soon Message-ID: <20210414120553.fe7r7q5e65mckdiu@yuggoth.org> We are coming down to the last hours for voting in the TC Vacancy special election. Voting ends Apr 15, 2021 23:45 UTC. 
Search your gerrit preferred email address[0] for the following subject: Poll: April 2021 Special Technical Committee Election That is your ballot and links you to the voting application. Please vote. If you have voted, please encourage your colleagues to vote. Candidate statements are linked to the names of all confirmed candidates: https://governance.openstack.org/election/#xena-tc-candidates What to do if you don't see the email and have a commit in at least one of the official project teams' deliverable repositories[1]: * check the trash of your gerrit Preferred Email address[0], in case it went into trash or spam * wait a bit and check again, in case your email server is a bit slow * find the sha of at least one commit from an official deliverable repo[1] and email the election officials[2]. If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot. Please vote! Thank you, [0] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your Preferred Email. That is where the ballot has been sent. [1] https://opendev.org/openstack/governance/src/commit/892c4f3a851428cf41bab57c6c283e82f1df06d8/reference/projects.yaml [2] https://governance.openstack.org/election/#election-officials -- Jeremy Stanley on behalf of the OpenStack Technical Elections Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mephmanx at gmail.com Wed Apr 14 13:05:09 2021 From: mephmanx at gmail.com (Chris Lyons) Date: Wed, 14 Apr 2021 13:05:09 +0000 Subject: Openstack External Network connectivity assistance Message-ID: All, I am working on getting a private OpenStack cloud set up to support a project I am planning. I have the install complete and do not get any errors, and I can connect to hosted VMs, but the VMs cannot connect to the internet. I have checked the OVS logs and it looks like packets are being dropped. I have 2 network nodes and 1 compute node. All are using CentOS 8. The nodes have 3 NICs; NIC 1 is for the internal network and has no connectivity outside of the OpenStack cluster; NICs 2 & 3 have external & internet connectivity (behind another router/firewall). The br-int, br-ex, and br-tun exist on all nodes.
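So far, beyond the logs, the main thing I have captured is the datapath statistics below. If it would help, I can also collect flow dumps and try tracing a packet through the bridges with something along these lines (the bridge and interface names are just the ones from my environment, and I am assuming the other OVS tools are available inside the openvswitch_vswitchd container since ovs-dpctl clearly is):

  docker exec openvswitch_vswitchd ovs-vsctl show
  docker exec openvswitch_vswitchd ovs-ofctl dump-flows br-ex
  docker exec openvswitch_vswitchd ovs-ofctl dump-flows br-int
  docker exec openvswitch_vswitchd ovs-appctl ofproto/trace br-ex in_port=eth2,icmp

Just say the word if that output would be useful and I will attach it.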
Here is where I think I see packets being dropped: [root at compute01 ~]# docker exec openvswitch_vswitchd ovs-dpctl show -s system at ovs-system: lookups: hit:38597645 missed:256444 lost:0 flows: 38 masks: hit:40505463 total:5 hit/pkt:1.04 port 0: ovs-system (internal) RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 aborted:0 carrier:0 collisions:0 RX bytes:0 TX bytes:0 port 1: br-ex (internal) RX packets:0 errors:0 dropped:566073 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 aborted:0 carrier:0 collisions:0 RX bytes:0 TX bytes:0 port 2: eth2 RX packets:60413543 errors:0 dropped:384 overruns:0 frame:0 TX packets:11059 errors:0 dropped:0 aborted:0 carrier:0 collisions:0 RX bytes:43092601338 (40.1 GiB) TX bytes:1099133 (1.0 MiB) port 3: br-int (internal) RX packets:0 errors:0 dropped:539653 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 aborted:0 carrier:0 collisions:0 RX bytes:0 TX bytes:0 port 4: br-tun (internal) RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 aborted:0 carrier:0 collisions:0 RX bytes:0 TX bytes:0 port 5: qr-317618cc-cc (internal) RX packets:14050 errors:0 dropped:0 overruns:0 frame:0 TX packets:4164 errors:0 dropped:0 aborted:0 carrier:0 collisions:0 RX bytes:900953 (879.8 KiB) TX bytes:318526 (311.1 KiB) port 6: vxlan_sys_4789 (vxlan: packet_type=ptap) RX packets:0 errors:? dropped:? overruns:? frame:? TX packets:0 errors:? dropped:? aborted:? carrier:? collisions:? RX bytes:0 TX bytes:0 port 7: qvoa777fa8d-fb RX packets:3259 errors:0 dropped:0 overruns:0 frame:0 TX packets:13643 errors:0 dropped:0 aborted:0 carrier:0 collisions:0 RX bytes:193660 (189.1 KiB) TX bytes:1126219 (1.1 MiB) port 8: fg-cbe0bbae-e9 (internal) RX packets:518682 errors:0 dropped:24 overruns:0 frame:0 TX packets:4386 errors:0 dropped:0 aborted:0 carrier:0 collisions:0 RX bytes:145164850 (138.4 MiB) TX bytes:328630 (320.9 KiB) port 9: qvo83a439a5-52 RX packets:2642 errors:0 dropped:0 overruns:0 frame:0 TX packets:9553 errors:0 dropped:0 aborted:0 carrier:0 collisions:0 RX bytes:308479 (301.2 KiB) TX bytes:718738 (701.9 KiB) port 10: qvo5fe2d158-f0 …. I would appreciate any ideas or assistance. Id be willing to pay for help as well. Horizon console is at https://app-external.lyonsgroup.family user: support pwd: default -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Wed Apr 14 13:27:53 2021 From: ykarel at redhat.com (Yatin Karel) Date: Wed, 14 Apr 2021 18:57:53 +0530 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Hi Ruslanas, On Tue, Apr 13, 2021 at 8:00 PM Ruslanas Gžibovskis wrote: > > Hi Yatin, > > Thank you for your work on this. Much appreciated! > > On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: >> >> Hi Ruslanas, >> >> On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis wrote: >> > >> > Hi Yatin, >> > >> > I have spotted that version of puppet-tripleo, but even after downgrade I had/have same issue. should I downgrade even more? :) OR You know when fixed version might get in for production centos ussuri release repo? >> > >> I have requested the tag release of puppet-neutron to clear this >> https://review.opendev.org/c/openstack/releases/+/786006. Once it's >> merged it can be included in centos ussuri release repo, RDO bots will >> take care of it. If you want to test before it's released you can pick >> puppet-neutron from RDO trunk repo[1]. 
>> >> [1] https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ >> It's released, and updated rpm now available at both c8 and c8-stream CloudSIG repos:- - http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D - http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >> > As you know now that it is affected also :) >> > >> > >> > >> > >> > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: >> >> >> >> Hi Ruslanas, >> >> >> >> For the issue see >> >> https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, >> >> The puppet-neutron issue in above was specific to victoria but since >> >> there is new release for ussuri recently, it also hit there too. >> >> >> >> >> >> Thanks and Regards >> >> Yatin Karel >> >> >> >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis wrote: >> >> > >> >> > Hi all, >> >> > >> >> > While deploying undercloud, always fails on puppet-container-neutron configuration, it fails with missing ml2 ovs_driver plugin... downloading them using: >> >> > openstack tripleo container image prepare default --output-env-file containers-prepare-parameters.yaml >> >> > >> >> > grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log http://paste.openstack.org/show/804180/ >> >> > >> >> > builddir/install-undercloud.log ( contains info about container-puppet-neutron ) >> >> > http://paste.openstack.org/show/804181/ >> >> > >> >> > undercloud.conf: >> >> > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >> >> > >> >> > dnf list installed >> >> > http://paste.openstack.org/show/804182/ >> >> > >> >> > -- >> >> > Ruslanas Gžibovskis >> >> > +370 6030 7030 >> >> >> > >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> >> Thanks and Regards >> Yatin Karel >> Thanks and Regards Yatin Karel From ruslanas at lpic.lt Wed Apr 14 13:31:46 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 14 Apr 2021 16:31:46 +0300 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Thank you, will check in the eve. Will let you know. Thanks 🎉 On Wed, 14 Apr 2021, 16:28 Yatin Karel, wrote: > Hi Ruslanas, > > On Tue, Apr 13, 2021 at 8:00 PM Ruslanas Gžibovskis > wrote: > > > > Hi Yatin, > > > > Thank you for your work on this. Much appreciated! > > > > On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: > >> > >> Hi Ruslanas, > >> > >> On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis > wrote: > >> > > >> > Hi Yatin, > >> > > >> > I have spotted that version of puppet-tripleo, but even after > downgrade I had/have same issue. should I downgrade even more? :) OR You > know when fixed version might get in for production centos ussuri release > repo? > >> > > >> I have requested the tag release of puppet-neutron to clear this > >> https://review.opendev.org/c/openstack/releases/+/786006. Once it's > >> merged it can be included in centos ussuri release repo, RDO bots will > >> take care of it. If you want to test before it's released you can pick > >> puppet-neutron from RDO trunk repo[1]. 
> >> > >> [1] > https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ > >> > It's released, and updated rpm now available at both c8 and c8-stream > CloudSIG repos:- > - > http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D > - > http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D > >> > As you know now that it is affected also :) > >> > > >> > > >> > > >> > > >> > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: > >> >> > >> >> Hi Ruslanas, > >> >> > >> >> For the issue see > >> >> > https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, > >> >> The puppet-neutron issue in above was specific to victoria but since > >> >> there is new release for ussuri recently, it also hit there too. > >> >> > >> >> > >> >> Thanks and Regards > >> >> Yatin Karel > >> >> > >> >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis > wrote: > >> >> > > >> >> > Hi all, > >> >> > > >> >> > While deploying undercloud, always fails on > puppet-container-neutron configuration, it fails with missing ml2 > ovs_driver plugin... downloading them using: > >> >> > openstack tripleo container image prepare default > --output-env-file containers-prepare-parameters.yaml > >> >> > > >> >> > grep -v Warning > /var/log/containers/stdouts/container-puppet-neutron.log > http://paste.openstack.org/show/804180/ > >> >> > > >> >> > builddir/install-undercloud.log ( contains info about > container-puppet-neutron ) > >> >> > http://paste.openstack.org/show/804181/ > >> >> > > >> >> > undercloud.conf: > >> >> > > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > >> >> > > >> >> > dnf list installed > >> >> > http://paste.openstack.org/show/804182/ > >> >> > > >> >> > -- > >> >> > Ruslanas Gžibovskis > >> >> > +370 6030 7030 > >> >> > >> > > >> > > >> > -- > >> > Ruslanas Gžibovskis > >> > +370 6030 7030 > >> > >> Thanks and Regards > >> Yatin Karel > >> > Thanks and Regards > Yatin Karel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Apr 14 13:32:21 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Apr 2021 13:32:21 +0000 Subject: Openstack External Network connectivity assistance In-Reply-To: References: Message-ID: <20210414133220.5o2v3yk5yiaevagj@yuggoth.org> On 2021-04-14 13:05:09 +0000 (+0000), Chris Lyons wrote: [...] > Horizon console is at > > https://app-external.lyonsgroup.family > > user: > > support > > pwd: > > default You might want to change that password, you've E-mailed it to a public mailing list for which the archive is published on the World Wide Web. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From senrique at redhat.com Wed Apr 14 13:56:51 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 14 Apr 2021 10:56:51 -0300 Subject: [cinder] Bug deputy report for week of 2021-04-14 Message-ID: Hello, This is a bug report from 2021-04-07 to 2021-04-14. You're welcome to join the next Cinder Bug Meeting later today. 
Weekly on Wednesday at 1500 UTC in #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Critical:- High: - Medium: - https://bugs.launchpad.net/cinder/+bug/1922920 "Incorrect volume usage notifications on migration". Assigned to Gorka Eguileo. - https://bugs.launchpad.net/cinder/+bug/1923830 "Backup of in-use volume using temp snapshot messes up quota usage". Assigned to Gorka Eguileo. - https://bugs.launchpad.net/cinder/+bug/1923829 " Backup of in-use volume using temp snapshot messes up quota usage". Assigned to Gorka Eguileo. - https://bugs.launchpad.net/cinder/+bug/1923828 " Snapshot quota usage sync counts temporary snapshots". Assigned to Gorka Eguileo. Low:- Undecided: - https://bugs.launchpad.net/cinder/+bug/1922939 "Volume backup deletion leaves orphaned files on object storage". Unassigned Regards, Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Wed Apr 14 14:45:44 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 14 Apr 2021 09:45:44 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <21AAF6D4-BEA9-44EC-BC5D-12FB2124D2D3@getmailspring.com> References: <21AAF6D4-BEA9-44EC-BC5D-12FB2124D2D3@getmailspring.com> Message-ID: <624F5D81-3792-400B-89EF-A8B16AF2A425@getmailspring.com> Wallaby is now an option in the Marketplace Admin. Cheers, Jimmy On Apr 13 2021, at 4:27 pm, Jimmy McArthur wrote: > Hi Thomas - > > Working on that one :) Thanks for the heads up. I'll ping you as soon as available. > Cheers, > Jimmy > > On Apr 13 2021, at 4:19 pm, Thomas Goirand wrote: > > On 4/13/21 9:26 PM, Jimmy McArthur wrote: > > > Just a quick follow up that the 2020 User Survey Analytics are up on the > > > openstack.org site: > > > > > > https://www.openstack.org/analytics > > > Cheers, > > > Jimmy > > > > Hi Jimmy, > > Could we get the possibility to choose Wallaby in the market place admin > > please? It's already working for me in Debian (I could spawn VMs) and > > I'd like to edit the part for Debian. > > > > Cheers, > > Thomas Goirand (zigo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Apr 14 15:30:36 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 14 Apr 2021 17:30:36 +0200 Subject: OpenStack Wallaby is officially released! Message-ID: The official OpenStack Wallaby release announcement has been sent out: http://lists.openstack.org/pipermail/openstack-announce/2021-April/002047.html Thanks to all who were a part of the Wallaby development cycle! This marks the official opening of the releases repo for Xena, and freezes are now lifted. Wallaby is now a fully normal stable branch, and the normal stable policy now applies. Thanks! 
Hervé Beraud and the Release Management team -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Apr 14 15:59:02 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 14 Apr 2021 11:59:02 -0400 Subject: [cinder] festival of XS reviews 16 April 2021 Message-ID: <0c73cc15-92ed-2f95-efdd-dead8f60125e@gmail.com> Hello Cinder community members, This is a reminder that the Third Cinder Festival of XS Reviews will be held at the end of this week on Friday 16 April. what: The Cinder Festival of XS Reviews when: Friday 16 April 2021 from 1400-1600 UTC where: https://meetpad.opendev.org/cinder-festival-of-reviews (Note that we've moved to meetpad!) Now that we've made this a recurring meeting, here's an ICS file for your calendar: http://eavesdrop.openstack.org/calendars/cinder-festival-of-reviews.ics See you there! brian From gthiemon at redhat.com Wed Apr 14 16:20:48 2021 From: gthiemon at redhat.com (Gregory Thiemonge) Date: Wed, 14 Apr 2021 18:20:48 +0200 Subject: [octavia] Next week meeting Message-ID: Hi, Next week is the PTG, so we decided during our weekly upstream meeting to cancel the next Octavia meeting (April 21st). Thank you, Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Apr 14 16:25:44 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 14 Apr 2021 09:25:44 -0700 Subject: Retiring the Infra mqtt service running on firehose Message-ID: <08ecc3f0-4777-4f66-8bec-af472a936342@www.fastmail.com> Hello everyone, This is a short note to announce that we will be retiring the mqtt service that was running on our firehose.openstack.org server. The server itself will also be removed. This should happen in the next day or two. This service never saw production use. It was a great little experiment, and I think several of us learned a lot in the process. Unfortunately, the service needs more care than we can provide it (config management updates and upgrades primarily). Considering the maintenance needs and the lack of use(rs) our best option appears to be simply turning it off. As a note, it appears the service may have died at some point anyway and hasn't been functioning. The lack of complaints are another indications that we are fine to turn it off. If you need access to the data the firehose was providing, you should be able to procure it via other methods (like the Gerrit event stream). 
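For example, a minimal sketch of consuming that stream directly, assuming you already have a Gerrit account on review.opendev.org with an SSH key uploaded (substitute your own username):

  ssh -p 29418 <username>@review.opendev.org gerrit stream-events

That prints one JSON object per line for events such as patchset-created and comment-added.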
Clark From ruslanas at lpic.lt Wed Apr 14 16:52:08 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 14 Apr 2021 18:52:08 +0200 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Yatin, I still see the same version. puppet-tripleo noarch 12.5.0-1.el8 centos-openstack-ussuri 278 k Will try to monitor changes. On Wed, 14 Apr 2021 at 15:31, Ruslanas Gžibovskis wrote: > Thank you, will check in the eve. Will let you know. Thanks 🎉 > > On Wed, 14 Apr 2021, 16:28 Yatin Karel, wrote: > >> Hi Ruslanas, >> >> On Tue, Apr 13, 2021 at 8:00 PM Ruslanas Gžibovskis >> wrote: >> > >> > Hi Yatin, >> > >> > Thank you for your work on this. Much appreciated! >> > >> > On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: >> >> >> >> Hi Ruslanas, >> >> >> >> On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis >> wrote: >> >> > >> >> > Hi Yatin, >> >> > >> >> > I have spotted that version of puppet-tripleo, but even after >> downgrade I had/have same issue. should I downgrade even more? :) OR You >> know when fixed version might get in for production centos ussuri release >> repo? >> >> > >> >> I have requested the tag release of puppet-neutron to clear this >> >> https://review.opendev.org/c/openstack/releases/+/786006. Once it's >> >> merged it can be included in centos ussuri release repo, RDO bots will >> >> take care of it. If you want to test before it's released you can pick >> >> puppet-neutron from RDO trunk repo[1]. >> >> >> >> [1] >> https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ >> >> >> It's released, and updated rpm now available at both c8 and c8-stream >> CloudSIG repos:- >> - >> http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >> - >> http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >> >> > As you know now that it is affected also :) >> >> > >> >> > >> >> > >> >> > >> >> > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: >> >> >> >> >> >> Hi Ruslanas, >> >> >> >> >> >> For the issue see >> >> >> >> https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html >> , >> >> >> The puppet-neutron issue in above was specific to victoria but since >> >> >> there is new release for ussuri recently, it also hit there too. >> >> >> >> >> >> >> >> >> Thanks and Regards >> >> >> Yatin Karel >> >> >> >> >> >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis < >> ruslanas at lpic.lt> wrote: >> >> >> > >> >> >> > Hi all, >> >> >> > >> >> >> > While deploying undercloud, always fails on >> puppet-container-neutron configuration, it fails with missing ml2 >> ovs_driver plugin... 
downloading them using: >> >> >> > openstack tripleo container image prepare default >> --output-env-file containers-prepare-parameters.yaml >> >> >> > >> >> >> > grep -v Warning >> /var/log/containers/stdouts/container-puppet-neutron.log >> http://paste.openstack.org/show/804180/ >> >> >> > >> >> >> > builddir/install-undercloud.log ( contains info about >> container-puppet-neutron ) >> >> >> > http://paste.openstack.org/show/804181/ >> >> >> > >> >> >> > undercloud.conf: >> >> >> > >> https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >> >> >> > >> >> >> > dnf list installed >> >> >> > http://paste.openstack.org/show/804182/ >> >> >> > >> >> >> > -- >> >> >> > Ruslanas Gžibovskis >> >> >> > +370 6030 7030 >> >> >> >> >> > >> >> > >> >> > -- >> >> > Ruslanas Gžibovskis >> >> > +370 6030 7030 >> >> >> >> Thanks and Regards >> >> Yatin Karel >> >> >> Thanks and Regards >> Yatin Karel >> >> -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Apr 14 17:01:08 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Apr 2021 17:01:08 +0000 Subject: Retiring the Infra mqtt service running on firehose In-Reply-To: <08ecc3f0-4777-4f66-8bec-af472a936342@www.fastmail.com> References: <08ecc3f0-4777-4f66-8bec-af472a936342@www.fastmail.com> Message-ID: <20210414170108.mi6ibxr6kaw4x6n5@yuggoth.org> On 2021-04-14 09:25:44 -0700 (-0700), Clark Boylan wrote: [...] > If you need access to the data the firehose was providing, you > should be able to procure it via other methods (like the Gerrit > event stream). Also, while we are likely to retire the various MQTT bridge projects we developed around it in the near future, you can still fork or ask to have control of them transferred to you if you find them useful for running a similar service yourself. None of the things we were reporting in the firehose required privileged access (well, except for the configuration management update stream, which we stopped publishing there a while back), so there's nothing stopping someone from setting up their own firehose as a fully functional replacement. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ruslanas at lpic.lt Wed Apr 14 17:24:56 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 14 Apr 2021 19:24:56 +0200 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: it passed the step it was failing. Thank you Yatin On Wed, 14 Apr 2021 at 18:52, Ruslanas Gžibovskis wrote: > Yatin, I still see the same version. > > puppet-tripleo noarch 12.5.0-1.el8 > centos-openstack-ussuri 278 k > > Will try to monitor changes. > > On Wed, 14 Apr 2021 at 15:31, Ruslanas Gžibovskis > wrote: > >> Thank you, will check in the eve. Will let you know. Thanks 🎉 >> >> On Wed, 14 Apr 2021, 16:28 Yatin Karel, wrote: >> >>> Hi Ruslanas, >>> >>> On Tue, Apr 13, 2021 at 8:00 PM Ruslanas Gžibovskis >>> wrote: >>> > >>> > Hi Yatin, >>> > >>> > Thank you for your work on this. Much appreciated! 
>>> > >>> > On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: >>> >> >>> >> Hi Ruslanas, >>> >> >>> >> On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis >>> wrote: >>> >> > >>> >> > Hi Yatin, >>> >> > >>> >> > I have spotted that version of puppet-tripleo, but even after >>> downgrade I had/have same issue. should I downgrade even more? :) OR You >>> know when fixed version might get in for production centos ussuri release >>> repo? >>> >> > >>> >> I have requested the tag release of puppet-neutron to clear this >>> >> https://review.opendev.org/c/openstack/releases/+/786006. Once it's >>> >> merged it can be included in centos ussuri release repo, RDO bots will >>> >> take care of it. If you want to test before it's released you can pick >>> >> puppet-neutron from RDO trunk repo[1]. >>> >> >>> >> [1] >>> https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ >>> >> >>> It's released, and updated rpm now available at both c8 and c8-stream >>> CloudSIG repos:- >>> - >>> http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >>> - >>> http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >>> >> > As you know now that it is affected also :) >>> >> > >>> >> > >>> >> > >>> >> > >>> >> > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: >>> >> >> >>> >> >> Hi Ruslanas, >>> >> >> >>> >> >> For the issue see >>> >> >> >>> https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html >>> , >>> >> >> The puppet-neutron issue in above was specific to victoria but >>> since >>> >> >> there is new release for ussuri recently, it also hit there too. >>> >> >> >>> >> >> >>> >> >> Thanks and Regards >>> >> >> Yatin Karel >>> >> >> >>> >> >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis < >>> ruslanas at lpic.lt> wrote: >>> >> >> > >>> >> >> > Hi all, >>> >> >> > >>> >> >> > While deploying undercloud, always fails on >>> puppet-container-neutron configuration, it fails with missing ml2 >>> ovs_driver plugin... downloading them using: >>> >> >> > openstack tripleo container image prepare default >>> --output-env-file containers-prepare-parameters.yaml >>> >> >> > >>> >> >> > grep -v Warning >>> /var/log/containers/stdouts/container-puppet-neutron.log >>> http://paste.openstack.org/show/804180/ >>> >> >> > >>> >> >> > builddir/install-undercloud.log ( contains info about >>> container-puppet-neutron ) >>> >> >> > http://paste.openstack.org/show/804181/ >>> >> >> > >>> >> >> > undercloud.conf: >>> >> >> > >>> https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >>> >> >> > >>> >> >> > dnf list installed >>> >> >> > http://paste.openstack.org/show/804182/ >>> >> >> > >>> >> >> > -- >>> >> >> > Ruslanas Gžibovskis >>> >> >> > +370 6030 7030 >>> >> >> >>> >> > >>> >> > >>> >> > -- >>> >> > Ruslanas Gžibovskis >>> >> > +370 6030 7030 >>> >> >>> >> Thanks and Regards >>> >> Yatin Karel >>> >> >>> Thanks and Regards >>> Yatin Karel >>> >>> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Apr 14 19:29:46 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 14 Apr 2021 15:29:46 -0400 Subject: [cinder] Xena PTG schedule Message-ID: As mentioned at today's cinder meeting, the Xena PTG schedule for cinder next week is available: https://etherpad.opendev.org/p/apr2021-ptg-cinder The sessions will be recorded. 
Connection info is on the etherpad. As usual, there are a few items scheduled for specific times; otherwise, we'll just go through topics in the order listed, giving each one as much time as it needs. If we are running long or short on any given day during the PTG, I'll move one of my topics. So you should be able to figure out the day/time of your topic within a half hour or so. Please look the schedule over and let me know about any conflicts as soon as possible. Also, feel free to start adding any notes or references about your topic to the etherpad. Depending on how things go, there may be room for another topic on Friday. Let me know if there's something you'd like to see discussed. Finally, don't forget to register for the PTG: https://april2021-ptg.eventbrite.com/ See you at the PTG! From gmann at ghanshyammann.com Thu Apr 15 00:40:17 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 14 Apr 2021 19:40:17 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 15th at 1500 UTC In-Reply-To: <178c8031241.1163a8d15287294.6003502774058757783@ghanshyammann.com> References: <178c8031241.1163a8d15287294.6003502774058757783@ghanshyammann.com> Message-ID: <178d2f8b278.c8d3464f37790.5375231383528536308@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on April 15th at 1500 UTC in #openstack-tc IRC channel. == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * PTG ** https://etherpad.opendev.org/p/tc-xena-ptg * Gate performance and heavy job configs (dansmith) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Election for one Vacant TC seat (gmann) ** http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021334.html * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Mon, 12 Apr 2021 16:35:47 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for April 15th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, April 14th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From gouthampravi at gmail.com Thu Apr 15 07:57:52 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 15 Apr 2021 00:57:52 -0700 Subject: [manila][ptg] Xena PTG Planning In-Reply-To: References: Message-ID: Hello Zorillas, Thank you for proposing topics for the Xena PTG Discussions. I've assigned time slots to the discussion items proposed. Please see them in the etherpad we'll use on the day [0]. If you'd like to move something around, please let me know. If you have some last minute topics, please let me know, and add them to the planning etherpad [3]. We won't have our regularly scheduled IRC meeting during the PTG week. Hope to see you all virtually! Thanks, Goutham [0] https://etherpad.opendev.org/p/xena-ptg-manila [3] https://etherpad.opendev.org/p/xena-ptg-manila-planning On Wed, Mar 24, 2021 at 11:53 PM Goutham Pacha Ravi wrote: > > Hello Zorillas and Interested Stackers, > > As you're aware, the virtual PTG for the Xena release cycle is between > April 19-23, 2021. If you haven't registered yet, you must do so as > soon as possible! [1]. We've signed up for some slots on the PTG > timeslots ethercalc [2]. > > The PTG Planning etherpad [3] is now live. Please go ahead and add > your name/irc nick and propose any topics. 
You may propose topics even > if you wouldn't like to moderate the discussion. > > Thanks, and hope to see you all there! > Goutham > > [1] https://april2021-ptg.eventbrite.com/ > [2] https://ethercalc.net/oz7q0gds9zfi > [3] https://etherpad.opendev.org/p/xena-ptg-manila-planning From syedammad83 at gmail.com Thu Apr 15 08:51:37 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Thu, 15 Apr 2021 13:51:37 +0500 Subject: Openstack Databases Support Message-ID: Hi, I was working to have high availability of openstack components databases. I have used Percona XtraDB cluster 8.0 for one of my other project and it works pretty good. Is Percona XtraDB cluster 8.0 supported for openstack components databases ? - Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Apr 15 13:02:42 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 15 Apr 2021 15:02:42 +0200 Subject: [neutron] Drivers meeting agenda - 16.04.2021 Message-ID: <6804092.R4j1StIJWZ@p1> Hi, Agenda for our tomorrow's drivers meeting is at [1]. We have 1 new RFE to discuss: - https://bugs.launchpad.net/neutron/+bug/1922716 - [RFE] BFD for BGP Dynamic Routing [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From Arkady.Kanevsky at dell.com Thu Apr 15 14:03:23 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:03:23 +0000 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: <4135616.GcyNBQpf4Z@p1> References: <2752308.ClrQMDxLba@p1> <4135616.GcyNBQpf4Z@p1> Message-ID: Thanks Slawek. I will check with the team and will get back to you. For now assume that Friday will work. Thanks, Arkady -----Original Message----- From: Slawek Kaplonski Sent: Wednesday, April 14, 2021 3:43 AM To: openstack-discuss at lists.openstack.org Cc: OpenStack Discuss; Kanevsky, Arkady Subject: Re: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop Hi Arkady, Dnia poniedziałek, 12 kwietnia 2021 08:21:09 CEST Slawek Kaplonski pisze: > Hi, > > Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > > Brian, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > > min on > > PTG meeting to go over Interop testing and any changes for neutron > tempest or > > > tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on > > agenda one of the Interop WG person will attend and lead the discussion. > > I just added it to our etherpad > https://etherpad.opendev.org/p/neutron-xena-ptg > I will be working on schedule of the sessions later this week and I > will let You know what timeslot this session with Interop WG will be. > Please let me know if You have any preferences. We have our sessions > scheduled: > > Monday 1300 - 1600 UTC > Tuesday 1300 - 1600 UTC > Thursday 1300 - 1600 UTC > Friday 1300 - 1600 UTC > > Our time slots which are already booked are: > - Monday 15:00 - 16:00 UTC > - Thursday 14:00 - 15:30 UTC > - Friday 14:00 - 15:00 UTC > > > Thanks, > > Arkady > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. 
One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat I scheduled session with Interop WG for Friday 13:30 - 14:00 UTC. Please let me know if that isn't good time slot for You. Please also add topics which You want to discuss to our etherpad https:// etherpad.opendev.org/p/neutron-xena-ptg -- Slawek Kaplonski Principal Software Engineer Red Hat From Arkady.Kanevsky at dell.com Thu Apr 15 14:18:09 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:18:09 +0000 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Thanks Michael. Interop team will have a rep there. If you can schedule us at 14:00 UTC or 14:30, or 14:45 that will be the best. I think 15 min will be enough. Thanks, Arkady -----Original Message----- From: Michael Johnson Sent: Monday, April 12, 2021 10:57 AM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] Hi Arkady, I have added Interop to the Designate topics list (https://urldefense.com/v3/__https://etherpad.opendev.org/p/xena-ptg-designate__;!!LpKI!yXIFUxciVfW5bKHaFIxjMmhoQrGASnWQVIz9UZY3oXExCpXgnM52TrpaajTFMP1HP3fc$ [etherpad[.]opendev[.]org]) and will schedule a slot this week when I put a rough agenda together. Thanks, Michael On Sun, Apr 11, 2021 at 1:36 PM Kanevsky, Arkady wrote: > > Adding comminuty > > > > From: Kanevsky, Arkady > Sent: Sunday, April 11, 2021 3:25 PM > To: 'johnsomor at gmail.com' > Subject: [Designate][Interop] request for 15-30 min on Xena PTG for > Interop > > > > John, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Dsignate tempest or tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on agenda one of the Interop WG person will attend and lead the discussion. > > > > Thanks, > > > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > From Arkady.Kanevsky at dell.com Thu Apr 15 14:18:59 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:18:59 +0000 Subject: [Cinder][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Thanks Brian. I will be there. -----Original Message----- From: Brian Rosmaita Sent: Monday, April 12, 2021 10:54 AM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: [Cinder][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] On 4/11/21 4:23 PM, Kanevsky, Arkady wrote: > Brian, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > min on PTG meeting to go over Interop testing and any changes for > cinder tempest or tempest configuration in Wallaby cycle or changes > planned for Xena. Hi Arkady, I've virtually penciled you in for 1430-1500 on Tuesday 20 April. > Once on agenda one of the Interop WG person will attend and lead the > discussion. Sounds good. I've scheduled 30 min instead of 15 because it would be helpful for the cinder team to hear a quick synopsis of the current goals of the Interop WG and what the aim of the project is before we discuss the specifics of W and X. cheers, brian > > Thanks, > > Arkady Kanevsky, Ph.D. 
> SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 > From Arkady.Kanevsky at dell.com Thu Apr 15 14:22:58 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:22:58 +0000 Subject: [Nova][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Gibi, That will work. We will have a person there. Can you provide a pointer to nova PTG agenda etherpad? Thanks, Arkady -----Original Message----- From: Balazs Gibizer Sent: Monday, April 12, 2021 2:12 AM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: [Nova][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] Hi Arkady, What about Wednesday 14:00 - 15:00 UTC? We don't have to fill a whole hour of course. Cheers, gibi On Sun, Apr 11, 2021 at 20:34, "Kanevsky, Arkady" wrote: > Balazs, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > min on PTG meeting to go over Interop testing and any changes for Nova > tempest or tempest configuration in Wallaby cycle or changes planned > for Xena. > > Once on agenda one of the Interop WG person will attend and lead the > discussion. > > > > Thanks, > > Arkady > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > From akekane at redhat.com Thu Apr 15 14:29:58 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 15 Apr 2021 19:59:58 +0530 Subject: [Glance][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Hi Arkady, We have some slots open on Tuesday, Thursday and Friday, you can go through the schedule [1] and decide on which day you want to sync with us. Kindly update the etherpad as well. [1] https://etherpad.opendev.org/p/xena-glance-ptg Thanks & Best Regards, Abhishek Kekane On Mon, Apr 12, 2021 at 2:08 AM Kanevsky, Arkady wrote: > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on > PTG meeting to go over Interop testing and any changes for glance tempest > or tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on agenda one of the Interop WG person will attend and lead the > discussion. > > > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Thu Apr 15 14:32:39 2021 From: dsneddon at redhat.com (Dan Sneddon) Date: Thu, 15 Apr 2021 07:32:39 -0700 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> On 4/7/21 9:24 AM, Marios Andreou wrote: > Hello TripleO o/ > > Thanks again to everybody who has volunteered to lead a session for > the coming Xena TripleO project teams gathering. > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > sessions per day with some breaks. > > Please review the slot assigned for your session at [1]. If that time > is not ok then please let me know as soon as possible and indicate if > you want it later or earlier or on any other day. If you've decided > the session no longer makes sense then also please tell me and we can > move things around accordingly to finish earlier. 
> > I'd like to finalise the schedule by next Monday 12 April which is a > week before PTG. We can and likely will make changes after this date > but last minute changes are best avoided to allow folks to schedule > their PTG attendance across projects. > > Thanks everybody for your help! Looking forward to interesting > presentations and discussions as always > > regards, marios > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > Marios, I have found a conflict between my Tuesday 1510-1550 "BGP Routing with FRR" and another discussion happening in the Neutron room about BGP. Would it be possible to move the "BGP Routing with FRR" talk on Tuesday to Wednesday? Perhaps a direct swap with the "One yaml to rule all tempest tests" discussion that is scheduled for Wednesday 1510-1550? Another time on Wednesday could also work. Thanks, -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter From Arkady.Kanevsky at dell.com Thu Apr 15 14:43:16 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:43:16 +0000 Subject: [Glance][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Thanks Abhishek. Done. Added for Tuesday From: Abhishek Kekane Sent: Thursday, April 15, 2021 9:30 AM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: [Glance][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] Hi Arkady, We have some slots open on Tuesday, Thursday and Friday, you can go through the schedule [1] and decide on which day you want to sync with us. Kindly update the etherpad as well. [1] https://etherpad.opendev.org/p/xena-glance-ptg [etherpad.opendev.org] Thanks & Best Regards, Abhishek Kekane On Mon, Apr 12, 2021 at 2:08 AM Kanevsky, Arkady > wrote: As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for glance tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Apr 15 15:04:14 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 15 Apr 2021 18:04:14 +0300 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> References: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> Message-ID: On Thu, Apr 15, 2021 at 5:32 PM Dan Sneddon wrote: > > > > On 4/7/21 9:24 AM, Marios Andreou wrote: > > Hello TripleO o/ > > > > Thanks again to everybody who has volunteered to lead a session for > > the coming Xena TripleO project teams gathering. > > > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > > sessions per day with some breaks. > > > > Please review the slot assigned for your session at [1]. If that time > > is not ok then please let me know as soon as possible and indicate if > > you want it later or earlier or on any other day. If you've decided > > the session no longer makes sense then also please tell me and we can > > move things around accordingly to finish earlier. > > > > I'd like to finalise the schedule by next Monday 12 April which is a > > week before PTG. 
We can and likely will make changes after this date > > but last minute changes are best avoided to allow folks to schedule > > their PTG attendance across projects. > > > > Thanks everybody for your help! Looking forward to interesting > > presentations and discussions as always > > > > regards, marios > > > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > > > > Marios, > > I have found a conflict between my Tuesday 1510-1550 "BGP Routing with > FRR" and another discussion happening in the Neutron room about BGP. > > Would it be possible to move the "BGP Routing with FRR" talk on Tuesday > to Wednesday? Perhaps a direct swap with the "One yaml to rule all > tempest tests" discussion that is scheduled for Wednesday 1510-1550? > Another time on Wednesday could also work. > ACK I just pinged arx (adding him into cc here too) ... once I hear back from him and if he doesn't have another conflict we can make the change. Arx are you OK with the proposed swap? Your session would move to Tuesday same time. Otherwise we can explore something else, regards, marios > Thanks, > -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter > From ltomasbo at redhat.com Thu Apr 15 15:12:56 2021 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Thu, 15 Apr 2021 17:12:56 +0200 Subject: [neutron] Xena PTG schedule In-Reply-To: <6863474.G8OYYvop51@p1> References: <6863474.G8OYYvop51@p1> Message-ID: Hi folks, In relation to the "updating OVN to Support BGP Routing" session at the next Neutron-Xena-PTG, I would like to bring up the attention to the next effort for context and discussions during the session. We are working on a solution based on FRR where a (python) agent reads from the OVN SB DB (port binding events) and triggers FRR so that the needed routes get advertised. It leverages host kernel networking to redirect the traffic to the OVN overlay, and therefore does not require any modifications to ovn itself (at least for now) though it won´t work for SR-IOV/DPDK use cases. The PoC code can be found here: https://github.com/luis5tb/bgp-agent There are a series of blog posts related to how to use it on OpenStack and how it works: - OVN-BGP agent introduction: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ - How to set ip up on DevStack Environment: https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/ - In-depth traffic flow inspection: https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/ See you next week! Best regards, Luis On Wed, Apr 14, 2021 at 10:53 AM Slawek Kaplonski wrote: > Hi neutrinos, > > I just prepared agenda for our PTG sessions. It's available in our > etherpad > [1]. > Please let me know if topics You are interested in are in not good time > slots > for You. I will try to move things around if possible. > Also, if You have any other topic to discuss, please let me know too so I > can > include it in the agenda. > > [1] https://etherpad.opendev.org/p/neutron-xena-ptg > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- LUIS TOMÁS BOLÍVAR Principal Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at est.tech Thu Apr 15 15:30:08 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Thu, 15 Apr 2021 17:30:08 +0200 Subject: [Nova][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: <823MRQ.2KUNTQCR99JH1@est.tech> Hi, Sure, the etherpad is here https://etherpad.opendev.org/p/nova-xena-ptg I noted the interop discussion slot at L41 Cheers, gibi On Thu, Apr 15, 2021 at 14:22, "Kanevsky, Arkady" wrote: > Gibi, > That will work. We will have a person there. > Can you provide a pointer to nova PTG agenda etherpad? > Thanks, > Arkady > > -----Original Message----- > From: Balazs Gibizer > Sent: Monday, April 12, 2021 2:12 AM > To: Kanevsky, Arkady > Cc: OpenStack Discuss > Subject: Re: [Nova][Interop] request for 15-30 min on Xena PTG for > Interop > > > [EXTERNAL EMAIL] > > Hi Arkady, > > What about Wednesday 14:00 - 15:00 UTC? We don't have to fill a whole > hour of course. > > Cheers, > gibi > > On Sun, Apr 11, 2021 at 20:34, "Kanevsky, Arkady" > wrote: >> Balazs, >> >> As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 >> min on PTG meeting to go over Interop testing and any changes for >> Nova >> tempest or tempest configuration in Wallaby cycle or changes planned >> for Xena. >> >> Once on agenda one of the Interop WG person will attend and lead the >> discussion. >> >> >> >> Thanks, >> >> Arkady >> >> >> >> Arkady Kanevsky, Ph.D. >> >> SP Chief Technologist & DE >> >> Dell Technologies office of CTO >> >> Dell Inc. One Dell Way, MS PS2-91 >> >> Round Rock, TX 78682, USA >> >> Phone: 512 7204955 >> >> >> > > From DHilsbos at performair.com Thu Apr 15 15:50:07 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Thu, 15 Apr 2021 15:50:07 +0000 Subject: [ops][victoria][cinder] Import volume? Message-ID: <0670B960225633449A24709C291A52524FBE2F13@COM01.performair.local> All; I'm looking to transfer several VMs from XenServer to an OpenStack Victoria cloud. Finding explanations for importing Glance images is easy, but I haven't been able to find a tutorial on importing Cinder volumes. Since they are currently independent servers / volumes it seems somewhat wasteful and messy to import each VMs disk as an image just to spawn a volume from it. We're using Ceph as the storage provider for Glance and Cinder. Thank you, Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From eblock at nde.ag Thu Apr 15 16:30:45 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 15 Apr 2021 16:30:45 +0000 Subject: [ops][victoria][cinder] Import volume? In-Reply-To: <0670B960225633449A24709C291A52524FBE2F13@COM01.performair.local> Message-ID: <20210415163045.Horde.IKa9Iq6-satTI_sMmUk9Ahq@webmail.nde.ag> Hi, there’s a ‚cinder manage‘ command to import an rbd image into openstack. But be aware that if you delete it in openstack it will be removed from ceph, too (like a regular cinder volume). I don’t have the exact command syntax at hand right now, but try ‚cinder help manage‘ Regards Eugen Zitat von DHilsbos at performair.com: > All; > > I'm looking to transfer several VMs from XenServer to an OpenStack > Victoria cloud. Finding explanations for importing Glance images is > easy, but I haven't been able to find a tutorial on importing Cinder > volumes. > > Since they are currently independent servers / volumes it seems > somewhat wasteful and messy to import each VMs disk as an image just > to spawn a volume from it. 
> > We're using Ceph as the storage provider for Glance and Cinder. > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com From arxcruz at redhat.com Thu Apr 15 16:35:52 2021 From: arxcruz at redhat.com (Arx Cruz) Date: Thu, 15 Apr 2021 18:35:52 +0200 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> Message-ID: Hello, Sure, it's fine with me. Sorry the delay, I'm switching ISP, my internet is terrible today. On Thu, Apr 15, 2021 at 5:04 PM Marios Andreou wrote: > On Thu, Apr 15, 2021 at 5:32 PM Dan Sneddon wrote: > > > > > > > > On 4/7/21 9:24 AM, Marios Andreou wrote: > > > Hello TripleO o/ > > > > > > Thanks again to everybody who has volunteered to lead a session for > > > the coming Xena TripleO project teams gathering. > > > > > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > > > sessions per day with some breaks. > > > > > > Please review the slot assigned for your session at [1]. If that time > > > is not ok then please let me know as soon as possible and indicate if > > > you want it later or earlier or on any other day. If you've decided > > > the session no longer makes sense then also please tell me and we can > > > move things around accordingly to finish earlier. > > > > > > I'd like to finalise the schedule by next Monday 12 April which is a > > > week before PTG. We can and likely will make changes after this date > > > but last minute changes are best avoided to allow folks to schedule > > > their PTG attendance across projects. > > > > > > Thanks everybody for your help! Looking forward to interesting > > > presentations and discussions as always > > > > > > regards, marios > > > > > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > > > > > > > > Marios, > > > > I have found a conflict between my Tuesday 1510-1550 "BGP Routing with > > FRR" and another discussion happening in the Neutron room about BGP. > > > > Would it be possible to move the "BGP Routing with FRR" talk on Tuesday > > to Wednesday? Perhaps a direct swap with the "One yaml to rule all > > tempest tests" discussion that is scheduled for Wednesday 1510-1550? > > Another time on Wednesday could also work. > > > > ACK I just pinged arx (adding him into cc here too) ... once I hear > back from him and if he doesn't have another conflict we can make the > change. > Arx are you OK with the proposed swap? Your session would move to > Tuesday same time. > > Otherwise we can explore something else, > > regards, marios > > > Thanks, > > -- > > Dan Sneddon | Senior Principal Software Engineer > > dsneddon at redhat.com | redhat.com/cloud > > dsneddon:irc | @dxs:twitter > > > > -- Arx Cruz Software Engineer Red Hat EMEA arxcruz at redhat.com @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sunil.kathait at hotmail.com Thu Apr 15 07:27:34 2021 From: sunil.kathait at hotmail.com (Sunil kathait) Date: Thu, 15 Apr 2021 07:27:34 +0000 Subject: Project List Message-ID: hi team, we have several projects created in openstack stein release. we have created all the projects with an additional property (like OU and location) and now we need to list the projects with their OU properties which was set in the projects. openstack project show : This command shows the property field with the value on it. 
openstack project list --long : this only shows ID, Name, description, long, Enabled. How can we list the projects along with their property field value which was set at the time of creation of the project. TIA -------------- next part -------------- An HTML attachment was scrubbed... URL: From eng.taha1928 at gmail.com Thu Apr 15 09:57:40 2021 From: eng.taha1928 at gmail.com (Taha Adel) Date: Thu, 15 Apr 2021 11:57:40 +0200 Subject: [Placement] Weird issue in placement-api Message-ID: Hello, I currently have OpenStack manually deployed by following the official install documentation, but I have faced a weird situation. When I send an api request to placement api service using the following command: *curl -H "X-Auth-Token: $T" http://controller:8778 * I received a status code of "*200*", which indicates a successful operation. But, when I issue the following request: *curl -H "X-Auth-Token: $T" http://controller:8778/resource_providers * I received a status code of "*503*", and when I checked the logs of placement and keystone, they say that the authentication failed. For the same reason, nova-compute can't register itself as a resource provider. I'm sure that the authentication credentials for placement are set properly, but I don't know what's the problem. Any suggestions, please? -------------- next part -------------- An HTML attachment was scrubbed... URL: From manish16054 at gmail.com Thu Apr 15 14:53:17 2021 From: manish16054 at gmail.com (Manish Mahalwal) Date: Thu, 15 Apr 2021 20:23:17 +0530 Subject: dynamic vendor data and cloud-init Message-ID: Hi All, I am working with OpenStack Pike and cloud-init 21.1. I am able to successfully pass dynamic vendor data to the config drive of an instance. However, cloud-init 21.1 just reads all the 'x' bytes of the vendor_data2.json but it doesn't execute the contents of the json. Although, static vendor data works perfectly fine and the YAML file in the JSON is executed as expected by cloud-init 21.1 * Now, the person who wrote the code for handling dynamic vendordata in cloud-init (https://github.com/canonical/cloud-init/pull/777) says that the JSON cloud-init expects is of the form: > {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - > black\nfqdn: cloud-overridden-by-vendordata2.example.org."} > * I believe that the JSON should have another outer key (as mentioned here https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html) which is the name of the microservice specified in nova.conf file and that the inner key should be cloud-init. In nova.conf: vendordata_dynamic_targets=name1@ http://example.com,name2 at http://example2.com { > "name1": { > "cloud-init": "#cloud-config\n..." > }, > "name2": { > "cloud-init": "#cloud-config\n..." > } > } >>Who is right and who is wrong? To read more on this please go through the following: https://bugs.launchpad.net/cloud-init/+bug/1841104 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Apr 15 17:03:29 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 15 Apr 2021 12:03:29 -0500 Subject: [ops][victoria][cinder] Import volume? 
In-Reply-To: <20210415163045.Horde.IKa9Iq6-satTI_sMmUk9Ahq@webmail.nde.ag> References: <0670B960225633449A24709C291A52524FBE2F13@COM01.performair.local> <20210415163045.Horde.IKa9Iq6-satTI_sMmUk9Ahq@webmail.nde.ag> Message-ID: <20210415170329.GA2777639@sm-workstation> On Thu, Apr 15, 2021 at 04:30:45PM +0000, Eugen Block wrote: > Hi, > > there’s a ‚cinder manage‘ command to import an rbd image into openstack. > But be aware that if you delete it in openstack it will be removed from > ceph, too (like a regular cinder volume). > I don’t have the exact command syntax at hand right now, but try ‚cinder > help manage‘ > > Regards > Eugen > Here is the documentation for that command: https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-manage Also note, if you no longer need to manage the volume in Cinder, but you do not want it to be deleted from your storage backend, there is also the inverse command of `cinder unmanage`. Details for that command can be found here: https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-unmanage > > Zitat von DHilsbos at performair.com: > > > All; > > > > I'm looking to transfer several VMs from XenServer to an OpenStack > > Victoria cloud. Finding explanations for importing Glance images is > > easy, but I haven't been able to find a tutorial on importing Cinder > > volumes. > > > > Since they are currently independent servers / volumes it seems somewhat > > wasteful and messy to import each VMs disk as an image just to spawn a > > volume from it. > > > > We're using Ceph as the storage provider for Glance and Cinder. > > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Director - Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > From ricolin at ricolky.com Thu Apr 15 17:16:38 2021 From: ricolin at ricolky.com (Rico Lin) Date: Fri, 16 Apr 2021 01:16:38 +0800 Subject: [Multi-arch][tc][SIG][all] Multi-arch SIG report just published! Message-ID: Dear all We just publish a Multi-arch SIG report to introduce the current Multi-arch status in OpenStack community. You can found the link from superuser [1] or direct access to full report here [2]. I thank anyone who provides their time to any related works. If you also work on related stuff. We would reeeeeeeeeeeeeeeeeeeeally love and wish to learn/hear from you!!! There're more works OpenStack community can do to support multi-arch. But it won't be done fast if we don't have enough resources for it. We currently really need more volunteers, feedbacks, and more CI resources, and we welcome all kinds of help we can get. So if you have any resources/suggestions regarding multi-arch support in OpenStack community, please let us know. If you would like to find us, please join #openstack-multi-arch . Also, as PTG is near, I invite you all to join us in PTG [4]! *Time: 4/20 Tuesday from 07:00-08:00 and 15:00-16:00 (UTC time)* And here is our PTG Etherpad: [3] (feel free to suggest topics). [1] https://superuser.openstack.org/articles/openstack-multi-arch-sig-making-progress-addressing-hardware-diversification-requirements/ [2] https://www.openstack.org/multi-arch-sig-report [3] https://etherpad.opendev.org/p/xena-ptg-multi-arch-sig [4] http://www.openstack.org/ptg *Rico Lin* OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Thu Apr 15 17:31:28 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 15 Apr 2021 20:31:28 +0300 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> Message-ID: Great thank you for confirming I will make the change tomorrow On Thursday, April 15, 2021, Arx Cruz wrote: > Hello, > > Sure, it's fine with me. Sorry the delay, I'm switching ISP, my internet > is terrible today. > > On Thu, Apr 15, 2021 at 5:04 PM Marios Andreou wrote: > >> On Thu, Apr 15, 2021 at 5:32 PM Dan Sneddon wrote: >> > >> > >> > >> > On 4/7/21 9:24 AM, Marios Andreou wrote: >> > > Hello TripleO o/ >> > > >> > > Thanks again to everybody who has volunteered to lead a session for >> > > the coming Xena TripleO project teams gathering. >> > > >> > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 >> > > sessions per day with some breaks. >> > > >> > > Please review the slot assigned for your session at [1]. If that time >> > > is not ok then please let me know as soon as possible and indicate if >> > > you want it later or earlier or on any other day. If you've decided >> > > the session no longer makes sense then also please tell me and we can >> > > move things around accordingly to finish earlier. >> > > >> > > I'd like to finalise the schedule by next Monday 12 April which is a >> > > week before PTG. We can and likely will make changes after this date >> > > but last minute changes are best avoided to allow folks to schedule >> > > their PTG attendance across projects. >> > > >> > > Thanks everybody for your help! Looking forward to interesting >> > > presentations and discussions as always >> > > >> > > regards, marios >> > > >> > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena >> > > >> > > >> > >> > Marios, >> > >> > I have found a conflict between my Tuesday 1510-1550 "BGP Routing with >> > FRR" and another discussion happening in the Neutron room about BGP. >> > >> > Would it be possible to move the "BGP Routing with FRR" talk on Tuesday >> > to Wednesday? Perhaps a direct swap with the "One yaml to rule all >> > tempest tests" discussion that is scheduled for Wednesday 1510-1550? >> > Another time on Wednesday could also work. >> > >> >> ACK I just pinged arx (adding him into cc here too) ... once I hear >> back from him and if he doesn't have another conflict we can make the >> change. >> Arx are you OK with the proposed swap? Your session would move to >> Tuesday same time. >> >> Otherwise we can explore something else, >> >> regards, marios >> >> > Thanks, >> > -- >> > Dan Sneddon | Senior Principal Software Engineer >> > dsneddon at redhat.com | redhat.com/cloud >> > dsneddon:irc | @dxs:twitter >> > >> >> > > -- > > Arx Cruz > > Software Engineer > > Red Hat EMEA > > arxcruz at redhat.com > @RedHat Red Hat > Red Hat > > > -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Thu Apr 15 18:05:38 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Thu, 15 Apr 2021 18:05:38 +0000 Subject: [ops][nova][victoria] Migrate cross CPU? Message-ID: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> All; I seem to have generated another issue for myself... I built our Victoria cloud initially on Intel Atom servers. 
We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. I also can't find an option to pass to the openstack server start command which requests a specific host. Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? Thank you, Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From gouthampravi at gmail.com Thu Apr 15 18:17:06 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 15 Apr 2021 11:17:06 -0700 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: References: Message-ID: On Wed, Apr 7, 2021 at 11:38 AM Goutham Pacha Ravi wrote: > > Hello Zorillas, > > Vida's been our bug czar since the Ussuri release and she's > conceptualized and executed our successful bug triage strategy. She > has also painstakingly organized several documentation and code bug > squash events and kept the pulse on multi-release efforts. She's > taught me a lot about project management and you can see tangible > results here, I suppose :) > > Liron's fixed a lot of test code bugs and covered some old and > important test gaps over the past few releases. He's driving > standardization of the tempest plugin and bringing in best practices > from tempest, refstack and elsewhere into our testing. It's always a > pleasure to work with Liron since he's happy to provide and welcome > feedback. > > More recently, Liron and Vida have enabled us to work with the > InteropWG and define refstack guidelines. They've also gotten us > closer to members from the QA community who they work with more > closely downstream. In short, they bring in different perspectives > while also espousing the team's core values. So I'd like to propose > their addition to the manila-tempest-plugin-core team. > > Please give me your +/- 1s for this proposal. Amazing. Thank you all for your responses. I've added Vida and Liron to the manila-tempest-plugin-core group. > > Thanks, > Goutham From zigo at debian.org Thu Apr 15 19:55:21 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 15 Apr 2021 21:55:21 +0200 Subject: [announce][debian][wallaby] general availability of OpenStack Wallaby in Debian Message-ID: <562a8d99-b9fa-6735-9c35-694c0ef327b7@debian.org> Hi! It's my pleasure to announce the general availability of OpenStack Wallaby in Debian. I've just finished uploading everything to Debian Experimental today (not in unstable, as Bullseye is frozen), and the Bullseye backports are available the usual way, for example using extrepo (which is in the official Debian backports): apt-get install extrepo extrepo enable openstack_wallaby apt-get update ... or directly setting-up the http://bullseye-wallaby.debian.net repository the usual way. 
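For anyone who prefers to configure the repository by hand rather than through extrepo, a rough sketch of what the apt setup could look like - the suite names below simply follow the usual Debian OpenStack backports naming and are an assumption on my part, so please check the repository index first; also note that extrepo takes care of the archive signing key for you, which this sketch does not:

  # /etc/apt/sources.list.d/openstack-wallaby.list (hypothetical file name)
  deb http://bullseye-wallaby.debian.net/debian bullseye-wallaby-backports main
  deb http://bullseye-wallaby.debian.net/debian bullseye-wallaby-backports-nochange main

followed by the usual apt-get update before installing the Wallaby packages.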
Note that while Victoria is available on Buster and Bullseye (to ease the transition), Wallaby is only available on Bullseye, which is due to be released next month (if everything goes as planned, there's no official release date decided yet if I understand well). New in this release =================== Masakari is now packaged. Masakari-dashboard is still waiting on the FTP master NEW queue though. A quick update on openstack-cluster-installer ============================================= Wallaby can already be installed by OCI [1]. Note that OCI is now public-cloud ready, and can deploy specific nodes for the billing (with cloudkitty): - messaging (separate RabbitMQ cluster and Galera for Gnocchi) - billmon / billosd (separate Ceph cluster for Gnocchi) Our tests showed that this setup can scale to 10k+ VMs without any issue (the separate Galera + RabbitMQ bus really helps) reporting 400+ metrics per seconds. As always, OCI is smart enough so the additional nodes are all optional (and not needed for smaller scales), and the cluster reconfigures itself if you decide to add new node types in your deployment. Thanks to new features added to puppet-openstack [2], the number of uwsgi process adapts automatically to the number of cores available in controller nodes. Last, with OCI you may now enjoy a full BGP-to-the-host (over IPv6 un-numbered link local) networking setup. This also works with compute nodes, as long as you decide to not use the DVR mode (if you do with to use DVR, then you need L2 connectivity on the computes: that's a Neutron "feature", unfortunately), or if you decide to use Neutron BGP routed networking [3] (though this mode also still has some limitations at this time, such as no support for virtual router external gateways). In this setup, only the Network nodes need L2 connectivity to the outside world. This also scales very nicely to *a lot* of nodes... without any ARP spanning tree problems. We (at Infomaniak) now only use this deployment mode in production due to its scalability. Final words =========== Please report any issue you may find, on OCI or on the Debian packages. Cheers, Thomas Goirand (zigo) [1] https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer [2] https://review.opendev.org/q/topic:%22debian-uwsgi-support%22+(status:open%20OR%20status:merged) [3] https://docs.openstack.org/neutron/latest/admin/config-bgp-floating-ip-over-l2-segmented-network.html From radoslaw.piliszek at gmail.com Thu Apr 15 20:06:05 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 15 Apr 2021 22:06:05 +0200 Subject: [announce][debian][wallaby] general availability of OpenStack Wallaby in Debian In-Reply-To: <562a8d99-b9fa-6735-9c35-694c0ef327b7@debian.org> References: <562a8d99-b9fa-6735-9c35-694c0ef327b7@debian.org> Message-ID: On Thu, Apr 15, 2021 at 9:56 PM Thomas Goirand wrote: > New in this release > =================== > > Masakari is now packaged. Once again, thank you for packaging Masakari! :-) -yoctozepto From kennelson11 at gmail.com Thu Apr 15 23:56:37 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 15 Apr 2021 16:56:37 -0700 Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results Message-ID: Hello! Please join me in congratulating the 1 newly elected member of the Technical Committee (TC). Amy Marrich (spotz)! 
Full results: https://civs1.civs.us/cgi-bin/results.pl?id=E_69909177d200947c Election process details and results are also available here: https://governance.openstack.org/election/ Thank you to all of the candidates, having a good group of candidates helps engage the community in our democratic process. Thank you to all who voted and who encouraged others to vote. We need to ensure your voice is heard. Thank you for another great round! -Kendall Nelson (diablo_rojo) & the Election Officials -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Apr 16 06:12:47 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:12:47 +0200 Subject: [neutron] Drivers meeting agenda - 16.04.2021 - cancelled In-Reply-To: <6804092.R4j1StIJWZ@p1> References: <6804092.R4j1StIJWZ@p1> Message-ID: <5858367.tOFUnugRee@p1> Hi, Dnia czwartek, 15 kwietnia 2021 15:02:42 CEST Slawek Kaplonski pisze: > Hi, > > Agenda for our tomorrow's drivers meeting is at [1]. We have 1 new RFE to > discuss: > > - https://bugs.launchpad.net/neutron/+bug/1922716 - [RFE] BFD for BGP Dynamic > Routing > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat I got info from Miguel and Brian that they both can't be on today's meeting. Giving that I worry that we may not have a quorum on today's meeting so I think it may be better to cancel it. Next week there is PTG and that RFE https://bugs.launchpad.net/neutron/+bug/ 1922716 is already in the agenda (to be discussed on Tuesday) so we will discuss it there. In the meantime, please spent some time reading that rfe and maybe ask some questions to the owner so we will have as much info as possible before the PTG. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Apr 16 06:14:15 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:14:15 +0200 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: <4135616.GcyNBQpf4Z@p1> Message-ID: <10882430.7SCC9sQoDL@p1> Hi, Dnia czwartek, 15 kwietnia 2021 16:03:23 CEST Kanevsky, Arkady pisze: > Thanks Slawek. > I will check with the team and will get back to you. > For now assume that Friday will work. Sure thing. Thx a lot. Please let me know if You would need to move it to other day/time slot. Also, if it's possible, please add topics which You want to discuss to the etherpad https://etherpad.opendev.org/p/neutron-xena-ptg - our session is under line 163. > Thanks, > Arkady > > -----Original Message----- > From: Slawek Kaplonski > Sent: Wednesday, April 14, 2021 3:43 AM > To: openstack-discuss at lists.openstack.org > Cc: OpenStack Discuss; Kanevsky, Arkady > Subject: Re: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop > > Hi Arkady, > > Dnia poniedziałek, 12 kwietnia 2021 08:21:09 CEST Slawek Kaplonski pisze: > > > Hi, > > > > Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > > > > > Brian, > > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > > > min on > > > > > > PTG meeting to go over Interop testing and any changes for neutron > > tempest > > or > > > > > > > > tempest configuration in Wallaby cycle or changes planned for Xena. 
> > > Once on > > > > > > agenda one of the Interop WG person will attend and lead the discussion. > > > > I just added it to our etherpad > > https://etherpad.opendev.org/p/neutron-xena-ptg > > I will be working on schedule of the sessions later this week and I > > will let You know what timeslot this session with Interop WG will be. > > Please let me know if You have any preferences. We have our sessions > > scheduled: > > > > Monday 1300 - 1600 UTC > > Tuesday 1300 - 1600 UTC > > Thursday 1300 - 1600 UTC > > Friday 1300 - 1600 UTC > > > > Our time slots which are already booked are: > > - Monday 15:00 - 16:00 UTC > > - Thursday 14:00 - 15:30 UTC > > - Friday 14:00 - 15:00 UTC > > > > > > > Thanks, > > > Arkady > > > > > > Arkady Kanevsky, Ph.D. > > > SP Chief Technologist & DE > > > Dell Technologies office of CTO > > > Dell Inc. One Dell Way, MS PS2-91 > > > Round Rock, TX 78682, USA > > > Phone: 512 7204955 > > > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > I scheduled session with Interop WG for Friday 13:30 - 14:00 UTC. > Please let me know if that isn't good time slot for You. > Please also add topics which You want to discuss to our etherpad https:// etherpad.opendev.org/p/neutron-xena-ptg > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Apr 16 06:41:58 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:41:58 +0200 Subject: [neutron] Team meeting - Tuesday 20.04.2021 Message-ID: <22887068.HZfpmljPZv@p1> Hi, As we have PTG, let's cancel next week's team meeting. See You on the PTG sessions. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Apr 16 06:42:30 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:42:30 +0200 Subject: [neutron] CI meeting - Tuesday 20.04.2021 Message-ID: <6147595.Qk4cETbaLc@p1> Hi, As we have PTG, let's cancel next week's CI meeting. See You on the PTG sessions. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Apr 16 06:44:13 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:44:13 +0200 Subject: [neutron] Drivers meeting - Friday 23.04.2021 cancelled Message-ID: <8794951.JCYDN9oZMe@p1> Hi, As we have PTG, let's cancel next week's drivers meeting. See You on the PTG sessions. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From skaplons at redhat.com Fri Apr 16 10:55:29 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 12:55:29 +0200 Subject: [neutron][all] PTG session about OVN as default backend in Devstack Message-ID: <32241215.rSAhrytHMa@p1> Hi, We discussed this topic couple of times in the Neutron team and with wider community also. And now we really feel like it is good time to pull the trigger and switch default Neutron backend in Devstack from ML2/OVS to ML2/ OVN. Lucas already prepared patches for that and all should be already in goo shape. But before we will do that, we want to have PTG session about it. It is scheduled to be on Thursday 22nd of April at 13:00 UTC time in the Neutron session. We want to give some short summary of current status of this but also we would like to do something like "AMA" about it for people from other projects. So if You have any questions/concerns about that, please go to that session on Thursday to discuss that with us. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From smooney at redhat.com Fri Apr 16 13:27:32 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 16 Apr 2021 14:27:32 +0100 Subject: [ops][nova][victoria] Migrate cross CPU? In-Reply-To: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> Message-ID: On 15/04/2021 19:05, DHilsbos at performair.com wrote: > All; > > I seem to have generated another issue for myself... > > I built our Victoria cloud initially on Intel Atom servers. We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. > > I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. > > My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. > > I also can't find an option to pass to the openstack server start command which requests a specific host. > > Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? you should be able to cold migrate them using the migrate command but that should put the servers into resize_verify and then you need to confirm the migration to complte it. we will not clean up the vm on the source node until you do that last step. > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From nate.johnston at redhat.com Fri Apr 16 14:23:02 2021 From: nate.johnston at redhat.com (Nate Johnston) Date: Fri, 16 Apr 2021 10:23:02 -0400 Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results In-Reply-To: References: Message-ID: <20210416142302.7sewz2ppmr45gp7c@grind.home> Congratulations Amy! Nate On Thu, Apr 15, 2021 at 04:56:37PM -0700, Kendall Nelson wrote: > Hello! > > Please join me in congratulating the 1 newly elected member of the > Technical Committee (TC). 
> > Amy Marrich (spotz)! > > Full results: https://civs1.civs.us/cgi-bin/results.pl?id=E_69909177d200947c > > Election process details and results are also available here: > https://governance.openstack.org/election/ > > Thank you to all of the candidates, having a good group of candidates helps > engage the community in our democratic process. > > Thank you to all who voted and who encouraged others to vote. We need to > ensure your voice is heard. > > Thank you for another great round! > > -Kendall Nelson (diablo_rojo) & the Election Officials From smooney at redhat.com Fri Apr 16 14:30:22 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 16 Apr 2021 15:30:22 +0100 Subject: dynamic vendor data and cloud-init In-Reply-To: References: Message-ID: On 15/04/2021 15:53, Manish Mahalwal wrote: > Hi All, > > I am working with OpenStack Pike and cloud-init 21.1. I am able to > successfully pass dynamic vendor data to the config drive of an > instance. However, cloud-init 21.1 just reads all the 'x' bytes of the > vendor_data2.json but it doesn't execute the contents of the json. > Although, static vendor data works perfectly fine and the YAML file in > the JSON is executed as expected by cloud-init 21.1 > > * Now, the person who wrote the code for handling dynamic vendordata > in cloud-init (https://github.com/canonical/cloud-init/pull/777 > ) says that the JSON > cloud-init expects is of the form: > > {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n > - black\nfqdn: cloud-overridden-by-vendordata2.example.org."} > > the reference implementation for the dynamic vendor data  backend was https://github.com/mikalstill/vendordata and it was a feature developed specificaly for rackspace. the data format that service should return is # { # "hostname": "foo", # "image-id": "75a74383-f276-4774-8074-8c4e3ff2ca64", # "instance-id": "2ae914e9-f5ab-44ce-b2a2-dcf8373d899d", # "metadata": {}, # "project-id": "039d104b7a5c4631b4ba6524d0b9e981", # "user-data": null # } # An example of this data: https://github.com/mikalstill/vendordata/blob/master/app.py#L34-L42 this blog post explains how it should work https://www.madebymikal.com/nova-vendordata-deployment-an-excessively-detailed-guide/ > * I believe that the JSON should have another outer key (as mentioned > here > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html > ) > which is the name of the microservice specified in nova.conf file and > that the inner key should be cloud-init. > > In nova.conf: > vendordata_dynamic_targets=name1 at http://example.com,name2 at http://example2.com > > > { >     "name1": { >  "cloud-init": "#cloud-config\n..." >     }, >     "name2": { >  "cloud-init": "#cloud-config\n..." >     } > } > > > > > >>Who is right and who is wrong? > > To read more on this please go through the following: > https://bugs.launchpad.net/cloud-init/+bug/1841104 > > From Arkady.Kanevsky at dell.com Fri Apr 16 14:33:19 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 16 Apr 2021 14:33:19 +0000 Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results In-Reply-To: References: Message-ID: Hurray to Amy. From: Kendall Nelson Sent: Thursday, April 15, 2021 6:57 PM To: OpenStack Discuss Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results [EXTERNAL EMAIL] Hello! Please join me in congratulating the 1 newly elected member of the Technical Committee (TC). Amy Marrich (spotz)! 
Full results: https://civs1.civs.us/cgi-bin/results.pl?id=E_69909177d200947c [civs1.civs.us] Election process details and results are also available here: https://governance.openstack.org/election/ [governance.openstack.org] Thank you to all of the candidates, having a good group of candidates helps engage the community in our democratic process. Thank you to all who voted and who encouraged others to vote. We need to ensure your voice is heard. Thank you for another great round! -Kendall Nelson (diablo_rojo) & the Election Officials -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Fri Apr 16 15:22:35 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Fri, 16 Apr 2021 17:22:35 +0200 Subject: [kolla-ansible][horizon][keystone] policy and token In-Reply-To: References: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> Message-ID: <076D9347-4CF7-4588-97A9-3A960E45537F@poczta.onet.pl> Hi again, After some struggling I modified policies so most of them works fine. But I have problem with identity: create_user and identity: create_group. In the case of create group I can do it from Horizon (domain_admin user), but I can’t do it from CLI (with command Openstack group create —domain 3a08xxxx82c1 SOME_GROUP_NAME) and I was wondering why. After analyzing logs it turned out, that tokens from Horizon and CLI are different! The one from CLI does not contain domain_id (which I specify from CLI???), while the one from Horizon contains it, and there is a match for policy rules. Token from CLI: DEBUG keystone.server.flask.request_processing.middleware.auth_context [req-b00bccae-c3d2-4a53-a8e2-bd9b0bbdfd84 9adbxxxx02ef 61d4xxxx9c0f <- user default Project_ID here - 3a08xxxxb82c1 3a08xxxx82c1] RBAC: auth_context: {'token': , 'domain_id': None, <- no domain_id 'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_name': None, <- no domain name 'group_ids': [], 'user_id': '9adbxxxx02ef', 'user_domain_id': '3a08xxxx82c1', 'system_scope': None, 'project_id': '61d4xxxx9c0f',<- user default Project_ID here 'project_domain_id': '3a08xxxx82c1’, <- default user project domain_id 'roles': ['reader', 'member', 'project_admin', 'domain_admin'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} fill_context /var/lib/kolla/venv/lib/python3.8/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py:478 Token from Horizon: DEBUG keystone.server.flask.request_processing.middleware.auth_context [req-aeeec218-fe13-4048-98a7-3240df0dacae 9adbxxxxb02ef <- no user default Project_ID here - 3a08xxxx82c1 3a08xxxx82c1 -] RBAC: auth_context: {'token': , 'domain_id': '3a08xxxx82c1’, <- domain_id 'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_name': ’some_domain’, <- domain name 'group_ids': [], 'user_id': '9adbxxxx02ef', 'user_domain_id': '3a08xxxx82c1', 'system_scope': None, 'project_id': None,<- no user default Project_ID here 'project_domain_id': None,<- default user project domain_id 'roles': ['member', 'domain_admin', 'project_admin', 'reader'], 'is_admin_project': False, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} fill_context /var/lib/kolla/venv/lib/python3.8/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py:478 The policy rules: "identity:create_group": 
"rule:cloud_admin or rule:admin_and_matching_target_group_domain_id”, "admin_and_matching_target_group_domain_id": "rule:admin_required and domain_id:%(target.group.domain_id)s”, "admin_required": "role:admin or role:domain_admin or role:project_admin", CLI user openrc file: export OS_AUTH_URL=http://some-fancy-url:5000 export OS_PROJECT_ID=61d4xxxx9c0f export OS_PROJECT_NAME=„some_project_name" export OS_USER_DOMAIN_NAME=„some_domain" if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi export OS_PROJECT_DOMAIN_ID="3a08xxxx82c1" if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi unset OS_TENANT_ID unset OS_TENANT_NAME export OS_USERNAME=„some_user" echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: " read -sr OS_PASSWORD_INPUT export OS_PASSWORD=$OS_PASSWORD_INPUT export OS_REGION_NAME="RegionOne" if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi export OS_INTERFACE=public export OS_IDENTITY_API_VERSION=3 How to put domain_id into CLI token if —domain xxxxx doesn’t do that? The same situation is with create_user. And the best part - ofcource cloud_admin=admin is able to do both, because he don’t need to be checked against domain_id. Ofcourse there is also some kind of a bug, that prevents displaying „Create user” button in the horizon interface, but when you eneter direct link (…/users/create) you can create user. After some struggling with horizon (as suggested here: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/575272/1/templates/mitaka/keystonev3_policy.json#b38 ) „create group” button showed up, but not "create user” - not even for admin user… What’s wrong?? Best regards Adam > Wiadomość napisana przez Mark Goddard w dniu 30.03.2021, o godz. 12:51: > > On Tue, 30 Mar 2021 at 10:52, Adam Tomas wrote: >> >> >> Without any custom policies when I look inside the horizon container I see (in /etc/openstack-dashboard) current/default policies. If I override (for example keystone_policy.json) with a file placed in /etc/kolla/config/horizon which contains only 3 rules, then after kolla-ansible reconfigure inside horizon container there is of course keystone_police.json file, but only with my 3 rules - should I assume, that previously seen default rules (other than the ones overridden by my rules) still works, whether I see them in the file or not? -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Fri Apr 16 16:04:38 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 16 Apr 2021 16:04:38 +0000 Subject: [ops][nova][victoria] Migrate cross CPU? In-Reply-To: References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> Message-ID: <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> Sean; Thank you very much for your response. I wasn't aware of the state change to resize_verify, that's useful. Unfortunately, at present, the state change is not occurring. 
Here's a series of commands, with output: #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 +-------------------------------------+----------------------------------------------------------+ | Field | Value | +-------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-elcom-1 | | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | | OS-EXT-STS:power_state | Shutdown | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | stopped | | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | it-network=10.255.127.208, 10.0.160.35 | | config_drive | | | created | 2021-03-06T04:35:51Z | | flavor | m4.large (8) | | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | | image | N/A (booted from volume) | | key_name | None | | name | Java Dev | | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | | properties | | | security_groups | name='allow-ping' | | | name='allow-ssh' | | | name='default' | | status | SHUTOFF | | updated | 2021-04-16T15:52:07Z | | user_id | 69b73ea8f55c46a99021e77ebf70b62a | | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | +-------------------------------------+----------------------------------------------------------+ #openstack server migrate --host s700066.463.os.mcgown.enterprises --os-compute-api-version 2.56 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 +-------------------------------------+----------------------------------------------------------+ | Field | Value | +-------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-elcom-1 | | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | | OS-EXT-STS:power_state | Shutdown | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | stopped | | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | it-network=10.255.127.208, 10.0.160.35 | | config_drive | | | created | 2021-03-06T04:35:51Z | | flavor | m4.large (8) | | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | | image | N/A (booted from volume) | | key_name | None | | name | Java Dev | | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | | properties | | | security_groups | name='allow-ping' | | | name='allow-ssh' | | | name='default' | | status | SHUTOFF | | updated | 2021-04-16T15:53:32Z | | user_id | 69b73ea8f55c46a99021e77ebf70b62a | | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | +-------------------------------------+----------------------------------------------------------+ #tail /var/log/nova/nova-conductor.log #tail /var/log/nova/nova-scheduler.log 2021-04-16 08:53:24.870 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter only checking host s700066.463.os.mcgown.enterprises and 
node s700066.463.os.mcgown.enterprises 2021-04-16 08:53:24.871 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter ignoring hosts: Both Cinder volume storage, and ephemeral storage are being handled by Ceph. Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Sean Mooney [mailto:smooney at redhat.com] Sent: Friday, April 16, 2021 6:28 AM To: openstack-discuss at lists.openstack.org Subject: Re: [ops][nova][victoria] Migrate cross CPU? On 15/04/2021 19:05, DHilsbos at performair.com wrote: > All; > > I seem to have generated another issue for myself... > > I built our Victoria cloud initially on Intel Atom servers. We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. > > I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. > > My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. > > I also can't find an option to pass to the openstack server start command which requests a specific host. > > Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? you should be able to cold migrate them using the migrate command but that should put the servers into resize_verify and then you need to confirm the migration to complte it. we will not clean up the vm on the source node until you do that last step. > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From jay.faulkner at verizonmedia.com Fri Apr 16 16:26:19 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Fri, 16 Apr 2021 09:26:19 -0700 Subject: [ironic] Ironic Whiteboard v2 call for reviews Message-ID: Hi all, Iury and I spent some time this morning updating the Ironic whiteboard etherpad to include more immediately useful information to contributors. We placed this updated whiteboard at https://etherpad.opendev.org/p/IronicWhiteBoardv2 -- our approach was to prune any outdated/broken links or information, and focus on making the first part of the whiteboard an easy one-click place for folks to see easy ways to contribute. All the rest of the information was carried over and reformatted. Once there is consensus from the team about this being a positive change, we should either replace the existing IronicWhiteBoard with the contents of the v2 page, or just update links to point to the new one instead. What do you all think? Thanks, Jay Faulkner -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Apr 16 16:57:41 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 16 Apr 2021 17:57:41 +0100 Subject: [ops][nova][victoria] Migrate cross CPU? 
In-Reply-To: <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> Message-ID: <5bed6419-6c85-b39f-1226-cc517fe911de@redhat.com> hum ok, the best way to debug this is to list the server events and get the request id for the migration. it may be req-ff109e53-74e0-40de-8ec7-29aff600b5f7 based on the logs you posted, but you should see more info in the api, conductor and compute logs for that request id. given the state has not changed, i suspect it failed rather early. it's possible that you are experiencing an issue with the rabbitmq service and rpc calls are being lost, but i would not expect to see logs related to this in the scheduler while the vm is still in the SHUTOFF status. can you do "openstack server event list 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3" then get the most recent resize event's request id and see if there are any other logs. regards, sean. (note i think it will be listed as a resize not a migrate since internally migrate is implemented as resize but to the same flavour). On 16/04/2021 17:04, DHilsbos at performair.com wrote: > Sean; > > Thank you very much for your response. I wasn't aware of the state change to resize_verify, that's useful. > > Unfortunately, at present, the state change is not occurring. > > Here's a series of commands, with output: > > #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 > +-------------------------------------+----------------------------------------------------------+ > | Field | Value | > +-------------------------------------+----------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | az-elcom-1 | > | OS-EXT-SRV-ATTR:host |
s700030.463.os.mcgown.enterprises | > | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | > | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | > | OS-EXT-STS:power_state | Shutdown | > | OS-EXT-STS:task_state | None | > | OS-EXT-STS:vm_state | stopped | > | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | > | OS-SRV-USG:terminated_at | None | > | accessIPv4 | | > | accessIPv6 | | > | addresses | it-network=10.255.127.208, 10.0.160.35 | > | config_drive | | > | created | 2021-03-06T04:35:51Z | > | flavor | m4.large (8) | > | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | > | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | > | image | N/A (booted from volume) | > | key_name | None | > | name | Java Dev | > | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | > | properties | | > | security_groups | name='allow-ping' | > | | name='allow-ssh' | > | | name='default' | > | status | SHUTOFF | > | updated | 2021-04-16T15:53:32Z | > | user_id | 69b73ea8f55c46a99021e77ebf70b62a | > | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | > +-------------------------------------+----------------------------------------------------------+ > #tail /var/log/nova/nova-conductor.log > #tail /var/log/nova/nova-scheduler.log > 2021-04-16 08:53:24.870 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter only checking host s700066.463.os.mcgown.enterprises and node s700066.463.os.mcgown.enterprises > 2021-04-16 08:53:24.871 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter ignoring hosts: > > Both Cinder volume storage, and ephemeral storage are being handled by Ceph. > > Thank you, > > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -----Original Message----- > From: Sean Mooney [mailto:smooney at redhat.com] > Sent: Friday, April 16, 2021 6:28 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][victoria] Migrate cross CPU? > > > > On 15/04/2021 19:05, DHilsbos at performair.com wrote: >> All; >> >> I seem to have generated another issue for myself... >> >> I built our Victoria cloud initially on Intel Atom servers. We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. >> >> I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. >> >> My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. >> >> I also can't find an option to pass to the openstack server start command which requests a specific host. >> >> Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? > you should be able to cold migrate them using the migrate command but > that should put the servers into resize_verify and then you need > to confirm the migration to complte it. 
we will not clean up the vm on > the source node until you do that last step. > >> Thank you, >> >> Dominic L. Hilsbos, MBA >> Director - Information Technology >> Perform Air International Inc. >> DHilsbos at PerformAir.com >> www.PerformAir.com >> >> >> > From amy at demarco.com Fri Apr 16 17:02:18 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 16 Apr 2021 12:02:18 -0500 Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results In-Reply-To: References: Message-ID: Thanks everyone! Amy (spotz) On Fri, Apr 16, 2021 at 9:36 AM Kanevsky, Arkady wrote: > Hurray to Amy. > > > > *From:* Kendall Nelson > *Sent:* Thursday, April 15, 2021 6:57 PM > *To:* OpenStack Discuss > *Subject:* [all][elections][tc] Technical Committee April 2021 Special > Election Results > > > > [EXTERNAL EMAIL] > > Hello! > > > Please join me in congratulating the 1 newly elected member of the > Technical Committee (TC). > > Amy Marrich (spotz)! > > Full results: https://civs1.civs.us/cgi-bin/results.pl?id=E_69909177d200947c > [civs1.civs.us] > > > Election process details and results are also available here: https://governance.openstack.org/election/ > [governance.openstack.org] > > > Thank you to all of the candidates, having a good group of candidates > helps engage the community in our democratic process. > > Thank you to all who voted and who encouraged others to vote. We need to > ensure your voice is heard. > > Thank you for another great round! > > > > -Kendall Nelson (diablo_rojo) & the Election Officials > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Apr 16 17:07:52 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 16 Apr 2021 12:07:52 -0500 Subject: [Diversity] D&I WG Office Hours at the PTG Message-ID: The Diversity and Inclusion WG has grabbed our usual PTG slot Monday at 16:00UTC and we will be holding office hours to help any project with Inclusive Naming or other D&I related topics. If you'd like us to attend any of your sessions please let me know. If you've been interested in learning more about the WG please come join us! Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Apr 16 17:28:02 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 16 Apr 2021 10:28:02 -0700 Subject: [PTG][StoryBoard][infra][OpenDev] StoryBoard PTG Planning Message-ID: Hello! I know it's last minute, but we decided to book an hour on Tuesday to meet from 16-17 UTC in the Ocata room. If you have topics to bring to us, please add them to the etherpad[1]. If you just want to come and say hi, please do! If we have extra time we will triage and cleanup the StoryBoard backlog. -Kendall (diablo_rojo) [1] https://etherpad.opendev.org/p/apr2021-ptg-storyboard -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Fri Apr 16 17:40:08 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 16 Apr 2021 19:40:08 +0200 Subject: [ptl][release][stable][EM] Extended Maintenance - Train Message-ID: <07a09dbb-4baa-1d22-5605-a636b0f55fbc@est.tech> Hi, As Wallaby was released the day before yesterday and we are in a less busy period, it is a good opportunity to call your attention to the following: In less than a month Train is planned to transition to Extended Maintenance phase [1] (planned date: 2021-05-12). 
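For teams that want to double-check a particular repository themselves, a rough sketch of how to spot what is still open or unreleased on the branch (the repository name below is only an example - substitute your own):

  # open reviews against stable/train in Gerrit
  https://review.opendev.org/q/project:openstack/nova+branch:stable/train+status:open

  # commits merged since the most recent tag reachable from stable/train
  git fetch origin
  git log --oneline $(git describe --tags --abbrev=0 origin/stable/train)..origin/stable/train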
I have generated the list of the current *open* and *unreleased* changes in stable/train for the follows-policy tagged repositories [2] (where there are such patches). These lists could help the teams who are planning to do a *final* release on Train before moving stable/train branches to Extended Maintenance. Feel free to edit and extend these lists to track your progress! * At the transition date the Release Team will tag the*latest* (Train) *releases* of repositories with *train-em* tag. * After the transition stable/train will be still open for bug fixes, but there won't be any official releases. NOTE: teams, please focus on wrapping up your libraries first if there is any concern about the changes, in order to avoid broken releases! Thanks, Előd [1] https://releases.openstack.org/ [2] https://etherpad.opendev.org/p/train-final-release-before-em From johnsomor at gmail.com Fri Apr 16 20:19:31 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 16 Apr 2021 13:19:31 -0700 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Arkady, I have put you down for 14:30 UTC. Michael On Thu, Apr 15, 2021 at 7:18 AM Kanevsky, Arkady wrote: > > Thanks Michael. Interop team will have a rep there. > If you can schedule us at 14:00 UTC or 14:30, or 14:45 that will be the best. I think 15 min will be enough. > Thanks, > Arkady > > -----Original Message----- > From: Michael Johnson > Sent: Monday, April 12, 2021 10:57 AM > To: Kanevsky, Arkady > Cc: OpenStack Discuss > Subject: Re: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop > > > [EXTERNAL EMAIL] > > Hi Arkady, > > I have added Interop to the Designate topics list (https://urldefense.com/v3/__https://etherpad.opendev.org/p/xena-ptg-designate__;!!LpKI!yXIFUxciVfW5bKHaFIxjMmhoQrGASnWQVIz9UZY3oXExCpXgnM52TrpaajTFMP1HP3fc$ [etherpad[.]opendev[.]org]) and will schedule a slot this week when I put a rough agenda together. > > Thanks, > Michael > > On Sun, Apr 11, 2021 at 1:36 PM Kanevsky, Arkady wrote: > > > > Adding comminuty > > > > > > > > From: Kanevsky, Arkady > > Sent: Sunday, April 11, 2021 3:25 PM > > To: 'johnsomor at gmail.com' > > Subject: [Designate][Interop] request for 15-30 min on Xena PTG for > > Interop > > > > > > > > John, > > > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Dsignate tempest or tempest configuration in Wallaby cycle or changes planned for Xena. > > > > Once on agenda one of the Interop WG person will attend and lead the discussion. > > > > > > > > Thanks, > > > > > > > > > > > > Arkady Kanevsky, Ph.D. > > > > SP Chief Technologist & DE > > > > Dell Technologies office of CTO > > > > Dell Inc. One Dell Way, MS PS2-91 > > > > Round Rock, TX 78682, USA > > > > Phone: 512 7204955 > > > > From gmann at ghanshyammann.com Fri Apr 16 21:24:29 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 16 Apr 2021 16:24:29 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 16th April, 21: Reading: 5 min Message-ID: <178dc9228ad.cf85ab74158590.1567674469339432611@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. What we completed this week: ========================= Project updates: ------------------- ** devstack-plugin-ceph is branched now[1]. Other updates: ------------------ ** TC one vacant seat election is completed now. 
We have a new TC member Amy Marrich (spotz). Also thanks to Feilong Wang (flwang) for participating and showing interest in TC[2]. TC liaisons assignments for Xena cycle --------------------------------------------- * This is to have two TC members assigned as liaisons for each project team. * I generated the auto assignments using the script on top of already assigned projects[3]. 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-04-15-15.00.log.html * We will skip next week's meeting due to PTG and have our next meeting on April 29th, Thursday 15:00 UTC[4]. 3. Activities In progress: ================== Open Reviews ----------------- * No open reviews this week[5]. This is good progress by TC. Gate performance and heavy job configs ------------------------------------------------ * dansmith fixed the devstack async mode related bash bug related to children[6] * Workarounds for making stackviz not to fail jobs are merged in all stable branches[7]. * Cinder failures are still happening and the Cinder team is in progress to fix those. PTG ----- TC is planning to meet in PTG for Thursday 2 hrs and Friday 4 hrs, details are in etherpad[8], feel free to add topic you would like to discuss with TC in PTG. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[9]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [10] 3. Office hours: The Technical Committee offers two office hours per week in #openstack-tc [11]: * Tuesday at 0100 UTC * Wednesday at 1500 UTC 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://review.opendev.org/c/openstack/governance/+/786067 [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021869.html [3] https://governance.openstack.org/tc/reference/tc-liaisons.html [4] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [5] https://review.opendev.org/q/project:openstack/governance+status:open [6] https://review.opendev.org/c/openstack/devstack/+/786330 [7] https://review.opendev.org/q/Ifee04f28ecee52e74803f1623aba5cfe5ee5ec90 [8] https://etherpad.opendev.org/p/tc-xena-ptg [9] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [10] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [11] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From Arkady.Kanevsky at dell.com Fri Apr 16 21:54:48 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 16 Apr 2021 21:54:48 +0000 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Thanks Michael. -----Original Message----- From: Michael Johnson Sent: Friday, April 16, 2021 3:20 PM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] Arkady, I have put you down for 14:30 UTC. Michael On Thu, Apr 15, 2021 at 7:18 AM Kanevsky, Arkady wrote: > > Thanks Michael. Interop team will have a rep there. > If you can schedule us at 14:00 UTC or 14:30, or 14:45 that will be the best. I think 15 min will be enough. 
> Thanks, > Arkady > > -----Original Message----- > From: Michael Johnson > Sent: Monday, April 12, 2021 10:57 AM > To: Kanevsky, Arkady > Cc: OpenStack Discuss > Subject: Re: FW: [Designate][Interop] request for 15-30 min on Xena > PTG for Interop > > > [EXTERNAL EMAIL] > > Hi Arkady, > > I have added Interop to the Designate topics list (https://urldefense.com/v3/__https://etherpad.opendev.org/p/xena-ptg-designate__;!!LpKI!yXIFUxciVfW5bKHaFIxjMmhoQrGASnWQVIz9UZY3oXExCpXgnM52TrpaajTFMP1HP3fc$ [etherpad[.]opendev[.]org]) and will schedule a slot this week when I put a rough agenda together. > > Thanks, > Michael > > On Sun, Apr 11, 2021 at 1:36 PM Kanevsky, Arkady wrote: > > > > Adding comminuty > > > > > > > > From: Kanevsky, Arkady > > Sent: Sunday, April 11, 2021 3:25 PM > > To: 'johnsomor at gmail.com' > > Subject: [Designate][Interop] request for 15-30 min on Xena PTG for > > Interop > > > > > > > > John, > > > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Dsignate tempest or tempest configuration in Wallaby cycle or changes planned for Xena. > > > > Once on agenda one of the Interop WG person will attend and lead the discussion. > > > > > > > > Thanks, > > > > > > > > > > > > Arkady Kanevsky, Ph.D. > > > > SP Chief Technologist & DE > > > > Dell Technologies office of CTO > > > > Dell Inc. One Dell Way, MS PS2-91 > > > > Round Rock, TX 78682, USA > > > > Phone: 512 7204955 > > > > From Arkady.Kanevsky at dell.com Fri Apr 16 21:57:41 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 16 Apr 2021 21:57:41 +0000 Subject: [Swift][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Repeating the request. Where is Swift Xena PTG etherpad? From: Kanevsky, Arkady Sent: Sunday, April 11, 2021 3:37 PM To: tburke at nvidia.com Cc: OpenStack Discuss Subject: [Swift][Interop] request for 15-30 min on Xena PTG for Interop Tim, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Swift tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Apr 16 21:59:45 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 16 Apr 2021 21:59:45 +0000 Subject: [Keystone][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Repeating the request. Where is Keystone Etherpad for Xena PTG? From: Kanevsky, Arkady Sent: Sunday, April 11, 2021 3:30 PM To: knikolla at bu.edu Cc: OpenStack Discuss Subject: [Keystone][Interop] request for 15-30 min on Xena PTG for Interop Kristi, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Keystone tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. 
One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Apr 16 22:01:29 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 16 Apr 2021 22:01:29 +0000 Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Rico, Repeating the request. Where is Heat etherpad for Xena PTG agenda? Thanks, Arkady From: Kanevsky, Arkady Sent: Sunday, April 11, 2021 3:28 PM To: ricolin at ricolky.com Cc: OpenStack Discuss Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop Rico, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Heat tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Apr 16 22:12:36 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 16 Apr 2021 22:12:36 +0000 Subject: [Interop] No meeting Friday 4/23/2021 - PTG week. Message-ID: Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From martialmichel at datamachines.io Fri Apr 16 22:13:04 2021 From: martialmichel at datamachines.io (Martial Michel) Date: Fri, 16 Apr 2021 18:13:04 -0400 Subject: [Scientific] Scientific SIG meetings during the OpenStack PTG next week Message-ID: The Scientific SIG will have two meetings next week during the PTG. Details on those meetings are as follows: Session 1 - Cactus room - April 21st - 14:00-15:00 UTC Main session, topic discussion (note we only have one hour) Session 2 - Cactus room - April 21st - 21:00-22:00 UTC Lightning Talks: Bring a LT on something you've been doing or would like to present (10 minutes per talk, including questions. Note we only have one hour, so strict timekeeping will have to be enforced ) As a reminder the Scientific SIG has a Slack. Please contact me directly (martialmichel at datamachines.io) if you want to join our slack or our meeting and need joining information. Thank you and looking forward to seeing a few stackers next week -- Martial -------------- next part -------------- An HTML attachment was scrubbed... URL: From martialmichel at datamachines.io Fri Apr 16 22:13:04 2021 From: martialmichel at datamachines.io (Martial Michel) Date: Fri, 16 Apr 2021 18:13:04 -0400 Subject: [Scientific] Scientific SIG meetings during the OpenStack PTG next week Message-ID: The Scientific SIG will have two meetings next week during the PTG. Details on those meetings are as follows: Session 1 - Cactus room - April 21st - 14:00-15:00 UTC Main session, topic discussion (note we only have one hour) Session 2 - Cactus room - April 21st - 21:00-22:00 UTC Lightning Talks: Bring a LT on something you've been doing or would like to present (10 minutes per talk, including questions. Note we only have one hour, so strict timekeeping will have to be enforced ) As a reminder the Scientific SIG has a Slack. 
Please contact me directly (martialmichel at datamachines.io) if you want to join our slack or our meeting and need joining information. Thank you and looking forward to seeing a few stackers next week -- Martial -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmeng at uvic.ca Fri Apr 16 22:56:24 2021 From: dmeng at uvic.ca (dmeng) Date: Fri, 16 Apr 2021 15:56:24 -0700 Subject: [sdk]: compute service create_server method, how to create multiple servers Message-ID: <20bd2d0dd9ed5013919e036df2576cca@uvic.ca> Hello there, Hope this email finds you well. We are currently using the openstacksdk for developing our product, and have a question about the openstacksdk compute service create_server() method. We are wondering if the "max_count" and "min_count" are supported by openstacksdk for creating multiple servers at once. I tried both the max_count and the min_count, and they both only create one server for me, but I'd like to create multiple servers at once. The code I'm using is like the following:

conn = connection.Connection(
    session=sess,
    region_name=None,
    compute_api_version='2')
nova = conn.compute
nova.create_server(
    name='sdk-test-create',
    image_id=image_id,
    flavor_id=flavor_id,
    key_name=my_key_name,
    networks=[{"uuid": network_id}],
    security_groups=[{'name':security_group_name}],
    min_count=3,
)

The above code will create one server "sdk-test-create", but I'm assuming it should create three. Wondering if I missed anything, or if we have any other option to achieve this? Thanks for your help and have a great day! Catherine -------------- next part -------------- An HTML attachment was scrubbed... URL: From pangliye at inspur.com Sat Apr 17 05:58:16 2021 From: pangliye at inspur.com (=?gb2312?B?TGl5ZSBQYW5nKOXMwaLStSk=?=) Date: Sat, 17 Apr 2021 05:58:16 +0000 Subject: [venus] Xena PTG schedule Message-ID: Hi: I prepared an agenda for our PTG meeting, which mainly introduces the current progress of venus, and it is available in our etherpad [1]. The time slot is April 22nd @ 13:00 - 14:00 UTC[2] Also, if you have any other topics to discuss, please let me know so I can include them in the agenda. Looking forward to your participation in the discussion. [1] https://etherpad.opendev.org/p/apr2021-ptg-venus [2] http://ptg.openstack.org/ptg.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3786 bytes Desc: not available URL: From ricolin at ricolky.com Sat Apr 17 12:02:07 2021 From: ricolin at ricolky.com (Rico Lin) Date: Sat, 17 Apr 2021 20:02:07 +0800 Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Hi there, Our etherpad https://etherpad.opendev.org/p/xena-ptg-heat You can also find our scheduled time there Kanevsky, Arkady 於 2021年4月17日 週六,上午6:01寫道: > Rico, > > Repeating the request. > > Where is Heat etherpad for Xena PTG agenda? > > Thanks, > > Arkady > > > > *From:* Kanevsky, Arkady > *Sent:* Sunday, April 11, 2021 3:28 PM > *To:* ricolin at ricolky.com > *Cc:* OpenStack Discuss > *Subject:* [Heat][Interop] request for 15-30 min on Xena PTG for Interop > > > > Rico, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on > PTG meeting to go over Interop testing and any changes for Heat tempest or > tempest configuration in Wallaby cycle or changes planned for Xena.
> > Once on agenda one of the Interop WG person will attend and lead the > discussion. > > > > Thanks, > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > -- *Rico Lin* OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sat Apr 17 17:10:39 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sat, 17 Apr 2021 17:10:39 +0000 Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Thanks Rico. I had added topic for interlock with Interop WG. Time is 14:45-15:00 UTC. Thanks, Arkady From: Rico Lin Sent: Saturday, April 17, 2021 7:02 AM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: [Heat][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] Hi there, Our etherpad https://etherpad.opendev.org/p/xena-ptg-heat [etherpad.opendev.org] You can also found our scheduled time there Kanevsky, Arkady >於 2021年4月17日 週六,上午6:01寫道: Rico, Repeating the request. Where is Heat etherpad for Xena PTG agenda? Thanks, Arkady From: Kanevsky, Arkady Sent: Sunday, April 11, 2021 3:28 PM To: ricolin at ricolky.com Cc: OpenStack Discuss Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop Rico, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Heat tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -- Rico Lin OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Sat Apr 17 20:54:07 2021 From: melwittt at gmail.com (melanie witt) Date: Sat, 17 Apr 2021 13:54:07 -0700 Subject: [Placement] Weird issue in placement-api In-Reply-To: References: Message-ID: <708dca1f-3ff0-f2f7-12b0-6d594663e545@gmail.com> On 4/15/21 02:57, Taha Adel wrote: > Hello, > > I currently have OpenStack manually deployed by following the official > install documentation, but I have faced a weird situation. When I send > an api request to placement api service using the following command: > > *curl -H "X-Auth-Token: $T" http://controller:8778 * > > I received a status code of "*200*", which indicates a successful > operation. But, when I issue the following request: > > *curl -H "X-Auth-Token: $T" http://controller:8778/resource_providers > * > > I received a status code of "*503*", and when I checked the logs of > placement and keystone, they say that the authentication failed. For the > same reason, nova-compute can't register itself as a resource provider. > > I'm sure that the authentication credentials for placement are set > properly, but I don't know what's the problem. I think what you're seeing is expected behavior, the root in the API doesn't require authentication [1]: "Does not perform verification of authentication tokens for root in the API." 
so you will get 200 at the root but can get 503 for all other paths if there's an auth issue. Have you looked at the placement-api logs to see if there's additional info there? You can also try enabling log level DEBUG by setting [DEFAULT]debug = True in placement.conf. HTH, -melanie [1] https://github.com/openstack/placement/blob/6f00ba5f685183539d0ebf62a4741f2f6930e051/placement/auth.py#L90-L94 From tburke at nvidia.com Sun Apr 18 02:25:24 2021 From: tburke at nvidia.com (Timothy Burke) Date: Sun, 18 Apr 2021 02:25:24 +0000 Subject: [Swift][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: , Message-ID: Sorry; been on vacation all week. Our record at http://ptg.openstack.org/etherpads.html should be accurate; Swift's etherpad is at https://etherpad.opendev.org/p/swift-ptg-xena. We'd be happy to talk for about interop testing -- is there a particular time that would work best for you? Any sort of prep work that might be good for us to think about ahead of time? Tim ________________________________ From: Kanevsky, Arkady Sent: Friday, April 16, 2021 2:57 PM To: Timothy Burke Cc: OpenStack Discuss Subject: RE: [Swift][Interop] request for 15-30 min on Xena PTG for Interop External email: Use caution opening links or attachments Repeating the request. Where is Swift Xena PTG etherpad? From: Kanevsky, Arkady Sent: Sunday, April 11, 2021 3:37 PM To: tburke at nvidia.com Cc: OpenStack Discuss Subject: [Swift][Interop] request for 15-30 min on Xena PTG for Interop Tim, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Swift tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Mon Apr 19 02:51:19 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Mon, 19 Apr 2021 02:51:19 +0000 Subject: [ops][nova][victoria] Migrate cross CPU? In-Reply-To: <5bed6419-6c85-b39f-1226-cc517fe911de@redhat.com> References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> <5bed6419-6c85-b39f-1226-cc517fe911de@redhat.com> Message-ID: <0670B960225633449A24709C291A525251114523@COM01.performair.local> Sean; Thank you, your suggestion led me to a problem with ssh. I was a little surprised by this, as live migration works. I reviewed: https://docs.openstack.org/nova/victoria/admin/ssh-configuration.html#cli-os-migrate-cfg-ssh and found that I had a problem with the authorized keys file. I took care of that, and it still didn't work. Here's what came out of the nova compute log: 2021-04-18 19:24:27.201 10808 ERROR oslo_messaging.rpc.server [req-225e7beb-f186-4235-abce-efcf4924d505 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Exception during message handling: nova.exception.ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command. 
Command: ssh -o BatchMode=yes 10.0.128.20 mkdir -p /var/lib/nova/instances/64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 Exit code: 255 Stdout: '' Stderr: 'Host key verification failed.\r\n' When I do su - nova on the origin server, as per the above, then ssh to the receiving server, I get this: Load key "/etc/nova/migration/identity": invalid format /etc/nova/migration/identity isn't mentioned anywhere in the documentation above. I tried: cat id_rsa > /etc/nova/migration/identity and cat id_rsa.pub >> /etc/nova/migration/authorized_keys Using the keys copied in the documentation above; still no go. Same 'Host key verification failed.\r\n' result. What am I missing? Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Sean Mooney [mailto:smooney at redhat.com] Sent: Friday, April 16, 2021 9:58 AM To: Dominic Hilsbos; openstack-discuss at lists.openstack.org Subject: Re: [ops][nova][victoria] Migrate cross CPU? hum ok the best way to debug this is to lis the server events and get the request id for the migration it may be req-ff109e53-74e0-40de-8ec7-29aff600b5f7 based on the logs you posted but you should see more info in the api, conductor and compute logs for that request id. given the state has not change i suspect it failed rather early. its possible that you are expirence an issue with the rabbitmq service and rpc calls are bing lost but i woudl not expect to see logs realted to this in the scudler while the vm is stilll in the SHUTOFF status. can you do "openstack server event list 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3" then get the most recent resize event's request id and see if there are any other logs. regard sean. (note i think it will be listed as a resize not a migrate since interanlly migreate is implmented as resize but to the same flavour). On 16/04/2021 17:04, DHilsbos at performair.com wrote: > Sean; > > Thank you very much for your response. I wasn't aware of the state change to resize_verify, that's useful. > > Unfortunately, at present, the state change is not occurring. 
> > Here's a series of commands, with output: > > #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 > +-------------------------------------+----------------------------------------------------------+ > | Field | Value | > +-------------------------------------+----------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | az-elcom-1 | > | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | > | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | > | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | > | OS-EXT-STS:power_state | Shutdown | > | OS-EXT-STS:task_state | None | > | OS-EXT-STS:vm_state | stopped | > | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | > | OS-SRV-USG:terminated_at | None | > | accessIPv4 | | > | accessIPv6 | | > | addresses | it-network=10.255.127.208, 10.0.160.35 | > | config_drive | | > | created | 2021-03-06T04:35:51Z | > | flavor | m4.large (8) | > | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | > | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | > | image | N/A (booted from volume) | > | key_name | None | > | name | Java Dev | > | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | > | properties | | > | security_groups | name='allow-ping' | > | | name='allow-ssh' | > | | name='default' | > | status | SHUTOFF | > | updated | 2021-04-16T15:52:07Z | > | user_id | 69b73ea8f55c46a99021e77ebf70b62a | > | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | > +-------------------------------------+----------------------------------------------------------+ > #openstack server migrate --host s700066.463.os.mcgown.enterprises --os-compute-api-version 2.56 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 > #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 > +-------------------------------------+----------------------------------------------------------+ > | Field | Value | > +-------------------------------------+----------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | az-elcom-1 | > | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | > | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | > | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | > | OS-EXT-STS:power_state | Shutdown | > | OS-EXT-STS:task_state | None | > | OS-EXT-STS:vm_state | stopped | > | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | > | OS-SRV-USG:terminated_at | None | > | accessIPv4 | | > | accessIPv6 | | > | addresses | it-network=10.255.127.208, 10.0.160.35 | > | config_drive | | > | created | 2021-03-06T04:35:51Z | > | flavor | m4.large (8) | > | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | > | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | > | image | N/A (booted from volume) | > | key_name | None | > | name | Java Dev | > | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | > | properties | | > | security_groups | name='allow-ping' | > | | name='allow-ssh' | > | | name='default' | > | status | SHUTOFF | > | updated | 2021-04-16T15:53:32Z | > | user_id | 69b73ea8f55c46a99021e77ebf70b62a | > | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | > +-------------------------------------+----------------------------------------------------------+ > #tail /var/log/nova/nova-conductor.log > #tail /var/log/nova/nova-scheduler.log > 2021-04-16 08:53:24.870 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 
d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter only checking host s700066.463.os.mcgown.enterprises and node s700066.463.os.mcgown.enterprises > 2021-04-16 08:53:24.871 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter ignoring hosts: > > Both Cinder volume storage, and ephemeral storage are being handled by Ceph. > > Thank you, > > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -----Original Message----- > From: Sean Mooney [mailto:smooney at redhat.com] > Sent: Friday, April 16, 2021 6:28 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][victoria] Migrate cross CPU? > > > > On 15/04/2021 19:05, DHilsbos at performair.com wrote: >> All; >> >> I seem to have generated another issue for myself... >> >> I built our Victoria cloud initially on Intel Atom servers. We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. >> >> I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. >> >> My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. >> >> I also can't find an option to pass to the openstack server start command which requests a specific host. >> >> Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? > you should be able to cold migrate them using the migrate command but > that should put the servers into resize_verify and then you need > to confirm the migration to complte it. we will not clean up the vm on > the source node until you do that last step. > >> Thank you, >> >> Dominic L. Hilsbos, MBA >> Director - Information Technology >> Perform Air International Inc. >> DHilsbos at PerformAir.com >> www.PerformAir.com >> >> >> > From i at liuyulong.me Mon Apr 19 02:59:07 2021 From: i at liuyulong.me (=?utf-8?B?TElVIFl1bG9uZw==?=) Date: Mon, 19 Apr 2021 10:59:07 +0800 Subject: [neutron] Cancel next two neutron L3 meetings Message-ID: Hi there, Next week we will have the virtual PTG online, so let's can cancel the L3 meeting next week. Due to the International Labour Day vacation (1st - 5th May 2021), I will not be online to hold the L3 meeting on 5th May 2021. So, cancle it as well. Regards, LIU Yulong From malikobaidadil at gmail.com Mon Apr 19 07:37:40 2021 From: malikobaidadil at gmail.com (Malik Obaid) Date: Mon, 19 Apr 2021 12:37:40 +0500 Subject: [wallaby][neutron][ovn] QoS bandwidth limit for Floating IP, Gateway IP and VM's Interface Message-ID: Hi, I am using Openstack Wallaby release on Ubuntu 20.04. I just want to confirm that is it possible now in Openstack Wallaby release to apply QoS policy to limit bandwidth for floating IP, Gateway IP and at VMs Interface as previously it was not possible in Victoria release. I would really appreciate any input in this regard. Thank you. Regards, Malik Obaid -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elfosardo at gmail.com Mon Apr 19 08:38:29 2021 From: elfosardo at gmail.com (Riccardo Pittau) Date: Mon, 19 Apr 2021 10:38:29 +0200 Subject: [ironic] Ironic Whiteboard v2 call for reviews In-Reply-To: References: Message-ID: That looks great, thanks! I would probably just add a link to https://ironicbaremetal.org/ and to the meeting page https://wiki.openstack.org/wiki/Meetings/Ironic at the very top in the first section. Also a formatting adjustment, change the size/style of the characters to have a better visual separation between topic name and content in the status report section. Cheers, Riccardo On Fri, Apr 16, 2021 at 6:31 PM Jay Faulkner wrote: > Hi all, > > Iury and I spent some time this morning updating the Ironic whiteboard > etherpad to include more immediately useful information to contributors. > > We placed this updated whiteboard at > https://etherpad.opendev.org/p/IronicWhiteBoardv2 -- our approach was to > prune any outdated/broken links or information, and focus on making the > first part of the whiteboard an easy one-click place for folks to see easy > ways to contribute. All the rest of the information was carried over and > reformatted. > > Once there is consensus from the team about this being a positive change, > we should either replace the existing IronicWhiteBoard with the contents of > the v2 page, or just update links to point to the new one instead. > > What do you all think? > > Thanks, > Jay Faulkner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Apr 19 09:55:26 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 19 Apr 2021 11:55:26 +0200 Subject: [largescale-sig] PTG meeting: April 21, 15utc Message-ID: <203c7f30-b8dd-8c83-8042-f8bc56075ddb@openstack.org> Hi everyone, Our next Large Scale SIG meeting will be this Wednesday at 15utc in the Ocata meeting room from the virtual PTG[1]. You can doublecheck how it translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210421T15 Please register[2] to the PTG if you haven't already, it will give you access to communications explaining how to join. We'll do a general presentation of the Large Scale SIG activities, show off our "Large Scale Journey" documentation, and discuss the topics of our next video meeting(s). Feel free to add other topics to our PTG meeting agenda at: https://etherpad.opendev.org/p/apr2021-ptg-large-scale See you there! [1] http://ptg.openstack.org/ptg.html [2] https://www.openstack.org/ptg/ Regards, -- Thierry Carrez From ralonsoh at redhat.com Mon Apr 19 10:31:10 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 19 Apr 2021 12:31:10 +0200 Subject: [wallaby][neutron][ovn] QoS bandwidth limit for Floating IP, Gateway IP and VM's Interface In-Reply-To: References: Message-ID: Hello Malik: OVN QoS extension now supports max BW and DSCP QoS rules for VM ports and floating IPs [0]. GW IP is still under development [1]. Regards. [0] https://github.com/openstack/neutron/blob/stable/wallaby/doc/source/admin/config-qos.rst [1]https://review.opendev.org/c/openstack/neutron/+/749012 On Mon, Apr 19, 2021 at 9:42 AM Malik Obaid wrote: > Hi, > > I am using Openstack Wallaby release on Ubuntu 20.04. > > I just want to confirm that is it possible now in Openstack Wallaby > release to apply QoS policy to limit bandwidth for floating IP, Gateway IP > and at VMs Interface as previously it was not possible in Victoria release. 
> I would really appreciate any input in this regard. > > Thank you. > > Regards, > Malik Obaid > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Mon Apr 19 10:33:15 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Mon, 19 Apr 2021 12:33:15 +0200 Subject: [nova] Nominate sean-k-mooney for nova-specs-core In-Reply-To: References: Message-ID: Hi, I only see positive responses and we waited long enough. So Sean, welcome to the nova-specs core team! Cheers, gibi On Tue, Apr 13, 2021 at 09:09, Sylvain Bauza wrote: > > > On Tue, Mar 30, 2021 at 6:51 PM Stephen Finucane > wrote: >> Hey, >> >> Sean has been working on nova for what seems like yonks now. Each >> cycle, they >> spend a significant amount of time reviewing proposed specs and >> contributing to >> discussions at the PTG. This is important work and their >> contributions provide >> everyone with a deep pool of knowledge on all things networking and >> hardware >> upon which to draw. I think the nova project would benefit from >> their addition >> to the specs core reviewer team and I therefore propose we add Sean >> to nova- >> specs-core. >> >> Assuming there are no objections, I'll work with gibi to add Sean >> to nova-specs- >> core next week. >> > > +1, sorry for the late approval, forgot to reply. > >> Cheers, >> Stephen >> >> >> From smooney at redhat.com Mon Apr 19 10:38:20 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 19 Apr 2021 11:38:20 +0100 Subject: [ops][nova][victoria] Migrate cross CPU? In-Reply-To: <0670B960225633449A24709C291A525251114523@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> <5bed6419-6c85-b39f-1226-cc517fe911de@redhat.com> <0670B960225633449A24709C291A525251114523@COM01.performair.local> Message-ID: <629d1b9e-fce7-03c1-5b4b-81b07b14eebb@redhat.com> On 19/04/2021 03:51, DHilsbos at performair.com wrote: > Sean; > > Thank you, your suggestion led me to a problem with ssh. I was a little surprised by this, as live migration works. thats a pretty common issue. live migration does not use ssh or rsync to to copy the vm disk data that is done by qemu. for cold migration the data is copied by nova using one of 2 drivers either ssh/scp or rsync. > > I reviewed: > https://docs.openstack.org/nova/victoria/admin/ssh-configuration.html#cli-os-migrate-cfg-ssh > and found that I had a problem with the authorized keys file. I took care of that, and it still didn't work. > > Here's what came out of the nova compute log: > 2021-04-18 19:24:27.201 10808 ERROR oslo_messaging.rpc.server [req-225e7beb-f186-4235-abce-efcf4924d505 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Exception during message handling: nova.exception.ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command. > Command: ssh -o BatchMode=yes 10.0.128.20 mkdir -p /var/lib/nova/instances/64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 > Exit code: 255 > Stdout: '' > Stderr: 'Host key verification failed.\r\n' > > When I do su - nova on the origin server, as per the above, then ssh to the receiving server, I get this: > Load key "/etc/nova/migration/identity": invalid format > > /etc/nova/migration/identity isn't mentioned anywhere in the documentation above. 
> > I tried: > cat id_rsa > /etc/nova/migration/identity > and > cat id_rsa.pub >> /etc/nova/migration/authorized_keys > > Using the keys copied in the documentation above; still no go. Same 'Host key verification failed.\r\n' result. > > What am I missing? you will need to su to the nova user and make sure the key has the correct permissions (typically 600) and is owned by nova. then you need to do the key exchange and ensure it is added to the known hosts. i normally do that by manually sshing as the nova user to the destination hosts. obviously if it is more than a couple of hosts you will want to use ansible or something to automate the process. there are basically 3 things you need to do:
1.) copy a key without a password to the nova user on all hosts and set its permissions to 600
2.) add the public key to authorized_keys on all hosts
3.) pre-populate the known_hosts on all hosts for all other hosts (you can use ssh-keyscan for this)
if you have more than about 20 hosts, do this on one host and copy the result to all the others, because doing it pairwise is quadratic in the number of hosts and takes a while... (a rough shell sketch of these steps is included further down in this mail.) > > Thank you, > > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > -----Original Message----- > From: Sean Mooney [mailto:smooney at redhat.com] > Sent: Friday, April 16, 2021 9:58 AM > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][victoria] Migrate cross CPU? > > hum ok the best way to debug this is to lis the server events and get > the request id for the migration > it may be req-ff109e53-74e0-40de-8ec7-29aff600b5f7 based on the logs you > posted but you should see more info > in the api, conductor and compute logs for that request id. > > given the state has not change i suspect it failed rather early. > > its possible that you are expirence an issue with the rabbitmq service > and rpc calls are bing lost but > i woudl not expect to see logs realted to this in the scudler while the > vm is stilll in the SHUTOFF status. > > can you do "openstack server event list > 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3" then get the most recent > resize event's request id and see if there are any other logs. > > regard > sean. > > (note i think it will be listed as a resize not a migrate since > interanlly migreate is implmented as resize but to the same flavour). > > On 16/04/2021 17:04, DHilsbos at performair.com wrote: >> Sean; >> >> Thank you very much for your response. I wasn't aware of the state change to resize_verify, that's useful. >> >> Unfortunately, at present, the state change is not occurring.
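(Rough shell sketch of the three key-exchange steps referenced above -- the nova user's home directory of /var/lib/nova and the compute1/compute2/compute3 host names are assumptions, so adjust them for your own deployment and drive it from ansible or similar once the host count grows:)

# as the nova user on one node, create a passwordless key pair
# (ssh-keygen creates the private key with mode 600 already)
ssh-keygen -t rsa -N '' -f /var/lib/nova/.ssh/id_rsa
# copy the same id_rsa / id_rsa.pub pair to the nova user on every compute node (scp, ansible, etc.)
# then, on every node, authorize the key:
cat /var/lib/nova/.ssh/id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys
# and pre-populate known_hosts with the host keys of all the other compute nodes:
ssh-keyscan compute1 compute2 compute3 >> /var/lib/nova/.ssh/known_hosts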
>> >> Here's a series of commands, with output: >> >> #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >> +-------------------------------------+----------------------------------------------------------+ >> | Field | Value | >> +-------------------------------------+----------------------------------------------------------+ >> | OS-DCF:diskConfig | MANUAL | >> | OS-EXT-AZ:availability_zone | az-elcom-1 | >> | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | >> | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | >> | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | >> | OS-EXT-STS:power_state | Shutdown | >> | OS-EXT-STS:task_state | None | >> | OS-EXT-STS:vm_state | stopped | >> | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | >> | OS-SRV-USG:terminated_at | None | >> | accessIPv4 | | >> | accessIPv6 | | >> | addresses | it-network=10.255.127.208, 10.0.160.35 | >> | config_drive | | >> | created | 2021-03-06T04:35:51Z | >> | flavor | m4.large (8) | >> | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | >> | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | >> | image | N/A (booted from volume) | >> | key_name | None | >> | name | Java Dev | >> | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | >> | properties | | >> | security_groups | name='allow-ping' | >> | | name='allow-ssh' | >> | | name='default' | >> | status | SHUTOFF | >> | updated | 2021-04-16T15:52:07Z | >> | user_id | 69b73ea8f55c46a99021e77ebf70b62a | >> | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | >> +-------------------------------------+----------------------------------------------------------+ >> #openstack server migrate --host s700066.463.os.mcgown.enterprises --os-compute-api-version 2.56 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >> #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >> +-------------------------------------+----------------------------------------------------------+ >> | Field | Value | >> +-------------------------------------+----------------------------------------------------------+ >> | OS-DCF:diskConfig | MANUAL | >> | OS-EXT-AZ:availability_zone | az-elcom-1 | >> | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | >> | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | >> | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | >> | OS-EXT-STS:power_state | Shutdown | >> | OS-EXT-STS:task_state | None | >> | OS-EXT-STS:vm_state | stopped | >> | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | >> | OS-SRV-USG:terminated_at | None | >> | accessIPv4 | | >> | accessIPv6 | | >> | addresses | it-network=10.255.127.208, 10.0.160.35 | >> | config_drive | | >> | created | 2021-03-06T04:35:51Z | >> | flavor | m4.large (8) | >> | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | >> | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | >> | image | N/A (booted from volume) | >> | key_name | None | >> | name | Java Dev | >> | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | >> | properties | | >> | security_groups | name='allow-ping' | >> | | name='allow-ssh' | >> | | name='default' | >> | status | SHUTOFF | >> | updated | 2021-04-16T15:53:32Z | >> | user_id | 69b73ea8f55c46a99021e77ebf70b62a | >> | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | >> +-------------------------------------+----------------------------------------------------------+ >> #tail /var/log/nova/nova-conductor.log >> #tail /var/log/nova/nova-scheduler.log >> 2021-04-16 08:53:24.870 3773 
INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter only checking host s700066.463.os.mcgown.enterprises and node s700066.463.os.mcgown.enterprises >> 2021-04-16 08:53:24.871 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter ignoring hosts: >> >> Both Cinder volume storage, and ephemeral storage are being handled by Ceph. >> >> Thank you, >> >> Dominic L. Hilsbos, MBA >> Director – Information Technology >> Perform Air International Inc. >> DHilsbos at PerformAir.com >> www.PerformAir.com >> >> >> -----Original Message----- >> From: Sean Mooney [mailto:smooney at redhat.com] >> Sent: Friday, April 16, 2021 6:28 AM >> To: openstack-discuss at lists.openstack.org >> Subject: Re: [ops][nova][victoria] Migrate cross CPU? >> >> >> >> On 15/04/2021 19:05, DHilsbos at performair.com wrote: >>> All; >>> >>> I seem to have generated another issue for myself... >>> >>> I built our Victoria cloud initially on Intel Atom servers. We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. >>> >>> I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. >>> >>> My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. >>> >>> I also can't find an option to pass to the openstack server start command which requests a specific host. >>> >>> Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? >> you should be able to cold migrate them using the migrate command but >> that should put the servers into resize_verify and then you need >> to confirm the migration to complte it. we will not clean up the vm on >> the source node until you do that last step. >> >>> Thank you, >>> >>> Dominic L. Hilsbos, MBA >>> Director - Information Technology >>> Perform Air International Inc. >>> DHilsbos at PerformAir.com >>> www.PerformAir.com >>> >>> >>> From openinfradn at gmail.com Mon Apr 19 11:01:24 2021 From: openinfradn at gmail.com (open infra) Date: Mon, 19 Apr 2021 16:31:24 +0530 Subject: Create OpenStack VMs in few seconds In-Reply-To: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> References: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> Message-ID: On Thu, Mar 25, 2021 at 8:38 PM Jeremy Stanley wrote: > On 2021-03-25 14:47:09 +0000 (+0000), Sean Mooney wrote: > > nova does not support create a base vm and then doing a local live > > migration or restore for memory snapshots to create another vm. > > > > this approch likely has several security implciations that would > > not be accpeatable in a multi tenant enviornment. > > > > we have disucssed this type of vm creation in the past and > > determined that it is not a valid implematnion of spawn. a virt > > driver that precreate vms or copys an existing instance can be > > faster but that virt driver is not considered a compliant > > implementation. 
> > > > so in short there is no way to achive this today in a compliant > > openstack powered cloud. > [...] > > The next best thing is basically what Nodepool[*] does: start new > virtual machines ahead of time and keep them available in the > tenant. This does of course mean you're occupying additional quota > for whatever base "ready" capacity you've set for your various > images/flavors, and that you need to be able to predict how many of > what kinds of virtual machines you're going to need in advance. > > [*] https://zuul-ci.org/docs/nodepool/ Is it recommended to use nodepool in a production environment? > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Mon Apr 19 11:02:42 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 19 Apr 2021 08:02:42 -0300 Subject: [cloudKitty][vPTG] PTG Meeting Message-ID: Hello guys, Sorry for the late post, but we (CloudKitty community) have today the PTG meeting for Xena. The Etherpad can be found at: https://etherpad.opendev.org/p/apr2021-ptg-cloudkitty The zoom meeting is: https://zoom.us/j/98954442045?pwd=M01wcDc3clVDUk1rc0l3cUtCVGVJQT09 The meeting will happen today, Monday April 19, from 14:00 to 17:00 UTC in the Essex room. See you guys there :) -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Mon Apr 19 12:19:59 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Mon, 19 Apr 2021 12:19:59 +0000 Subject: [kolla] VM build fails after Train-Ussuri upgrade Message-ID: <83407a4f27244ac4bd751c1ee5ffb6c8@ncwmexgp009.CORP.CHARTERCOM.com> I upgraded my Train test cluster to Ussuri following these instructions: OpenStack Docs: Operating Kolla The upgrade completed successfully with no failures, and the existing VMs are fine, but new VM build fails with rados.Rados.connect\nrados.PermissionDeniedError: Ubuntu Pastebin I'm running external ceph so I looked at this document: OpenStack Docs: External Ceph It says that I need the following in /etc/kolla/config/glance/ceph.conf: auth_cluster_required = cephx auth_service_required = cephx auth_client_required = cephx I didn't have that, so I added it and then redeployed, but still can't build VMs. I tried adding the same to all copies of ceph.conf and redeployed again, but that didn't help. Does anything else need to change in my ceph config when upgrading from Train to Ussuri? I see some cryptic talk about ceph in the release notes but it's not obvious what I'm being asked to change: OpenStack Docs: Ussuri Series Release Notes I read the bug that it refers to: Bug #1904062 "external ceph cinder volume config breaks volumes ..." : Bugs : kolla-ansible (launchpad.net) But I already have "backend_host=rbd:volumes" so I don't think I'm hitting that. Also I read these sections but I don't see anything obvious here that needs to be changed: * For cinder (cinder-volume and cinder-backup), glance-api and manila keyrings behavior has changed and Kolla Ansible deployment will not copy those keys using wildcards (ceph.*), instead will use newly introduced variables. Your environment may render unusable after an upgrade if your keys in /etc/kolla/config do not match default values for introduced variables. * The default behavior for generating the cinder.conf template has changed. An rbd-1 section will be generated when external Ceph functionality is used, i.e. 
cinder_backend_ceph is set to true. Previously it was only included when Kolla Ansible internal Ceph deployment mechanism was used. * The rbd section of nova.conf for nova-compute is now generated when nova_backend is set to "rbd". Previously it was only generated when both enable_ceph was "yes" and nova_backend was set to "rbd". My ceph keys have the default name and are in the default locations. I have cinder_backend_ceph: "yes". I don't have a nova_backend setting but I have nova_backend_ceph: "yes" I added nova_backend: "rbd" and redeployed and now I get a different error: rados.Rados.connect\nrados.ObjectNotFound Ubuntu Pastebin I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kazumasa.nomura.rx at hitachi.com Mon Apr 19 12:54:07 2021 From: kazumasa.nomura.rx at hitachi.com (=?utf-8?B?6YeO5p2R5ZKM5q2jIC8gTk9NVVJB77yMS0FaVU1BU0E=?=) Date: Mon, 19 Apr 2021 12:54:07 +0000 Subject: [cinder] How to post multiple patches. In-Reply-To: References: <42af7b9a-73ce-4780-b787-c36901f5cc1a@gmail.com> <98285557-7db7-592a-c160-b6dad4971a1e@gmail.com> Message-ID: Hi, Rosmaita and Bryant, Alan, Thanks for your advices. I will do it that way. Kazumasa Nomura On Mon, Apr 12, 2021 at 6:47 PM Jay Bryant > wrote: On 4/12/2021 11:18 AM, Brian Rosmaita wrote: > On 4/12/21 12:52 AM, 野村和正 / NOMURA,KAZUMASA wrote: >> Hi everyone, >> >> Hitachi has developed the out-of-tree driver as Cinder driver. But >> wewant to deprecate the out-of-tree driver and support only the >> in-treedriver. >> >> We need to submit about ten more patches(*1) for full features which >> theout-of-tree driver has such as Consistency Group and Volume >> Replication. >> >> In that case, we have two options: >> >> 1. Submit two or three patches at once. In other words, submit two >> orthree patches to Xena, then submit another two or three patches >> afterprevious patches were merged, and so on. This may give reviewers >> thefeeling of endless. I just want to add that you are not limited to submitting a single batch of patches in a cycle. If you can get the first batch accepted in Xena, you are free to submit other batches in Xena. Just continue to bear in mind the date for freezing driver patches. The bottom line is the sooner you submit patches and work on resolving reviewer feedback, the sooner you can propose additional patches. Alan >> >> 2. Submit all patches at once to Xena. This will give reviewers >> theinformation how many patches remains from the beginning, but many >> pathesmay bother them. >> >> Does anyone have an opinion as to which option is better? 
> > My opinion is that option #1 is better, because as the initial patches > are reviewed, issues will come up in review that you will be able to > apply proactively to later patches on your own without reviewers > having to bring them up, which will result in a better experience for > all concerned. > > Also, we can have an idea of how many patches to expect (without your > filing them all at once) if you file blueprints in Launchpad for each > feature. Please name them 'hitachi-consistency-group-support', > 'hitachi-volume-replication', etc., so it's easy to see what driver > they're for. The blueprint doesn't need much detail; it's primarily > for tracking purposes. You can see some examples here: > https://blueprints.launchpad.net/cinder/wallaby > > I concur with Brian. I think doing a few at a time will be less likely to overwhelm the review team and it will help to prevent repeated comments in subsequent patches if you are able to proactively fix the subsequent patches before they are submitted. Thanks for seeking input on this! Jay > cheers, > brian > >> >> Thanks, >> >> Kazumasa Nomura >> >> E-mail: >> kazumasa.nomura.rx at hitachi.com> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Apr 19 13:00:12 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 19 Apr 2021 15:00:12 +0200 Subject: [release] PTG meeting Message-ID: Hello, The release team PTG will be on Jitsi https://meetpad.opendev.org/relmgt-xena-ptg See you there at 2pm UTC. -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon Apr 19 13:01:28 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 19 Apr 2021 06:01:28 -0700 Subject: [Scientific] Scientific SIG meetings during the OpenStack PTG next week In-Reply-To: References: Message-ID: Greetings, Out of curiosity is there an etherpad with discussion topics for the SIG? I ask because the etherpad list[0] links to an unused etherpad[1]. -Julia [0]: http://ptg.openstack.org/etherpads.html [1]: https://etherpad.opendev.org/p/apr2021-ptg-scientific-sig On Fri, Apr 16, 2021 at 3:15 PM Martial Michel wrote: > > The Scientific SIG will have two meetings next week during the PTG. 
> Details on those meetings are as follows: > > Session 1 - Cactus room - April 21st - 14:00-15:00 UTC > Main session, topic discussion (note we only have one hour) > > Session 2 - Cactus room - April 21st - 21:00-22:00 UTC > Lightning Talks: Bring a LT on something you've been doing or would like to present > (10 minutes per talk, including questions. Note we only have one hour, so strict timekeeping will have to be enforced ) > > As a reminder the Scientific SIG has a Slack. > Please contact me directly (martialmichel at datamachines.io) if you want to join our slack or our meeting and need joining information. > > Thank you and looking forward to seeing a few stackers next week -- Martial > From pierre at stackhpc.com Mon Apr 19 13:06:16 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 19 Apr 2021 15:06:16 +0200 Subject: [Scientific] Scientific SIG meetings during the OpenStack PTG next week In-Reply-To: References: Message-ID: The Scientific SIG Etherpad is at https://etherpad.opendev.org/p/202104PTG_ScientificSIG_LightingTalks On Mon, 19 Apr 2021 at 15:02, Julia Kreger wrote: > > Greetings, > > Out of curiosity is there an etherpad with discussion topics for the > SIG? I ask because the etherpad list[0] links to an unused > etherpad[1]. > > -Julia > > [0]: http://ptg.openstack.org/etherpads.html > [1]: https://etherpad.opendev.org/p/apr2021-ptg-scientific-sig > > On Fri, Apr 16, 2021 at 3:15 PM Martial Michel > wrote: > > > > The Scientific SIG will have two meetings next week during the PTG. > > Details on those meetings are as follows: > > > > Session 1 - Cactus room - April 21st - 14:00-15:00 UTC > > Main session, topic discussion (note we only have one hour) > > > > Session 2 - Cactus room - April 21st - 21:00-22:00 UTC > > Lightning Talks: Bring a LT on something you've been doing or would like to present > > (10 minutes per talk, including questions. Note we only have one hour, so strict timekeeping will have to be enforced ) > > > > As a reminder the Scientific SIG has a Slack. > > Please contact me directly (martialmichel at datamachines.io) if you want to join our slack or our meeting and need joining information. > > > > Thank you and looking forward to seeing a few stackers next week -- Martial > > > From pierre at stackhpc.com Mon Apr 19 13:06:16 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 19 Apr 2021 15:06:16 +0200 Subject: [Scientific] Scientific SIG meetings during the OpenStack PTG next week In-Reply-To: References: Message-ID: The Scientific SIG Etherpad is at https://etherpad.opendev.org/p/202104PTG_ScientificSIG_LightingTalks On Mon, 19 Apr 2021 at 15:02, Julia Kreger wrote: > > Greetings, > > Out of curiosity is there an etherpad with discussion topics for the > SIG? I ask because the etherpad list[0] links to an unused > etherpad[1]. > > -Julia > > [0]: http://ptg.openstack.org/etherpads.html > [1]: https://etherpad.opendev.org/p/apr2021-ptg-scientific-sig > > On Fri, Apr 16, 2021 at 3:15 PM Martial Michel > wrote: > > > > The Scientific SIG will have two meetings next week during the PTG. > > Details on those meetings are as follows: > > > > Session 1 - Cactus room - April 21st - 14:00-15:00 UTC > > Main session, topic discussion (note we only have one hour) > > > > Session 2 - Cactus room - April 21st - 21:00-22:00 UTC > > Lightning Talks: Bring a LT on something you've been doing or would like to present > > (10 minutes per talk, including questions. 
Note we only have one hour, so strict timekeeping will have to be enforced ) > > > > As a reminder the Scientific SIG has a Slack. > > Please contact me directly (martialmichel at datamachines.io) if you want to join our slack or our meeting and need joining information. > > > > Thank you and looking forward to seeing a few stackers next week -- Martial > > > From fungi at yuggoth.org Mon Apr 19 13:26:20 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Apr 2021 13:26:20 +0000 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> Message-ID: <20210419132620.noqzlfui7ycstkvc@yuggoth.org> On 2021-04-19 16:31:24 +0530 (+0530), open infra wrote: > On Thu, Mar 25, 2021 at 8:38 PM Jeremy Stanley wrote: [...] > > The next best thing is basically what Nodepool[*] does: start new > > virtual machines ahead of time and keep them available in the > > tenant. This does of course mean you're occupying additional quota > > for whatever base "ready" capacity you've set for your various > > images/flavors, and that you need to be able to predict how many of > > what kinds of virtual machines you're going to need in advance. > > > > [*] https://zuul-ci.org/docs/nodepool/ > > Is it recommended to use nodepool in a production environment? I can't begin to guess what you mean by "in a production environment," but it forms the lifecycle management basis for our production CI/CD system (as it does for many other Zuul installations). In the case of the deployment I help run, it's continuously connected to over a dozen production clouds, both public and private. But anyway, I didn't say "use Nodepool." I suggested you look at "what Nodepool does" as a model for starting server instances in advance within the tenants/projects which regularly require instant access to new virtual machines. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From matus.brandys at vnet.eu Mon Apr 19 13:34:05 2021 From: matus.brandys at vnet.eu (=?UTF-8?B?TWF0w7rFoSBCcmFuZHlz?=) Date: Mon, 19 Apr 2021 15:34:05 +0200 Subject: [openstack-community] [neutron] Adding support for elliptic curve in DH groups for key agreement protocol Message-ID: <954d82ad-9b49-06b9-3d6e-28a48acc2f48@vnet.eu> Hi everyone, I was looking at neutron VPN implementation and found out that the current neutron implementation supports only creating VPN using DH up to group 15. For example, strongswan drivers support except regular group also Elliptic Curve Groups also NIST and Brainpool Elliptic Curve Groups. https://wiki.strongswan.org/projects/strongswan/wiki/IKEv2CipherSuites#Diffie-Hellman-Groups. I would like to know, if there is some limitation using Elliptic Curve groups for VPN or is this only an implementation issue? Thanks, Matus From smooney at redhat.com Mon Apr 19 13:42:33 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 19 Apr 2021 14:42:33 +0100 Subject: [nova] Nominate sean-k-mooney for nova-specs-core In-Reply-To: References: Message-ID: On 19/04/2021 11:33, Balazs Gibizer wrote: > Hi, > > I only see positive responses and we waited long enough. So Sean, > welcome to the nova-specs core team! thank you all for your vote of confidence. while i will try to be conservative in my approach of reviewing and approving specs but i can confirm that the gerrit configuration changes have taken effect. 
From matus.brandys at vnet.eu  Mon Apr 19 13:34:05 2021
From: matus.brandys at vnet.eu (=?UTF-8?B?TWF0w7rFoSBCcmFuZHlz?=)
Date: Mon, 19 Apr 2021 15:34:05 +0200
Subject: [openstack-community] [neutron] Adding support for elliptic curve in DH groups for key agreement protocol
Message-ID: <954d82ad-9b49-06b9-3d6e-28a48acc2f48@vnet.eu>

Hi everyone,

I was looking at the neutron VPN implementation and found out that the
current implementation only supports creating VPNs with DH groups up to
group 15. For comparison, the strongswan driver itself supports, besides
the regular MODP groups, the NIST and Brainpool Elliptic Curve groups:
https://wiki.strongswan.org/projects/strongswan/wiki/IKEv2CipherSuites#Diffie-Hellman-Groups.

I would like to know if there is some limitation to using Elliptic Curve
groups for VPN, or is this only an implementation issue?

Thanks,
Matus
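For reference, a hedged sketch of how policies are created today with one of the DH groups the VPNaaS API currently accepts (the policy names are placeholders; group14 is a classic MODP group, not an elliptic curve group):

    openstack vpn ike policy create ikepolicy1 --pfs group14
    openstack vpn ipsec policy create ipsecpolicy1 --pfs group14

Supporting elliptic curve groups would presumably mean extending the accepted pfs choices in the API validation and passing them through to the strongswan driver configuration.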
From smooney at redhat.com  Mon Apr 19 13:42:33 2021
From: smooney at redhat.com (Sean Mooney)
Date: Mon, 19 Apr 2021 14:42:33 +0100
Subject: [nova] Nominate sean-k-mooney for nova-specs-core
In-Reply-To:
References:
Message-ID:

On 19/04/2021 11:33, Balazs Gibizer wrote:
> Hi,
>
> I only see positive responses and we waited long enough. So Sean,
> welcome to the nova-specs core team!

thank you all for your vote of confidence. while i will try to be
conservative in my approach of reviewing and approving specs, i can
confirm that the gerrit configuration changes have taken effect.
https://review.opendev.org/c/openstack/nova-specs/+/783783 +2w for api
consistency between services. its nice to be able to say that :)

> Cheers,
> gibi
>
> On Tue, Apr 13, 2021 at 09:09, Sylvain Bauza wrote:
>>
>> On Tue, Mar 30, 2021 at 6:51 PM Stephen Finucane wrote:
>>> Hey,
>>>
>>> Sean has been working on nova for what seems like yonks now. Each cycle, they
>>> spend a significant amount of time reviewing proposed specs and contributing to
>>> discussions at the PTG. This is important work and their contributions provide
>>> everyone with a deep pool of knowledge on all things networking and hardware
>>> upon which to draw. I think the nova project would benefit from their addition
>>> to the specs core reviewer team and I therefore propose we add Sean to
>>> nova-specs-core.
>>>
>>> Assuming there are no objections, I'll work with gibi to add Sean to
>>> nova-specs-core next week.
>>>
>> +1, sorry for the late approval, forgot to reply.
>>
>>> Cheers,
>>> Stephen

From syedammad83 at gmail.com  Mon Apr 19 15:24:27 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Mon, 19 Apr 2021 20:24:27 +0500
Subject: [victoria][neutron][ovn] Iperf3 Retransmits
Message-ID:

Hi,

I am using openstack victoria and neutron with ovn latest releases. I was
trying to assess network performance between the VMs first.

I have created two ubuntu 20.04 VMs, installed iperf3 in them and executed
a bidirectional test between the VMs. The instances are on the same network
and the network type is geneve. Both VMs are on the same host. I have also
tested by keeping the VMs on different compute hosts, but the results are
the same.

Here are the results.

root at iperf3-1:~# iperf3 -c 192.168.100.50 --bidir
Connecting to host 192.168.100.50, port 5201
[  5] local 192.168.100.120 port 53156 connected to 192.168.100.50 port 5201
[  7] local 192.168.100.120 port 53158 connected to 192.168.100.50 port 5201
[ ID][Role] Interval           Transfer     Bitrate         Retr  Cwnd
[  5][TX-C]   0.00-1.00   sec   311 MBytes  2.61 Gbits/sec   407  1.41 MBytes
[  7][RX-C]   0.00-1.00   sec   627 MBytes  5.26 Gbits/sec
[  5][TX-C]   1.00-2.00   sec  1.07 GBytes  9.18 Gbits/sec  2638  1.71 MBytes
[  7][RX-C]   1.00-2.00   sec   778 MBytes  6.53 Gbits/sec
[  5][TX-C]   2.00-3.00   sec  1.14 GBytes  9.75 Gbits/sec   603  1.98 MBytes
[  7][RX-C]   2.00-3.00   sec   922 MBytes  7.73 Gbits/sec
[  5][TX-C]   3.00-4.00   sec  1.00 GBytes  8.62 Gbits/sec   594  2.16 MBytes
[  7][RX-C]   3.00-4.00   sec   935 MBytes  7.84 Gbits/sec
[  5][TX-C]   4.00-5.00   sec   988 MBytes  8.29 Gbits/sec  2431  1.42 MBytes
[  7][RX-C]   4.00-5.00   sec   932 MBytes  7.82 Gbits/sec
[  5][TX-C]   5.00-6.00   sec   879 MBytes  7.37 Gbits/sec   599   991 KBytes
[  7][RX-C]   5.00-6.00   sec   996 MBytes  8.36 Gbits/sec
[  5][TX-C]   6.00-7.00   sec   721 MBytes  6.05 Gbits/sec   389  1.10 MBytes
[  7][RX-C]   6.00-7.00   sec   963 MBytes  8.08 Gbits/sec
[  5][TX-C]   7.00-8.00   sec   901 MBytes  7.56 Gbits/sec    33  1.54 MBytes
[  7][RX-C]   7.00-8.00   sec  1.02 GBytes  8.79 Gbits/sec
[  5][TX-C]   8.00-9.00   sec   889 MBytes  7.44 Gbits/sec   376  1.33 MBytes
[  7][RX-C]   8.00-9.00   sec  1.14 GBytes  9.78 Gbits/sec
[  5][TX-C]   9.00-10.00  sec   671 MBytes  5.64 Gbits/sec  1017   984 KBytes
[  7][RX-C]   9.00-10.00  sec   975 MBytes  8.19 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID][Role] Interval           Transfer     Bitrate         Retr
[  5][TX-C]   0.00-10.00  sec  8.44 GBytes  7.25 Gbits/sec  9087  sender
[  5][TX-C]   0.00-10.00  sec  8.44 GBytes  7.25 Gbits/sec        receiver
[  7][RX-C]   0.00-10.00  sec  9.13 GBytes  7.84 Gbits/sec  7914  sender
[  7][RX-C]   0.00-10.00  sec  9.12 GBytes  7.84 Gbits/sec        receiver

Here is the xmldump of the instance interface.
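A hedged way to check whether retransmits like these come from MTU or fragmentation issues on the Geneve overlay (the interface name, peer address and packet sizes below are placeholders, not values from this deployment):

    # inside the VM: confirm the effective MTU of the guest interface
    ip link show ens3

    # test whether a large packet crosses the overlay without fragmentation
    # (1372 bytes of payload + 28 bytes of ICMP/IP headers = 1400 on the wire)
    ping -M do -s 1372 192.168.100.50

    # re-run iperf3 with a smaller MSS to see whether the retransmit count drops
    iperf3 -c 192.168.100.50 -M 1300

    # on the compute node, offload settings on the VM's tap device are also worth a look
    ethtool -k tapXXXXXXXX | grep -E 'tcp-segmentation-offload|generic-receive-offload'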
My overlay network IP interface mtu is 1500. Please advise if there is any network performance tuning required ? -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From smartipgroup at gmail.com Sun Apr 18 20:36:34 2021 From: smartipgroup at gmail.com (Smart Ip) Date: Sun, 18 Apr 2021 20:36:34 +0000 Subject: use [Freezer] Message-ID: Good morning, Please I need your support or any kind of update resources for freezer deployment. Presently I'm deploying OpenStack Stein on CentOS and I would like to install freezer also. Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From humaira.abdul.salam at desy.de Mon Apr 19 12:32:18 2021 From: humaira.abdul.salam at desy.de (Abdul Salam, Humaira) Date: Mon, 19 Apr 2021 14:32:18 +0200 (CEST) Subject: [Murano] Please help with murano image creation In-Reply-To: <373014192.258448558.1613653306547.JavaMail.zimbra@desy.de> References: <373014192.258448558.1613653306547.JavaMail.zimbra@desy.de> Message-ID: <1786777068.44353099.1618835538984.JavaMail.zimbra@desy.de> Hello, I tried to create Murano image for an openstack Victoria deployment following the document available at, https://docs.openstack.org/murano/victoria/reference/appendix/articles/image_builders/linux.html But unfortunately was not able to successfully create an image. At first the following error occured: "murano-agent requires Python '>=3.6' but the running Python is 2.7.17" Then after changing the release to focal, receive another error: "Unable to locate package python-pip" We(me and my colleague) are kind of block for the deployment of Murano application service, is anybody so kind to help us with this problem? I am attaching the commands ran and terminal output below for reference. Looking forward for your quick response, many thanks in advance. Humaira Abdul Salam, IT-Systems DESY, Hamburg ---------------------------------------------------------------------------------------------- root at kolla-2:~# cat /etc/issue Ubuntu 20.04.2 LTS \n \l root at kolla-2:~# export GITDIR=~/git root at kolla-2:~# mkdir -p $GITDIR root at kolla-2:~# cd $GITDIR root at kolla-2:~/git# git clone https://opendev.org/openstack/murano Cloning into 'murano'... remote: Enumerating objects: 34335, done. remote: Counting objects: 100% (34335/34335), done. remote: Compressing objects: 100% (14856/14856), done. remote: Total 34335 (delta 22813), reused 29217 (delta 18268) Receiving objects: 100% (34335/34335), 16.73 MiB | 7.42 MiB/s, done. Resolving deltas: 100% (22813/22813), done. root at kolla-2:~/git# git clone https://opendev.org/openstack/murano-agent Cloning into 'murano-agent'... remote: Enumerating objects: 6696, done. remote: Counting objects: 100% (6696/6696), done. remote: Compressing objects: 100% (2792/2792), done. remote: Total 6696 (delta 3766), reused 6475 (delta 3586) Receiving objects: 100% (6696/6696), 4.98 MiB | 5.46 MiB/s, done. Resolving deltas: 100% (3766/3766), done. 
root at kolla-2:~/git# pip3 install diskimage-builder Collecting diskimage-builder Downloading diskimage_builder-3.7.0-py3-none-any.whl (608 kB) |████████████████████████████████| 608 kB 4.8 MB/s Requirement already satisfied: stevedore>=1.20.0 in /usr/lib/python3/dist-packages (from diskimage-builder) (1.32.0) Collecting flake8<4.0.0,>=3.6.0 Downloading flake8-3.8.4-py2.py3-none-any.whl (72 kB) |████████████████████████████████| 72 kB 1.6 MB/s Requirement already satisfied: pbr!=2.1.0,>=2.0.0 in /usr/lib/python3/dist-packages (from diskimage-builder) (5.4.5) Collecting networkx>=1.10 Downloading networkx-2.5-py3-none-any.whl (1.6 MB) |████████████████████████████████| 1.6 MB 7.6 MB/s Requirement already satisfied: PyYAML>=3.12 in /usr/lib/python3/dist-packages (from diskimage-builder) (5.3.1) Collecting pycodestyle<2.7.0,>=2.6.0a1 Downloading pycodestyle-2.6.0-py2.py3-none-any.whl (41 kB) |████████████████████████████████| 41 kB 331 kB/s Collecting pyflakes<2.3.0,>=2.2.0 Downloading pyflakes-2.2.0-py2.py3-none-any.whl (66 kB) |████████████████████████████████| 66 kB 5.8 MB/s Collecting mccabe<0.7.0,>=0.6.0 Downloading mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB) Requirement already satisfied: decorator>=4.3.0 in /usr/lib/python3/dist-packages (from networkx>=1.10->diskimage-builder) (4.4.2) Installing collected packages: pycodestyle, pyflakes, mccabe, flake8, networkx, diskimage-builder Successfully installed diskimage-builder-3.7.0 flake8-3.8.4 mccabe-0.6.1 networkx-2.5 pycodestyle-2.6.0 pyflakes-2.2.0 root at kolla-2:~/git# apt install qemu-utils curl tox Reading package lists... Done Building dependency tree Reading state information... Done curl is already the newest version (7.68.0-1ubuntu2.4). curl set to manually installed. The following additional packages will be installed: ibverbs-providers javascript-common libboost-iostreams1.71.0 libboost-thread1.71.0 libibverbs1 libiscsi7 libjs-jquery libjs-sphinxdoc libjs-underscore libnl-route-3-200 librados2 librbd1 librdmacm1 python3-distlib python3-filelock python3-pluggy python3-py python3-toml python3-virtualenv qemu-block-extra sharutils virtualenv Suggested packages: apache2 | lighttpd | httpd subversion python3-pytest debootstrap sharutils-doc bsd-mailx | mailx The following NEW packages will be installed: ibverbs-providers javascript-common libboost-iostreams1.71.0 libboost-thread1.71.0 libibverbs1 libiscsi7 libjs-jquery libjs-sphinxdoc libjs-underscore libnl-route-3-200 librados2 librbd1 librdmacm1 python3-distlib python3-filelock python3-pluggy python3-py python3-toml python3-virtualenv qemu-block-extra qemu-utils sharutils tox virtualenv 0 upgraded, 24 newly installed, 0 to remove and 14 not upgraded. Need to get 8108 kB of archives. After this operation, 37.8 MB of additional disk space will be used. Do you want to continue? 
[Y/n] y Get:1 http://de.archive.ubuntu.com/ubuntu focal/main amd64 libnl-route-3-200 amd64 3.4.0-1 [149 kB] Get:2 http://de.archive.ubuntu.com/ubuntu focal/main amd64 libibverbs1 amd64 28.0-1ubuntu1 [53.6 kB] Get:3 http://de.archive.ubuntu.com/ubuntu focal/main amd64 ibverbs-providers amd64 28.0-1ubuntu1 [232 kB] Get:4 http://de.archive.ubuntu.com/ubuntu focal/main amd64 javascript-common all 11 [6066 B] Get:5 http://de.archive.ubuntu.com/ubuntu focal/main amd64 libboost-iostreams1.71.0 amd64 1.71.0-6ubuntu6 [237 kB] Get:6 http://de.archive.ubuntu.com/ubuntu focal/main amd64 libboost-thread1.71.0 amd64 1.71.0-6ubuntu6 [249 kB] Get:7 http://de.archive.ubuntu.com/ubuntu focal/main amd64 librdmacm1 amd64 28.0-1ubuntu1 [64.9 kB] Get:8 http://de.archive.ubuntu.com/ubuntu focal/main amd64 libiscsi7 amd64 1.18.0-2 [63.9 kB] Get:9 http://de.archive.ubuntu.com/ubuntu focal/main amd64 libjs-jquery all 3.3.1~dfsg-3 [329 kB] Get:10 http://de.archive.ubuntu.com/ubuntu focal/main amd64 libjs-underscore all 1.9.1~dfsg-1 [98.6 kB] Get:11 http://de.archive.ubuntu.com/ubuntu focal/main amd64 libjs-sphinxdoc all 1.8.5-7ubuntu3 [97.1 kB] Get:12 http://de.archive.ubuntu.com/ubuntu focal-updates/main amd64 librados2 amd64 15.2.7-0ubuntu0.20.04.2 [3204 kB] Get:13 http://de.archive.ubuntu.com/ubuntu focal-updates/main amd64 librbd1 amd64 15.2.7-0ubuntu0.20.04.2 [1609 kB] Get:14 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 python3-distlib all 0.3.0-1 [116 kB] Get:15 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 python3-filelock all 3.0.12-2 [7948 B] Get:16 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 python3-pluggy all 0.13.0-2 [18.4 kB] Get:17 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 python3-py all 1.8.1-1 [65.4 kB] Get:18 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 python3-toml all 0.10.0-3 [14.6 kB] Get:19 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 python3-virtualenv all 20.0.17-1 [63.4 kB] Get:20 http://de.archive.ubuntu.com/ubuntu focal-updates/main amd64 qemu-block-extra amd64 1:4.2-3ubuntu6.12 [53.8 kB] Get:21 http://de.archive.ubuntu.com/ubuntu focal-updates/main amd64 qemu-utils amd64 1:4.2-3ubuntu6.12 [974 kB] Get:22 http://de.archive.ubuntu.com/ubuntu focal/main amd64 sharutils amd64 1:4.15.2-4build1 [155 kB] Get:23 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 virtualenv all 20.0.17-1 [2132 B] Get:24 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 tox all 3.13.2-2 [244 kB] Fetched 8108 kB in 0s (31.9 MB/s) Selecting previously unselected package libnl-route-3-200:amd64. (Reading database ... 118797 files and directories currently installed.) Preparing to unpack .../00-libnl-route-3-200_3.4.0-1_amd64.deb ... Unpacking libnl-route-3-200:amd64 (3.4.0-1) ... Selecting previously unselected package libibverbs1:amd64. Preparing to unpack .../01-libibverbs1_28.0-1ubuntu1_amd64.deb ... Unpacking libibverbs1:amd64 (28.0-1ubuntu1) ... Selecting previously unselected package ibverbs-providers:amd64. Preparing to unpack .../02-ibverbs-providers_28.0-1ubuntu1_amd64.deb ... Unpacking ibverbs-providers:amd64 (28.0-1ubuntu1) ... Selecting previously unselected package javascript-common. Preparing to unpack .../03-javascript-common_11_all.deb ... Unpacking javascript-common (11) ... Selecting previously unselected package libboost-iostreams1.71.0:amd64. Preparing to unpack .../04-libboost-iostreams1.71.0_1.71.0-6ubuntu6_amd64.deb ... Unpacking libboost-iostreams1.71.0:amd64 (1.71.0-6ubuntu6) ... 
Selecting previously unselected package libboost-thread1.71.0:amd64. Preparing to unpack .../05-libboost-thread1.71.0_1.71.0-6ubuntu6_amd64.deb ... Unpacking libboost-thread1.71.0:amd64 (1.71.0-6ubuntu6) ... Selecting previously unselected package librdmacm1:amd64. Preparing to unpack .../06-librdmacm1_28.0-1ubuntu1_amd64.deb ... Unpacking librdmacm1:amd64 (28.0-1ubuntu1) ... Selecting previously unselected package libiscsi7:amd64. Preparing to unpack .../07-libiscsi7_1.18.0-2_amd64.deb ... Unpacking libiscsi7:amd64 (1.18.0-2) ... Selecting previously unselected package libjs-jquery. Preparing to unpack .../08-libjs-jquery_3.3.1~dfsg-3_all.deb ... Unpacking libjs-jquery (3.3.1~dfsg-3) ... Selecting previously unselected package libjs-underscore. Preparing to unpack .../09-libjs-underscore_1.9.1~dfsg-1_all.deb ... Unpacking libjs-underscore (1.9.1~dfsg-1) ... Selecting previously unselected package libjs-sphinxdoc. Preparing to unpack .../10-libjs-sphinxdoc_1.8.5-7ubuntu3_all.deb ... Unpacking libjs-sphinxdoc (1.8.5-7ubuntu3) ... Selecting previously unselected package librados2. Preparing to unpack .../11-librados2_15.2.7-0ubuntu0.20.04.2_amd64.deb ... Unpacking librados2 (15.2.7-0ubuntu0.20.04.2) ... Selecting previously unselected package librbd1. Preparing to unpack .../12-librbd1_15.2.7-0ubuntu0.20.04.2_amd64.deb ... Unpacking librbd1 (15.2.7-0ubuntu0.20.04.2) ... Selecting previously unselected package python3-distlib. Preparing to unpack .../13-python3-distlib_0.3.0-1_all.deb ... Unpacking python3-distlib (0.3.0-1) ... Selecting previously unselected package python3-filelock. Preparing to unpack .../14-python3-filelock_3.0.12-2_all.deb ... Unpacking python3-filelock (3.0.12-2) ... Selecting previously unselected package python3-pluggy. Preparing to unpack .../15-python3-pluggy_0.13.0-2_all.deb ... Unpacking python3-pluggy (0.13.0-2) ... Selecting previously unselected package python3-py. Preparing to unpack .../16-python3-py_1.8.1-1_all.deb ... Unpacking python3-py (1.8.1-1) ... Selecting previously unselected package python3-toml. Preparing to unpack .../17-python3-toml_0.10.0-3_all.deb ... Unpacking python3-toml (0.10.0-3) ... Selecting previously unselected package python3-virtualenv. Preparing to unpack .../18-python3-virtualenv_20.0.17-1_all.deb ... Unpacking python3-virtualenv (20.0.17-1) ... Selecting previously unselected package qemu-block-extra:amd64. Preparing to unpack .../19-qemu-block-extra_1%3a4.2-3ubuntu6.12_amd64.deb ... Unpacking qemu-block-extra:amd64 (1:4.2-3ubuntu6.12) ... Selecting previously unselected package qemu-utils. Preparing to unpack .../20-qemu-utils_1%3a4.2-3ubuntu6.12_amd64.deb ... Unpacking qemu-utils (1:4.2-3ubuntu6.12) ... Selecting previously unselected package sharutils. Preparing to unpack .../21-sharutils_1%3a4.15.2-4build1_amd64.deb ... Unpacking sharutils (1:4.15.2-4build1) ... Selecting previously unselected package virtualenv. Preparing to unpack .../22-virtualenv_20.0.17-1_all.deb ... Unpacking virtualenv (20.0.17-1) ... Selecting previously unselected package tox. Preparing to unpack .../23-tox_3.13.2-2_all.deb ... Unpacking tox (3.13.2-2) ... Setting up javascript-common (11) ... Setting up python3-filelock (3.0.12-2) ... Setting up python3-py (1.8.1-1) ... Setting up python3-distlib (0.3.0-1) ... Setting up libboost-iostreams1.71.0:amd64 (1.71.0-6ubuntu6) ... Setting up libnl-route-3-200:amd64 (3.4.0-1) ... Setting up python3-toml (0.10.0-3) ... Setting up python3-pluggy (0.13.0-2) ... 
Setting up libboost-thread1.71.0:amd64 (1.71.0-6ubuntu6) ... Setting up libjs-jquery (3.3.1~dfsg-3) ... Setting up sharutils (1:4.15.2-4build1) ... Setting up libjs-underscore (1.9.1~dfsg-1) ... Setting up libibverbs1:amd64 (28.0-1ubuntu1) ... Setting up ibverbs-providers:amd64 (28.0-1ubuntu1) ... Setting up python3-virtualenv (20.0.17-1) ... Setting up virtualenv (20.0.17-1) ... Setting up libjs-sphinxdoc (1.8.5-7ubuntu3) ... Setting up librdmacm1:amd64 (28.0-1ubuntu1) ... Setting up librados2 (15.2.7-0ubuntu0.20.04.2) ... Setting up tox (3.13.2-2) ... Setting up librbd1 (15.2.7-0ubuntu0.20.04.2) ... Setting up libiscsi7:amd64 (1.18.0-2) ... Setting up qemu-block-extra:amd64 (1:4.2-3ubuntu6.12) ... Setting up qemu-utils (1:4.2-3ubuntu6.12) ... Processing triggers for libc-bin (2.31-0ubuntu9.2) ... Processing triggers for man-db (2.9.1-1) ... Processing triggers for install-info (6.7.0.dfsg.2-5) ... root at kolla-2:~/git# export ELEMENTS_PATH=$GITDIR/murano/contrib/elements:$GITDIR/murano-agent/contrib/elements root at kolla-2:~/git# disk-image-create vm ubuntu murano-agent -o murano-agent.qcow2 2021-02-18 12:52:34.387 | murano-agent requires Python '>=3.6' but the running Python is 2.7.17 root at kolla-2:~/git# export DIB_RELEASE=focal root at kolla-2:~/git# disk-image-create vm ubuntu murano-agent -o murano-agent.qcow2 2021-02-18 12:55:41.222 | E: Unable to locate package python-pip From jay.faulkner at verizonmedia.com Mon Apr 19 17:24:38 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Mon, 19 Apr 2021 10:24:38 -0700 Subject: [E] Re: [ironic] Ironic Whiteboard v2 call for reviews In-Reply-To: References: Message-ID: > Also a formatting adjustment, change the size/style of the characters to have a better visual separation between topic name and content in the status report section. I left the subteam status area mostly alone, as many of the status updates appeared to be out of date or for features that are already landed or not being worked on. I added an item to the next meetings' agenda to discuss this. I'll add the other links you suggested. Thanks! - Jay Faulkner On Mon, Apr 19, 2021 at 1:38 AM Riccardo Pittau wrote: > That looks great, thanks! > I would probably just add a link to https://ironicbaremetal.org/ > > and to the meeting page https://wiki.openstack.org/wiki/Meetings/Ironic > > at the very top in the first section. > Also a formatting adjustment, change the size/style of the characters to > have a better visual separation between topic name and content in the > status report section. > > Cheers, > Riccardo > > > On Fri, Apr 16, 2021 at 6:31 PM Jay Faulkner < > jay.faulkner at verizonmedia.com> wrote: > >> Hi all, >> >> Iury and I spent some time this morning updating the Ironic whiteboard >> etherpad to include more immediately useful information to contributors. >> >> We placed this updated whiteboard at >> https://etherpad.opendev.org/p/IronicWhiteBoardv2 >> >> -- our approach was to prune any outdated/broken links or information, and >> focus on making the first part of the whiteboard an easy one-click place >> for folks to see easy ways to contribute. All the rest of the information >> was carried over and reformatted. >> >> Once there is consensus from the team about this being a positive change, >> we should either replace the existing IronicWhiteBoard with the contents of >> the v2 page, or just update links to point to the new one instead. >> >> What do you all think? 
>> >> Thanks, >> Jay Faulkner >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Mon Apr 19 17:27:24 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Mon, 19 Apr 2021 17:27:24 +0000 Subject: [ops][nova][victoria] Migrate cross CPU? In-Reply-To: <629d1b9e-fce7-03c1-5b4b-81b07b14eebb@redhat.com> References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> <5bed6419-6c85-b39f-1226-cc517fe911de@redhat.com> <0670B960225633449A24709C291A525251114523@COM01.performair.local> <629d1b9e-fce7-03c1-5b4b-81b07b14eebb@redhat.com> Message-ID: <0670B960225633449A24709C291A5252511214EB@COM01.performair.local> All; I think I've worked through the issue with ssh, however I now have another issue. I've attached an extract from the Nova Compute instance on the new server. If I'm reading this correctly, is it having trouble accessing Ceph? Also, this machine I used here can be thrown away, but is there a way to recover it? Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Sean Mooney [mailto:smooney at redhat.com] Sent: Monday, April 19, 2021 3:38 AM To: Dominic Hilsbos; openstack-discuss at lists.openstack.org Subject: Re: [ops][nova][victoria] Migrate cross CPU? On 19/04/2021 03:51, DHilsbos at performair.com wrote: > Sean; > > Thank you, your suggestion led me to a problem with ssh. I was a little surprised by this, as live migration works. thats a pretty common issue. live migration does not use ssh or rsync to to copy the vm disk data that is done by qemu. for cold migration the data is copied by nova using one of 2 drivers either ssh/scp or rsync. > > I reviewed: > https://docs.openstack.org/nova/victoria/admin/ssh-configuration.html#cli-os-migrate-cfg-ssh > and found that I had a problem with the authorized keys file. I took care of that, and it still didn't work. > > Here's what came out of the nova compute log: > 2021-04-18 19:24:27.201 10808 ERROR oslo_messaging.rpc.server [req-225e7beb-f186-4235-abce-efcf4924d505 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Exception during message handling: nova.exception.ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command. > Command: ssh -o BatchMode=yes 10.0.128.20 mkdir -p /var/lib/nova/instances/64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 > Exit code: 255 > Stdout: '' > Stderr: 'Host key verification failed.\r\n' > > When I do su - nova on the origin server, as per the above, then ssh to the receiving server, I get this: > Load key "/etc/nova/migration/identity": invalid format > > /etc/nova/migration/identity isn't mentioned anywhere in the documentation above. > > I tried: > cat id_rsa > /etc/nova/migration/identity > and > cat id_rsa.pub >> /etc/nova/migration/authorized_keys > > Using the keys copied in the documentation above; still no go. Same 'Host key verification failed.\r\n' result. > > What am I missing? you will need to su to the nova user and make sure the key has the correct permissions set typically 600 and is owned by nova. then you need to do the key exchange and ensure its added to the known hosts. i normally do that by manually sshing as the nova user to the destination hosts. 
obviously if its more then a cople of hosts you will want to use ansible or something to automate the process. there are basicaly 3 thing you need to do. 1.) copy a key with out a password to the nova user on all hosts and set permission to 600 2.) add the public key to authorized_keys on all hosts 3.) pre populate the known_hosts on all hosts for all other hosts.(you can use ssh-keyscan for this)      if you have more then about 20 hosts do this on one host and copy to all other because quadratic with large number of hosts takes a while... > > Thank you, > > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > -----Original Message----- > From: Sean Mooney [mailto:smooney at redhat.com] > Sent: Friday, April 16, 2021 9:58 AM > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][victoria] Migrate cross CPU? > > hum ok the best way to debug this is to lis the server events and get > the request id for the migration > it may be req-ff109e53-74e0-40de-8ec7-29aff600b5f7 based on the logs you > posted but you should see more info > in the api, conductor and compute logs for that request id. > > given the state has not change i suspect it failed rather early. > > its possible that you are expirence an issue with the rabbitmq service > and rpc calls are bing lost but > i woudl not expect to see logs realted to this in the scudler while the > vm is stilll in the SHUTOFF status. > > can you do "openstack server event list > 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3" then get the most recent > resize event's request id and see if there are any other logs. > > regard > sean. > > (note i think it will be listed as a resize not a migrate since > interanlly migreate is implmented as resize but to the same flavour). > > On 16/04/2021 17:04, DHilsbos at performair.com wrote: >> Sean; >> >> Thank you very much for your response. I wasn't aware of the state change to resize_verify, that's useful. >> >> Unfortunately, at present, the state change is not occurring. 
>> >> Here's a series of commands, with output: >> >> #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >> +-------------------------------------+----------------------------------------------------------+ >> | Field | Value | >> +-------------------------------------+----------------------------------------------------------+ >> | OS-DCF:diskConfig | MANUAL | >> | OS-EXT-AZ:availability_zone | az-elcom-1 | >> | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | >> | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | >> | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | >> | OS-EXT-STS:power_state | Shutdown | >> | OS-EXT-STS:task_state | None | >> | OS-EXT-STS:vm_state | stopped | >> | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | >> | OS-SRV-USG:terminated_at | None | >> | accessIPv4 | | >> | accessIPv6 | | >> | addresses | it-network=10.255.127.208, 10.0.160.35 | >> | config_drive | | >> | created | 2021-03-06T04:35:51Z | >> | flavor | m4.large (8) | >> | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | >> | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | >> | image | N/A (booted from volume) | >> | key_name | None | >> | name | Java Dev | >> | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | >> | properties | | >> | security_groups | name='allow-ping' | >> | | name='allow-ssh' | >> | | name='default' | >> | status | SHUTOFF | >> | updated | 2021-04-16T15:52:07Z | >> | user_id | 69b73ea8f55c46a99021e77ebf70b62a | >> | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | >> +-------------------------------------+----------------------------------------------------------+ >> #openstack server migrate --host s700066.463.os.mcgown.enterprises --os-compute-api-version 2.56 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >> #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >> +-------------------------------------+----------------------------------------------------------+ >> | Field | Value | >> +-------------------------------------+----------------------------------------------------------+ >> | OS-DCF:diskConfig | MANUAL | >> | OS-EXT-AZ:availability_zone | az-elcom-1 | >> | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | >> | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | >> | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | >> | OS-EXT-STS:power_state | Shutdown | >> | OS-EXT-STS:task_state | None | >> | OS-EXT-STS:vm_state | stopped | >> | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | >> | OS-SRV-USG:terminated_at | None | >> | accessIPv4 | | >> | accessIPv6 | | >> | addresses | it-network=10.255.127.208, 10.0.160.35 | >> | config_drive | | >> | created | 2021-03-06T04:35:51Z | >> | flavor | m4.large (8) | >> | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | >> | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | >> | image | N/A (booted from volume) | >> | key_name | None | >> | name | Java Dev | >> | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | >> | properties | | >> | security_groups | name='allow-ping' | >> | | name='allow-ssh' | >> | | name='default' | >> | status | SHUTOFF | >> | updated | 2021-04-16T15:53:32Z | >> | user_id | 69b73ea8f55c46a99021e77ebf70b62a | >> | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | >> +-------------------------------------+----------------------------------------------------------+ >> #tail /var/log/nova/nova-conductor.log >> #tail /var/log/nova/nova-scheduler.log >> 2021-04-16 08:53:24.870 3773 
INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter only checking host s700066.463.os.mcgown.enterprises and node s700066.463.os.mcgown.enterprises >> 2021-04-16 08:53:24.871 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter ignoring hosts: >> >> Both Cinder volume storage, and ephemeral storage are being handled by Ceph. >> >> Thank you, >> >> Dominic L. Hilsbos, MBA >> Director – Information Technology >> Perform Air International Inc. >> DHilsbos at PerformAir.com >> www.PerformAir.com >> >> >> -----Original Message----- >> From: Sean Mooney [mailto:smooney at redhat.com] >> Sent: Friday, April 16, 2021 6:28 AM >> To: openstack-discuss at lists.openstack.org >> Subject: Re: [ops][nova][victoria] Migrate cross CPU? >> >> >> >> On 15/04/2021 19:05, DHilsbos at performair.com wrote: >>> All; >>> >>> I seem to have generated another issue for myself... >>> >>> I built our Victoria cloud initially on Intel Atom servers. We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. >>> >>> I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. >>> >>> My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. >>> >>> I also can't find an option to pass to the openstack server start command which requests a specific host. >>> >>> Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? >> you should be able to cold migrate them using the migrate command but >> that should put the servers into resize_verify and then you need >> to confirm the migration to complte it. we will not clean up the vm on >> the source node until you do that last step. >> >>> Thank you, >>> >>> Dominic L. Hilsbos, MBA >>> Director - Information Technology >>> Perform Air International Inc. >>> DHilsbos at PerformAir.com >>> www.PerformAir.com >>> >>> >>> -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: nova_compute_log.txt URL: From smooney at redhat.com Mon Apr 19 17:51:55 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 19 Apr 2021 18:51:55 +0100 Subject: [ops][nova][victoria] Migrate cross CPU? In-Reply-To: <0670B960225633449A24709C291A5252511214EB@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> <5bed6419-6c85-b39f-1226-cc517fe911de@redhat.com> <0670B960225633449A24709C291A525251114523@COM01.performair.local> <629d1b9e-fce7-03c1-5b4b-81b07b14eebb@redhat.com> <0670B960225633449A24709C291A5252511214EB@COM01.performair.local> Message-ID: <5a66544e-0ca3-abb2-b716-1c05657ac373@redhat.com> On 19/04/2021 18:27, DHilsbos at performair.com wrote: > All; > > I think I've worked through the issue with ssh, however I now have another issue. I've attached an extract from the Nova Compute instance on the new server. 
> > If I'm reading this correctly, is it having trouble accessing Ceph? Also, this machine I used here can be thrown away, but is there a way to recover it? Unexpected vif_type=binding_failed is the error and that indicates a issue with neutron. specifically in this case the excation was raised as part of _finish_resize when we update the neutron port host filed to the destination before we generate teh domain xml. in this case the neutron ml2 driver refused to bind the port to the new host. req-228b5f98-e3a4-4c22-8c90-eacce6efb091 is the nova request id but we might also use the same on ewhne calling neutron so you cloud try and see if there is an message with that request id in the neutron-server log. if that return nothing then you can check with the port uuid 2e7d818a-43e1-48fb-a4d3-9e36034a46bf if you correct the neutron issue perhaps by manually unsetting and setting the binding_host on the port as an admin to retrigger port bininding you can hard reboot the vm to fix it. you should see an issue in the neturon logs however. i dont see anyth8ng related to ceph in those logs so i think your storage is likely fine. > > Thank you, > > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -----Original Message----- > From: Sean Mooney [mailto:smooney at redhat.com] > Sent: Monday, April 19, 2021 3:38 AM > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][victoria] Migrate cross CPU? > > > > On 19/04/2021 03:51, DHilsbos at performair.com wrote: >> Sean; >> >> Thank you, your suggestion led me to a problem with ssh. I was a little surprised by this, as live migration works. > thats a pretty common issue. > live migration does not use ssh or rsync to to copy the vm disk data > that is done by qemu. > for cold migration the data is copied by nova using one of 2 drivers > either ssh/scp or rsync. >> I reviewed: >> https://docs.openstack.org/nova/victoria/admin/ssh-configuration.html#cli-os-migrate-cfg-ssh >> and found that I had a problem with the authorized keys file. I took care of that, and it still didn't work. >> >> Here's what came out of the nova compute log: >> 2021-04-18 19:24:27.201 10808 ERROR oslo_messaging.rpc.server [req-225e7beb-f186-4235-abce-efcf4924d505 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Exception during message handling: nova.exception.ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command. >> Command: ssh -o BatchMode=yes 10.0.128.20 mkdir -p /var/lib/nova/instances/64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >> Exit code: 255 >> Stdout: '' >> Stderr: 'Host key verification failed.\r\n' >> >> When I do su - nova on the origin server, as per the above, then ssh to the receiving server, I get this: >> Load key "/etc/nova/migration/identity": invalid format >> >> /etc/nova/migration/identity isn't mentioned anywhere in the documentation above. >> >> I tried: >> cat id_rsa > /etc/nova/migration/identity >> and >> cat id_rsa.pub >> /etc/nova/migration/authorized_keys >> >> Using the keys copied in the documentation above; still no go. Same 'Host key verification failed.\r\n' result. >> >> What am I missing? > you will need to su to the nova user and make sure the key has the > correct permissions set typically 600 > and is owned by nova. then you need to do the key exchange and ensure > its added to the known hosts. 
> i normally do that by manually sshing as the nova user to the > destination hosts. > > obviously if its more then a cople of hosts you will want to use ansible > or something to automate the process. > there are basicaly 3 thing you need to do. > 1.) copy a key with out a password to the nova user on all hosts and set > permission to 600 > 2.) add the public key to authorized_keys on all hosts > 3.) pre populate the known_hosts on all hosts for all other hosts.(you > can use ssh-keyscan for this) >      if you have more then about 20 hosts do this on one host and copy > to all other because quadratic with large number of hosts takes a while... > >> Thank you, >> >> Dominic L. Hilsbos, MBA >> Director – Information Technology >> Perform Air International Inc. >> DHilsbos at PerformAir.com >> www.PerformAir.com >> >> -----Original Message----- >> From: Sean Mooney [mailto:smooney at redhat.com] >> Sent: Friday, April 16, 2021 9:58 AM >> To: Dominic Hilsbos; openstack-discuss at lists.openstack.org >> Subject: Re: [ops][nova][victoria] Migrate cross CPU? >> >> hum ok the best way to debug this is to lis the server events and get >> the request id for the migration >> it may be req-ff109e53-74e0-40de-8ec7-29aff600b5f7 based on the logs you >> posted but you should see more info >> in the api, conductor and compute logs for that request id. >> >> given the state has not change i suspect it failed rather early. >> >> its possible that you are expirence an issue with the rabbitmq service >> and rpc calls are bing lost but >> i woudl not expect to see logs realted to this in the scudler while the >> vm is stilll in the SHUTOFF status. >> >> can you do "openstack server event list >> 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3" then get the most recent >> resize event's request id and see if there are any other logs. >> >> regard >> sean. >> >> (note i think it will be listed as a resize not a migrate since >> interanlly migreate is implmented as resize but to the same flavour). >> >> On 16/04/2021 17:04, DHilsbos at performair.com wrote: >>> Sean; >>> >>> Thank you very much for your response. I wasn't aware of the state change to resize_verify, that's useful. >>> >>> Unfortunately, at present, the state change is not occurring. 
>>> >>> Here's a series of commands, with output: >>> >>> #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >>> +-------------------------------------+----------------------------------------------------------+ >>> | Field | Value | >>> +-------------------------------------+----------------------------------------------------------+ >>> | OS-DCF:diskConfig | MANUAL | >>> | OS-EXT-AZ:availability_zone | az-elcom-1 | >>> | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | >>> | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | >>> | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | >>> | OS-EXT-STS:power_state | Shutdown | >>> | OS-EXT-STS:task_state | None | >>> | OS-EXT-STS:vm_state | stopped | >>> | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | >>> | OS-SRV-USG:terminated_at | None | >>> | accessIPv4 | | >>> | accessIPv6 | | >>> | addresses | it-network=10.255.127.208, 10.0.160.35 | >>> | config_drive | | >>> | created | 2021-03-06T04:35:51Z | >>> | flavor | m4.large (8) | >>> | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | >>> | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | >>> | image | N/A (booted from volume) | >>> | key_name | None | >>> | name | Java Dev | >>> | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | >>> | properties | | >>> | security_groups | name='allow-ping' | >>> | | name='allow-ssh' | >>> | | name='default' | >>> | status | SHUTOFF | >>> | updated | 2021-04-16T15:52:07Z | >>> | user_id | 69b73ea8f55c46a99021e77ebf70b62a | >>> | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | >>> +-------------------------------------+----------------------------------------------------------+ >>> #openstack server migrate --host s700066.463.os.mcgown.enterprises --os-compute-api-version 2.56 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >>> #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >>> +-------------------------------------+----------------------------------------------------------+ >>> | Field | Value | >>> +-------------------------------------+----------------------------------------------------------+ >>> | OS-DCF:diskConfig | MANUAL | >>> | OS-EXT-AZ:availability_zone | az-elcom-1 | >>> | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | >>> | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | >>> | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | >>> | OS-EXT-STS:power_state | Shutdown | >>> | OS-EXT-STS:task_state | None | >>> | OS-EXT-STS:vm_state | stopped | >>> | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | >>> | OS-SRV-USG:terminated_at | None | >>> | accessIPv4 | | >>> | accessIPv6 | | >>> | addresses | it-network=10.255.127.208, 10.0.160.35 | >>> | config_drive | | >>> | created | 2021-03-06T04:35:51Z | >>> | flavor | m4.large (8) | >>> | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | >>> | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | >>> | image | N/A (booted from volume) | >>> | key_name | None | >>> | name | Java Dev | >>> | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | >>> | properties | | >>> | security_groups | name='allow-ping' | >>> | | name='allow-ssh' | >>> | | name='default' | >>> | status | SHUTOFF | >>> | updated | 2021-04-16T15:53:32Z | >>> | user_id | 69b73ea8f55c46a99021e77ebf70b62a | >>> | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | >>> +-------------------------------------+----------------------------------------------------------+ >>> #tail /var/log/nova/nova-conductor.log 
>>> #tail /var/log/nova/nova-scheduler.log >>> 2021-04-16 08:53:24.870 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter only checking host s700066.463.os.mcgown.enterprises and node s700066.463.os.mcgown.enterprises >>> 2021-04-16 08:53:24.871 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter ignoring hosts: >>> >>> Both Cinder volume storage, and ephemeral storage are being handled by Ceph. >>> >>> Thank you, >>> >>> Dominic L. Hilsbos, MBA >>> Director – Information Technology >>> Perform Air International Inc. >>> DHilsbos at PerformAir.com >>> www.PerformAir.com >>> >>> >>> -----Original Message----- >>> From: Sean Mooney [mailto:smooney at redhat.com] >>> Sent: Friday, April 16, 2021 6:28 AM >>> To: openstack-discuss at lists.openstack.org >>> Subject: Re: [ops][nova][victoria] Migrate cross CPU? >>> >>> >>> >>> On 15/04/2021 19:05, DHilsbos at performair.com wrote: >>>> All; >>>> >>>> I seem to have generated another issue for myself... >>>> >>>> I built our Victoria cloud initially on Intel Atom servers. We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. >>>> >>>> I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. >>>> >>>> My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. >>>> >>>> I also can't find an option to pass to the openstack server start command which requests a specific host. >>>> >>>> Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? >>> you should be able to cold migrate them using the migrate command but >>> that should put the servers into resize_verify and then you need >>> to confirm the migration to complte it. we will not clean up the vm on >>> the source node until you do that last step. >>> >>>> Thank you, >>>> >>>> Dominic L. Hilsbos, MBA >>>> Director - Information Technology >>>> Perform Air International Inc. >>>> DHilsbos at PerformAir.com >>>> www.PerformAir.com >>>> >>>> >>>> From Albert.Shih at obspm.fr Mon Apr 19 18:12:56 2021 From: Albert.Shih at obspm.fr (Albert Shih) Date: Mon, 19 Apr 2021 20:12:56 +0200 Subject: cinder + Unity Message-ID: Hi everyone, I'm a total newbie with openstack, currently I'm trying to put a POC with a Unity storage element, 4 computes, and few servers (cinder, keystone, glance, neutron, nova, placement and horizon). I think my keystone, glance, placement are working (at least they past the test). Currently I'm trying to make cinder working with my Unity (480), the objectif are to use iSCSI. 
Here is the configuration of my /etc/cinder/cinder.conf:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
enabled_backends = unity
transport_url = rabbit://openstack:XXXXXX at amqp-cloud.private.FQDN/openstack
auth_strategy = keystone
debug = True
verbose = True

[database]
connection = mysql+pymysql://cinder:XXXXXXX at mariadb-cloud.private.FQDN/cinder

[keystone_authtoken]
www_authenticate_uri = http://keystone.private.FQDN:5000/v3
auth_url = http://keystone.private.FQDN:5000
identity_uri = http://keystone.private.FQDN:5000
memcached_servers = memcached-cloud.private.FQDN:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = XXXXXX

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[unity]
# Storage protocol
storage_protocol = iSCSI
# Unisphere IP
san_ip = onering-remote.FQDN
# Unisphere username and password
san_login = openstack
san_password = "XXXXX"
# Volume driver name
volume_driver = cinder.volume.drivers.dell_emc.unity.Driver
# backend's name
volume_backend_name = Unitiy_ISCSI
unity_io_ports = *_enp1s0
unity_storage_pool_names = onering

When I try to create a volume with

openstack volume create volumetest --type thick_volume_type --size 100

I don't even see (with tcpdump) the cinder server trying to connect to
onering-remote.FQDN.

Inside my /var/log/cinder/cinder-scheduler.log I have:

2021-04-19 18:06:56.805 21315 INFO cinder.scheduler.base_filter [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee b1d58ebae6b84f7586ad63b94203d7ae - - -] Filtering removed all hosts for the request with volume ID '06e5f07d-766f-4d07-b3bf-6153a2cf6abd'. Filter results: AvailabilityZoneFilter: (start: 0, end: 0), CapacityFilter: (start: 0, end: 0), CapabilitiesFilter: (start: 0, end: 0)
2021-04-19 18:06:56.806 21315 WARNING cinder.scheduler.filter_scheduler [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee b1d58ebae6b84f7586ad63b94203d7ae - - -] No weighed backend found for volume with properties: {'id': '5f16fc1f-76ff-41ee-8927-56925cf7b00f', 'name': 'thick_volume_type', 'description': None, 'is_public': True, 'projects': [], 'extra_specs': {'provisioning:type': 'thick', 'thick_provisioning_support': 'True'}, 'qos_specs_id': None, 'created_at': '2021-04-19T15:07:09.000000', 'updated_at': None, 'deleted_at': None, 'deleted': False}
2021-04-19 18:06:56.806 21315 INFO cinder.message.api [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee b1d58ebae6b84f7586ad63b94203d7ae - - -] Creating message record for request_id = req-4808cc9d-b9c3-44cb-8cae-7503db0b0256
2021-04-19 18:06:56.811 21315 ERROR cinder.scheduler.flows.create_volume [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee b1d58ebae6b84f7586ad63b94203d7ae - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available

It seems (to me) that cinder doesn't even try to use the unity backend....
Any help ?
Regards -- Albert SHIH Observatoire de Paris Heure local/Local time: Mon Apr 19 08:01:37 PM CEST 2021 From DHilsbos at performair.com Mon Apr 19 18:22:58 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Mon, 19 Apr 2021 18:22:58 +0000 Subject: [ops][nova][victoria] Migrate cross CPU? In-Reply-To: <5a66544e-0ca3-abb2-b716-1c05657ac373@redhat.com> References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> <5bed6419-6c85-b39f-1226-cc517fe911de@redhat.com> <0670B960225633449A24709C291A525251114523@COM01.performair.local> <629d1b9e-fce7-03c1-5b4b-81b07b14eebb@redhat.com> <0670B960225633449A24709C291A5252511214EB@COM01.performair.local> <5a66544e-0ca3-abb2-b716-1c05657ac373@redhat.com> Message-ID: <0670B960225633449A24709C291A5252511226CF@COM01.performair.local> Sean; Thank you for all your help, the migrated server was able to be force rebooted. For reference, I had the physical network configuration slightly off. Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Sean Mooney [mailto:smooney at redhat.com] Sent: Monday, April 19, 2021 10:52 AM To: Dominic Hilsbos; openstack-discuss at lists.openstack.org Subject: Re: [ops][nova][victoria] Migrate cross CPU? On 19/04/2021 18:27, DHilsbos at performair.com wrote: > All; > > I think I've worked through the issue with ssh, however I now have another issue. I've attached an extract from the Nova Compute instance on the new server. > > If I'm reading this correctly, is it having trouble accessing Ceph? Also, this machine I used here can be thrown away, but is there a way to recover it? Unexpected vif_type=binding_failed is the error and that indicates a issue with neutron. specifically in this case the excation was raised as part of _finish_resize when we update the neutron port host filed to the destination before we generate teh domain xml. in this case the neutron ml2 driver refused to bind the port to the new host. req-228b5f98-e3a4-4c22-8c90-eacce6efb091 is the nova request id but we might also use the same on ewhne calling neutron so you cloud try and see if there is an message with that request id in the neutron-server log. if that return nothing then you can check with the port uuid 2e7d818a-43e1-48fb-a4d3-9e36034a46bf if you correct the neutron issue perhaps by manually unsetting and setting the binding_host on the port as an admin to retrigger port bininding you can hard reboot the vm to fix it. you should see an issue in the neturon logs however. i dont see anyth8ng related to ceph in those logs so i think your storage is likely fine. > > Thank you, > > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -----Original Message----- > From: Sean Mooney [mailto:smooney at redhat.com] > Sent: Monday, April 19, 2021 3:38 AM > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][victoria] Migrate cross CPU? > > > > On 19/04/2021 03:51, DHilsbos at performair.com wrote: >> Sean; >> >> Thank you, your suggestion led me to a problem with ssh. I was a little surprised by this, as live migration works. > thats a pretty common issue. > live migration does not use ssh or rsync to to copy the vm disk data > that is done by qemu. 
> for cold migration the data is copied by nova using one of 2 drivers > either ssh/scp or rsync. >> I reviewed: >> https://docs.openstack.org/nova/victoria/admin/ssh-configuration.html#cli-os-migrate-cfg-ssh >> and found that I had a problem with the authorized keys file. I took care of that, and it still didn't work. >> >> Here's what came out of the nova compute log: >> 2021-04-18 19:24:27.201 10808 ERROR oslo_messaging.rpc.server [req-225e7beb-f186-4235-abce-efcf4924d505 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Exception during message handling: nova.exception.ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command. >> Command: ssh -o BatchMode=yes 10.0.128.20 mkdir -p /var/lib/nova/instances/64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >> Exit code: 255 >> Stdout: '' >> Stderr: 'Host key verification failed.\r\n' >> >> When I do su - nova on the origin server, as per the above, then ssh to the receiving server, I get this: >> Load key "/etc/nova/migration/identity": invalid format >> >> /etc/nova/migration/identity isn't mentioned anywhere in the documentation above. >> >> I tried: >> cat id_rsa > /etc/nova/migration/identity >> and >> cat id_rsa.pub >> /etc/nova/migration/authorized_keys >> >> Using the keys copied in the documentation above; still no go. Same 'Host key verification failed.\r\n' result. >> >> What am I missing? > you will need to su to the nova user and make sure the key has the > correct permissions set typically 600 > and is owned by nova. then you need to do the key exchange and ensure > its added to the known hosts. > i normally do that by manually sshing as the nova user to the > destination hosts. > > obviously if its more then a cople of hosts you will want to use ansible > or something to automate the process. > there are basicaly 3 thing you need to do. > 1.) copy a key with out a password to the nova user on all hosts and set > permission to 600 > 2.) add the public key to authorized_keys on all hosts > 3.) pre populate the known_hosts on all hosts for all other hosts.(you > can use ssh-keyscan for this) >      if you have more then about 20 hosts do this on one host and copy > to all other because quadratic with large number of hosts takes a while... > >> Thank you, >> >> Dominic L. Hilsbos, MBA >> Director – Information Technology >> Perform Air International Inc. >> DHilsbos at PerformAir.com >> www.PerformAir.com >> >> -----Original Message----- >> From: Sean Mooney [mailto:smooney at redhat.com] >> Sent: Friday, April 16, 2021 9:58 AM >> To: Dominic Hilsbos; openstack-discuss at lists.openstack.org >> Subject: Re: [ops][nova][victoria] Migrate cross CPU? >> >> hum ok the best way to debug this is to lis the server events and get >> the request id for the migration >> it may be req-ff109e53-74e0-40de-8ec7-29aff600b5f7 based on the logs you >> posted but you should see more info >> in the api, conductor and compute logs for that request id. >> >> given the state has not change i suspect it failed rather early. >> >> its possible that you are expirence an issue with the rabbitmq service >> and rpc calls are bing lost but >> i woudl not expect to see logs realted to this in the scudler while the >> vm is stilll in the SHUTOFF status. >> >> can you do "openstack server event list >> 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3" then get the most recent >> resize event's request id and see if there are any other logs. >> >> regard >> sean. 
>> >> (note i think it will be listed as a resize not a migrate since >> interanlly migreate is implmented as resize but to the same flavour). >> >> On 16/04/2021 17:04, DHilsbos at performair.com wrote: >>> Sean; >>> >>> Thank you very much for your response. I wasn't aware of the state change to resize_verify, that's useful. >>> >>> Unfortunately, at present, the state change is not occurring. >>> >>> Here's a series of commands, with output: >>> >>> #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >>> +-------------------------------------+----------------------------------------------------------+ >>> | Field | Value | >>> +-------------------------------------+----------------------------------------------------------+ >>> | OS-DCF:diskConfig | MANUAL | >>> | OS-EXT-AZ:availability_zone | az-elcom-1 | >>> | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | >>> | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | >>> | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | >>> | OS-EXT-STS:power_state | Shutdown | >>> | OS-EXT-STS:task_state | None | >>> | OS-EXT-STS:vm_state | stopped | >>> | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | >>> | OS-SRV-USG:terminated_at | None | >>> | accessIPv4 | | >>> | accessIPv6 | | >>> | addresses | it-network=10.255.127.208, 10.0.160.35 | >>> | config_drive | | >>> | created | 2021-03-06T04:35:51Z | >>> | flavor | m4.large (8) | >>> | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | >>> | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | >>> | image | N/A (booted from volume) | >>> | key_name | None | >>> | name | Java Dev | >>> | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | >>> | properties | | >>> | security_groups | name='allow-ping' | >>> | | name='allow-ssh' | >>> | | name='default' | >>> | status | SHUTOFF | >>> | updated | 2021-04-16T15:52:07Z | >>> | user_id | 69b73ea8f55c46a99021e77ebf70b62a | >>> | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | >>> +-------------------------------------+----------------------------------------------------------+ >>> #openstack server migrate --host s700066.463.os.mcgown.enterprises --os-compute-api-version 2.56 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >>> #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 >>> +-------------------------------------+----------------------------------------------------------+ >>> | Field | Value | >>> +-------------------------------------+----------------------------------------------------------+ >>> | OS-DCF:diskConfig | MANUAL | >>> | OS-EXT-AZ:availability_zone | az-elcom-1 | >>> | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | >>> | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | >>> | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | >>> | OS-EXT-STS:power_state | Shutdown | >>> | OS-EXT-STS:task_state | None | >>> | OS-EXT-STS:vm_state | stopped | >>> | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | >>> | OS-SRV-USG:terminated_at | None | >>> | accessIPv4 | | >>> | accessIPv6 | | >>> | addresses | it-network=10.255.127.208, 10.0.160.35 | >>> | config_drive | | >>> | created | 2021-03-06T04:35:51Z | >>> | flavor | m4.large (8) | >>> | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | >>> | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | >>> | image | N/A (booted from volume) | >>> | key_name | None | >>> | name | Java Dev | >>> | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | >>> | properties | | >>> | security_groups | 
name='allow-ping' | >>> | | name='allow-ssh' | >>> | | name='default' | >>> | status | SHUTOFF | >>> | updated | 2021-04-16T15:53:32Z | >>> | user_id | 69b73ea8f55c46a99021e77ebf70b62a | >>> | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | >>> +-------------------------------------+----------------------------------------------------------+ >>> #tail /var/log/nova/nova-conductor.log >>> #tail /var/log/nova/nova-scheduler.log >>> 2021-04-16 08:53:24.870 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter only checking host s700066.463.os.mcgown.enterprises and node s700066.463.os.mcgown.enterprises >>> 2021-04-16 08:53:24.871 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter ignoring hosts: >>> >>> Both Cinder volume storage, and ephemeral storage are being handled by Ceph. >>> >>> Thank you, >>> >>> Dominic L. Hilsbos, MBA >>> Director – Information Technology >>> Perform Air International Inc. >>> DHilsbos at PerformAir.com >>> www.PerformAir.com >>> >>> >>> -----Original Message----- >>> From: Sean Mooney [mailto:smooney at redhat.com] >>> Sent: Friday, April 16, 2021 6:28 AM >>> To: openstack-discuss at lists.openstack.org >>> Subject: Re: [ops][nova][victoria] Migrate cross CPU? >>> >>> >>> >>> On 15/04/2021 19:05, DHilsbos at performair.com wrote: >>>> All; >>>> >>>> I seem to have generated another issue for myself... >>>> >>>> I built our Victoria cloud initially on Intel Atom servers. We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. >>>> >>>> I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. >>>> >>>> My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. >>>> >>>> I also can't find an option to pass to the openstack server start command which requests a specific host. >>>> >>>> Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? >>> you should be able to cold migrate them using the migrate command but >>> that should put the servers into resize_verify and then you need >>> to confirm the migration to complte it. we will not clean up the vm on >>> the source node until you do that last step. >>> >>>> Thank you, >>>> >>>> Dominic L. Hilsbos, MBA >>>> Director - Information Technology >>>> Perform Air International Inc. >>>> DHilsbos at PerformAir.com >>>> www.PerformAir.com >>>> >>>> >>>> From rdhasman at redhat.com Mon Apr 19 19:05:46 2021 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Tue, 20 Apr 2021 00:35:46 +0530 Subject: Fwd: cinder + Unity In-Reply-To: References: Message-ID: Hi Albert, On Mon, Apr 19, 2021 at 11:45 PM Albert Shih wrote: > Hi everyone, > > I'm a total newbie with openstack, currently I'm trying to put a POC with a > Unity storage element, 4 computes, and few servers (cinder, keystone, > glance, neutron, nova, placement and horizon). 
> > I think my keystone, glance, placement are working (at least they past the > test). > > Currently I'm trying to make cinder working with my Unity (480), the > objectif are to use iSCSI. > > Here the configuration of my /etc/cinder/cinder.conf > > [DEFAULT] > rootwrap_config = /etc/cinder/rootwrap.conf > api_paste_confg = /etc/cinder/api-paste.ini > iscsi_helper = tgtadm > volume_name_template = volume-%s > volume_group = cinder-volumes > verbose = True > auth_strategy = keystone > state_path = /var/lib/cinder > lock_path = /var/lock/cinder > volumes_dir = /var/lib/cinder/volumes > enabled_backends = unity > transport_url = rabbit://openstack:XXXXXX at amqp-cloud.private.FQDN > /openstack > auth_strategy = keystone > debug = True > verbose = True > > [database] > connection = mysql+pymysql://cinder:XXXXXXX at mariadb-cloud.private.FQDN > /cinder > > [keystone_authtoken] > www_authenticate_uri = http://keystone.private.FQDN:5000/v3 > auth_url = http://keystone.private.FQDN:5000 > identity_uri = http://keystone.private.FQDN:5000 > memcached_servers = memcached-cloud.private.FQDN:11211 > auth_type = password > project_domain_name = default > user_domain_name = default > project_name = service > username = cinder > password = XXXXXX > > [oslo_concurrency] > lock_path = /var/lib/cinder/tmp > > [unity] > # Storage protocol > storage_protocol = iSCSI > # Unisphere IP > san_ip = onering-remote.FQDN > # Unisphere username and password > san_login = openstack > san_password = "XXXXX" > # Volume driver name > volume_driver = cinder.volume.drivers.dell_emc.unity.Driver > # backend's name > volume_backend_name = Unitiy_ISCSI > This might be something to look at with the wrong spelling causing mismatch. > unity_io_ports = *_enp1s0 > unity_storage_pool_names = onering > > When I'm trying to create a storage through a > > openstack volume create volumetest --type thick_volume_type --size 100 > > I don't even see (with tcpdump) the cinder server trying to connect to > > onering-remote.FQDN > > Inside my /var/log/cinder/cinder-scheduler.log I have > > 2021-04-19 18:06:56.805 21315 INFO cinder.scheduler.base_filter > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > b1d58ebae6b84f7586ad63b94203d7ae - - -] Filtering removed all hosts for the > request with volume ID '06e5f07d-766f-4d07-b3bf-6153a2cf6abd'. Filter > results: AvailabilityZoneFilter: (start: 0, end: 0), CapacityFilter: > (start: 0, end: 0), CapabilitiesFilter: (start: 0, end: 0) > This log mentions that no host is valid to pass the 3 filters in the scheduler. 
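A quick way to see what the scheduler is working with (a sketch only, assuming admin credentials; this part is not specific to the Unity driver): "start: 0" on the very first filter means no backend was known to the scheduler at all, so the first thing to confirm is that a cinder-volume service is registered and up:

  # expect a cinder-volume row whose Host looks like <hostname>@unity
  # ("unity" being the enabled_backends name above) with State "up"
  openstack volume service list

  # the same information is available on the controller via
  cinder-manage service list

If no cinder-volume row shows up, or it is down, the [unity] backend never reaches the scheduler and every create will end in "No valid backend was found".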
> 2021-04-19 18:06:56.806 21315 WARNING cinder.scheduler.filter_scheduler > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > b1d58ebae6b84f7586ad63b94203d7ae - - -] No weighed backend found for volume > with properties: {'id': '5f16fc1f-76ff-41ee-8927-56925cf7b00f', 'name': > 'thick_volume_type', 'description': None, 'is_public': True, 'projects': > [], 'extra_specs': {'provisioning:type': 'thick', > 'thick_provisioning_support': 'True'}, 'qos_specs_id': None, 'created_at': > '2021-04-19T15:07:09.000000', 'updated_at': None, 'deleted_at': None, > 'deleted': False} > 2021-04-19 18:06:56.806 21315 INFO cinder.message.api > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > b1d58ebae6b84f7586ad63b94203d7ae - - -] Creating message record for > request_id = req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 > 2021-04-19 18:06:56.811 21315 ERROR cinder.scheduler.flows.create_volume > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > b1d58ebae6b84f7586ad63b94203d7ae - - -] Failed to run task > cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: > No valid backend was found. No weighed backends available: > cinder.exception.NoValidBackend: No valid backend was found. No weighed > backends available > > > It seem (for me) cinder don't try to use unity.... > > The cinder-volume service is responsible for communicating with the backend and this create request fails on scheduler only, hence no sign of it. > Any help ? > > Regards > > Looking at the scheduler logs, there are a few things you can check: 1) execute ``cinder-manage service list`` command and check the status of cinder-volume service if it's active or not. If it shows an X sign then check in cinder-volume logs for any startup failure. 2) Check the volume type properties and see if ``volume_backend_name`` is set to the right value i.e. Unitiy_ISCSI (which looks suspicious because the spelling is wrong and there might be a mismatch somewhere) Also it's good to mention the openstack version you're using since the code changes every cycle and it's hard to track the issues with every release. Thanks and regards Rajat Dhasmana > > -- > Albert SHIH > Observatoire de Paris > Heure local/Local time: > Mon Apr 19 08:01:37 PM CEST 2021 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon Apr 19 19:15:27 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 19 Apr 2021 12:15:27 -0700 Subject: [E] Re: [ironic] Ironic Whiteboard v2 call for reviews In-Reply-To: References: Message-ID: I would drop the entire subprojects section at the very bottom. It is functionally a mix of statuses, priorities for review, and out of date. I doubt the wiki page for 3rd party CI's is really useful as well. I guess it is very verbose, possibly too much so, but given the end of the cycle we do need to trim back all of the updates anyhow. Overall, I the non-update related portion is much cleaner, which I think was the goal :) On Mon, Apr 19, 2021 at 10:32 AM Jay Faulkner wrote: > > > Also a formatting adjustment, change the size/style of the characters to have a better visual separation between topic name and content in the status report section. > > I left the subteam status area mostly alone, as many of the status updates appeared to be out of date or for features that are already landed or not being worked on. I added an item to the next meetings' agenda to discuss this. 
> > I'll add the other links you suggested. Thanks! > > - > Jay Faulkner > > On Mon, Apr 19, 2021 at 1:38 AM Riccardo Pittau wrote: >> >> That looks great, thanks! >> I would probably just add a link to https://ironicbaremetal.org/ and to the meeting page https://wiki.openstack.org/wiki/Meetings/Ironic at the very top in the first section. >> Also a formatting adjustment, change the size/style of the characters to have a better visual separation between topic name and content in the status report section. >> >> Cheers, >> Riccardo >> >> >> On Fri, Apr 16, 2021 at 6:31 PM Jay Faulkner wrote: >>> >>> Hi all, >>> >>> Iury and I spent some time this morning updating the Ironic whiteboard etherpad to include more immediately useful information to contributors. >>> >>> We placed this updated whiteboard at https://etherpad.opendev.org/p/IronicWhiteBoardv2 -- our approach was to prune any outdated/broken links or information, and focus on making the first part of the whiteboard an easy one-click place for folks to see easy ways to contribute. All the rest of the information was carried over and reformatted. >>> >>> Once there is consensus from the team about this being a positive change, we should either replace the existing IronicWhiteBoard with the contents of the v2 page, or just update links to point to the new one instead. >>> >>> What do you all think? >>> >>> Thanks, >>> Jay Faulkner From manish16054 at gmail.com Mon Apr 19 21:44:57 2021 From: manish16054 at gmail.com (Manish Mahalwal) Date: Tue, 20 Apr 2021 03:14:57 +0530 Subject: dynamic vendor data and cloud-init In-Reply-To: References: Message-ID: Thank you for the response, Sean. But, I have already successfully implemented and deployed dynamic vendordata to an Openstack instance using these really good tutorials. So, my issue is not with Openstack but with cloud-init. Even though dynamic vendordata was implemented back in Openstack Pike, cloud-init, however, was not handling dynamic vendordata until very recently and they implemented it in Feb 2021! So, even though people were able to send dynamic vendordata to instances they were not able to execute any of that dynamic vendordata using cloud-init. However, even this new implementation from cloud-init comes with issues. I am not able to execute any of the YAML files which I pass through dynamic vendordata. Static vendordata executes perfectly fine though! Now coming to the issue, the vendordata_dynamic_targets flag in the nova.conf follows the syntax: vendordata_dynamic_targets=name@ *So, what value should 'name' hold here?* Should it be the name you give to the REST service or should it be the name of the package that consumes the vendor_data2.json (cloud-init in this case)? I am asking this because https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html mentions that you can multiple REST services to fetch multiple dynamic vendordata's. So, if we set the 'name' attribute to 'cloud-init' then how are we going to differentiate between different REST services, since Nova expects that the key should be unique and repeat key would be ignored (from the specs). For example: vendordata_dynamic_targets=name1@ http://example.com,name2 at http://example2.com { > "name1": { > "cloud-init": "#cloud-config\n..." > }, > "name2": { > "cloud-init": "#cloud-config\n..." > } > } So, essentially, which among the two options is correct? 1. 
in nova.conf: vendordata_dynamic_targets=cloud-init at localhost and my REST service responds with the value of "cloud-init" key to create the following vendor_data2.json {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - htop"} or 2. in nova.conf: vendordata_dynamic_targets=testing at localhost and my REST service responds with the value of "testing" key. {"testing": {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - htop"}} I hope to get more eyes on this issue as it otherwise renders dynamic vendordata useless if cloud-init doesn't handle it properly. Thank you for your time! *Manish Mahalwal* *MathWorks* On Fri, 16 Apr 2021 at 20:00, Sean Mooney wrote: > > > On 15/04/2021 15:53, Manish Mahalwal wrote: > > Hi All, > > > > I am working with OpenStack Pike and cloud-init 21.1. I am able to > > successfully pass dynamic vendor data to the config drive of an > > instance. However, cloud-init 21.1 just reads all the 'x' bytes of the > > vendor_data2.json but it doesn't execute the contents of the json. > > Although, static vendor data works perfectly fine and the YAML file in > > the JSON is executed as expected by cloud-init 21.1 > > > > * Now, the person who wrote the code for handling dynamic vendordata > > in cloud-init (https://github.com/canonical/cloud-init/pull/777 > > ) says that the JSON > > cloud-init expects is of the form: > > > > {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n > > - black\nfqdn: cloud-overridden-by-vendordata2.example.org."} > > > > > > the reference implementation for the dynamic vendor data backend was > https://github.com/mikalstill/vendordata and it was a feature developed > specificaly for rackspace. > > the data format that service should return is > > # { > > # "hostname": "foo", > > # "image-id": "75a74383-f276-4774-8074-8c4e3ff2ca64", > > # "instance-id": "2ae914e9-f5ab-44ce-b2a2-dcf8373d899d", > > # "metadata": {}, > > # "project-id": "039d104b7a5c4631b4ba6524d0b9e981", > > # "user-data": null > > # } > # An example of this data: > > > > https://github.com/mikalstill/vendordata/blob/master/app.py#L34-L42 > > this blog post explains how it should work > > https://www.madebymikal.com/nova-vendordata-deployment-an-excessively-detailed-guide/ > > > > > * I believe that the JSON should have another outer key (as mentioned > > here > > > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html > > < > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html>) > > > which is the name of the microservice specified in nova.conf file and > > that the inner key should be cloud-init. > > > > In nova.conf: > > vendordata_dynamic_targets=name1 at http://example.com,name2@ > http://example2.com > > > > > > { > > "name1": { > > "cloud-init": "#cloud-config\n..." > > }, > > "name2": { > > "cloud-init": "#cloud-config\n..." > > } > > } > > > > > > > > > > >>Who is right and who is wrong? > > > > To read more on this please go through the following: > > https://bugs.launchpad.net/cloud-init/+bug/1841104 > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Tue Apr 20 03:15:01 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 20 Apr 2021 12:15:01 +0900 Subject: [tacker] PTG meeting Message-ID: Hi team, As a reminder, we will have PTG meeting from 6am UTC (3pm JST/KST) today [1]. You can find the link of room from here [2]. 
I'd like to skip irc meeting this week without if we have something to discuss after the meeting. [1] https://etherpad.opendev.org/p/tacker-xena-ptg [2] http://ptg.openstack.org/ptg.html Thanks, Yasufumi From tburke at nvidia.com Tue Apr 20 05:30:23 2021 From: tburke at nvidia.com (Timothy Burke) Date: Tue, 20 Apr 2021 05:30:23 +0000 Subject: [ptg][swift][ops] Operator feedback session 21 Apr 13:00-14:00 UTC Message-ID: Later this week, the Swift community will have an operator feedback session during one of our PTG slots [0]. If you're running a swift cluster, large or small, we'd love to see you there! As we've done in the past [1], we'll have an etherpad going [2] -- feel free to start filling it in and adding discussion items today! Even if you won't make the meeting, we'd appreciate your feedback. Looking forward to hearing from you! Tim [0] 21 Apr at 1300 UTC in https://www.openstack.org/ptg/rooms/ocata [1] https://wiki.openstack.org/wiki/Swift/Etherpads#List_of_Ops_Feedback_Etherpads [2] https://etherpad.opendev.org/p/swift-xena-ops-feedback -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricolin at ricolky.com Tue Apr 20 06:16:51 2021 From: ricolin at ricolky.com (Rico Lin) Date: Tue, 20 Apr 2021 14:16:51 +0800 Subject: [Multi-arch SIG][PTG] Multi-arch SIG PTG 07:00 and 15:00 UTC today! Message-ID: Dear all Multi-arch SIG will have our PTG session from 07:00-08:00 and 15:00-16:00 (UTC time) Feel free to join us PTG: http://ptg.openstack.org/ptg.html Etherpad: https://etherpad.opendev.org/p/xena-ptg-multi-arch-sig *Rico Lin* OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevinzs2048 at gmail.com Tue Apr 20 06:21:15 2021 From: kevinzs2048 at gmail.com (kevinz) Date: Tue, 20 Apr 2021 14:21:15 +0800 Subject: [Multi-arch SIG] success to run full tempest tests on Arm64 env. What's next? In-Reply-To: References: Message-ID: HI Ian, On Fri, Apr 9, 2021 at 12:07 PM Ian Wienand wrote: > On Tue, Apr 06, 2021 at 03:43:29PM +0800, Rico Lin wrote: > > The job `devstack-platform-arm64` runs around 2.22 hrs to 3.04 hrs, which > > is near two times slower than on x86 environment. It's not a solid number > > as the performance might change a lot with different cloud environments > and > > different hardware. > > I guess right now we only have one ARM64 cloud so it won't vary that > much :) But we're working on it ... > > I'd like to use this for nodepool / diskimage-builder end-to-end > testing, where we bring up a devstack cloud, build images with dib, > upload them to the devstack cloud with nodepool and boot them. > > But I found that there was no nested virtualisation and the binary > translation mode was impractically slow; like I walked away for almost > an hour and the serial console was putting out a letter every few > seconds like a teletype from 1977 :) > > $ qemu-system-aarch64 -M virt -m 2048 > -drive if=none,file=./test.qcow2,media=disk,id=hd0 > -device virtio-blk-device,drive=hd0 -net none -pflash flash0.img > -pflash flash1.img > > Maybe I have something wrong there? I couldn't find a lot of info on > how to boot. I expected slow, but not that slow. > > Yes No nested virtualization now on Arm64 server. > Is binary translation practical? Is booting cirros images, etc. big > part of this much longer runtime? 
> Suggest boot cirros as the test VM, other distro boot use qemu without kvm support is slow > > -i > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bence.romsics at gmail.com Tue Apr 20 11:42:24 2021 From: bence.romsics at gmail.com (Bence Romsics) Date: Tue, 20 Apr 2021 13:42:24 +0200 Subject: [neutron] bug deputy report for week of 2021-04-12 Message-ID: Hi Neutron Team, We had the following new bugs last week: Critical * https://bugs.launchpad.net/neutron/+bug/1923633 Neutron-tempest-plugin scenario jobs failing due to metadata issues gate failure, Slawek and Rodolfo added extra logs and are analyzing them High * https://bugs.launchpad.net/neutron/+bug/1923453 and * https://bugs.launchpad.net/neutron/+bug/1923870 Some Neutron commands do not initilize privsep fix merged: https://review.opendev.org/c/openstack/neutron/+/786272 fix in the gate: https://review.opendev.org/c/openstack/neutron/+/786282 * https://bugs.launchpad.net/neutron/+bug/1924789 Since the migration to new engine facade, some operations that shouldn't be executing within a txn, are logging a "session semantic violated" warning unassigned Medium * https://bugs.launchpad.net/neutron/+bug/1923700 Report correct error when external DNS(designate) recordset quota limit exceeds fix proposed (waiting for neutron-lib release): https://review.opendev.org/c/openstack/neutron/+/786175 Low * https://bugs.launchpad.net/neutron/+bug/1923423 l3_router.service_providers DriverController's _attrs_to_driver is not py3 compatible fix merged: https://review.opendev.org/c/openstack/neutron/+/785830 Stable * https://bugs.launchpad.net/neutron/+bug/1924315 [stable/rocky] neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-rocky job fails gate failure, fix proposed: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/786657 * https://bugs.launchpad.net/neutron/+bug/1923412 [stable/stein] Tempest fails with unrecognized arguments: --exclude-regex gate failure, unassigned * https://bugs.launchpad.net/neutron/+bug/1923644 OVN agent delete support should be backported to pre-wallaby releases backport with many dependencies, recommended for stable reviewers Neutron-Dynamic-Routing * https://bugs.launchpad.net/neutron/+bug/1924237 TypeError when formating IP address for BGP plugin fix merged: https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/786371 micro-RFE * https://bugs.launchpad.net/neutron/+bug/1923592 [Routed networks] Router routes to other segment CIDRs should have a gateway IP Edit still following up with the triage: * https://bugs.launchpad.net/neutron/+bug/1924776 [ovn] use of address scopes does not automatically disable router snat * https://bugs.launchpad.net/neutron/+bug/1924765 [ovn] fip assignment to instance via router with snat disabled is broken * https://bugs.launchpad.net/neutron/+bug/1922089 [ovn] enable_snat cannot be disabled once enabled Cheers, Bence From Albert.Shih at obspm.fr Tue Apr 20 13:34:49 2021 From: Albert.Shih at obspm.fr (Albert Shih) Date: Tue, 20 Apr 2021 15:34:49 +0200 Subject: Fwd: cinder + Unity In-Reply-To: References: Message-ID: Le 20/04/2021 à 00:35:46+0530, Rajat Dhasmana a écrit Hi Rajat, > > This might be something to look at with the wrong spelling causing mismatch. 
>   > >   unity_io_ports = *_enp1s0 >   unity_storage_pool_names = onering > > When I'm trying to create a storage through a > >     openstack volume create volumetest --type thick_volume_type --size 100 > > I don't even see (with tcpdump) the cinder server trying to connect to > >   onering-remote.FQDN > > Inside my /var/log/cinder/cinder-scheduler.log I have > >   2021-04-19 18:06:56.805 21315 INFO cinder.scheduler.base_filter > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > b1d58ebae6b84f7586ad63b94203d7ae - - -] Filtering removed all hosts for the > request with volume ID '06e5f07d-766f-4d07-b3bf-6153a2cf6abd'. Filter > results: AvailabilityZoneFilter: (start: 0, end: 0), CapacityFilter: > (start: 0, end: 0), CapabilitiesFilter: (start: 0, end: 0) > > > This log mentions that no host is valid to pass the 3 filters in the scheduler. >   > >   2021-04-19 18:06:56.806 21315 WARNING cinder.scheduler.filter_scheduler > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > b1d58ebae6b84f7586ad63b94203d7ae - - -] No weighed backend found for volume > with properties: {'id': '5f16fc1f-76ff-41ee-8927-56925cf7b00f', 'name': > 'thick_volume_type', 'description': None, 'is_public': True, 'projects': > [], 'extra_specs': {'provisioning:type': 'thick', > 'thick_provisioning_support': 'True'}, 'qos_specs_id': None, 'created_at': > '2021-04-19T15:07:09.000000', 'updated_at': None, 'deleted_at': None, > 'deleted': False} >   2021-04-19 18:06:56.806 21315 INFO cinder.message.api > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > b1d58ebae6b84f7586ad63b94203d7ae - - -] Creating message record for > request_id = req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 >   2021-04-19 18:06:56.811 21315 ERROR cinder.scheduler.flows.create_volume > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > b1d58ebae6b84f7586ad63b94203d7ae - - -] Failed to run task > cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask; > volume:create: No valid backend was found. No weighed backends available: > cinder.exception.NoValidBackend: No valid backend was found. No weighed > backends available > > > It seem (for me) cinder don't try to use unity.... > > > > The cinder-volume service is responsible for communicating with the backend and > this create request fails on scheduler only, hence no sign of it. Absolutly. I didn't known I need to install the cinder-volume, inside the docs it's seem the cinder-volume is for LVM backend. My bad. After installing the cinder-volume and > > Looking at the scheduler logs, there are a few things you can check: > > 1) execute ``cinder-manage service list`` command and check the status of > cinder-volume service if it's active or not. If it shows an X sign then check > in cinder-volume logs for any startup failure. > 2) Check the volume type properties and see if ``volume_backend_name`` is set > to the right value i.e. Unitiy_ISCSI (which looks suspicious because the > spelling is wrong and there might be a mismatch somewhere) changing through openstack volume type I was able to create a volume on my Unity storage unit. I'm not sure it's working perfectly because I still don't have nova and neutron running. But now I'm going to configure nova and neutron. > > Also it's good to mention the openstack version you're using since the code > changes every cycle and it's hard to track the issues with every release. Sorry. I will do that next time. Thanks you very much. 
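For reference, the volume type change mentioned above is normally just an extra spec that has to match the backend section's volume_backend_name exactly, spelling included. A minimal sketch reusing the names from this thread (volumetest2 and the 10 GB size are placeholders):

  openstack volume type set --property volume_backend_name='Unitiy_ISCSI' thick_volume_type
  openstack volume type show thick_volume_type
  # retry a small create to confirm the scheduler now finds the backend
  openstack volume create volumetest2 --type thick_volume_type --size 10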
-- Albert SHIH Observatoire de Paris xmpp: jas at obspm.fr Heure local/Local time: Tue Apr 20 03:32:01 PM CEST 2021 From Albert.Shih at obspm.fr Tue Apr 20 13:40:40 2021 From: Albert.Shih at obspm.fr (Albert Shih) Date: Tue, 20 Apr 2021 15:40:40 +0200 Subject: cinder + Unity In-Reply-To: References: Message-ID: Le 20/04/2021 à 00:33:56+0530, Rajat Dhasmana a écrit Hi, Me again... > This might be something to look at with the wrong spelling causing mismatch. >   Just one thing ... > >   unity_io_ports = *_enp1s0 What's that ? I didn't find a right configuration for that. Let them empty make cinder working well. But it's little frustrating. What's unity_io_ports should be ? My intend if to use iSCSI only so no FC ports. Just the «regular» ethernet 10Gbit/s ports. So I try enp1s0 (the name of the interface), spa_enp1s0, *_enps1s0 etc...every thing I can think of end up with a error. I don't event understand why I need this information because the cinder server don't «mount» anything. Regards -- Albert SHIH Observatoire de Paris xmpp: jas at obspm.fr Heure local/Local time: Tue Apr 20 03:36:24 PM CEST 2021 From smooney at redhat.com Tue Apr 20 13:44:43 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 20 Apr 2021 14:44:43 +0100 Subject: dynamic vendor data and cloud-init In-Reply-To: References: Message-ID: On 19/04/2021 22:44, Manish Mahalwal wrote: > Thank you for the response, Sean. But, I have already successfully > implemented and deployed dynamic vendordata to an Openstack instance > using these really good tutorials. So, my issue is not with Openstack > but with cloud-init. Even though dynamic vendordata was implemented > back in Openstack Pike, cloud-init, however, was not handling dynamic > vendordata  until > very recently and they implemented it in Feb 2021! So, even though > people were able to send dynamic vendordata to instances they were not > able to execute any of that dynamic vendordata using cloud-init. > > However, even this new implementation from cloud-init comes with > issues. I am not able to execute any of the YAML files which I pass > through dynamic vendordata. Static vendordata executes perfectly fine > though! > > Now coming to the issue, the vendordata_dynamic_targets flag in the > nova.conf follows the syntax: > vendordata_dynamic_targets=name@ > > *So, what value should 'name' hold here?* Should it be the name you > give to the REST service or should it be the name of the package that > consumes the vendor_data2.json (cloud-init in this case)? > > I am asking this because > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html >  mentions > that you can multiple REST services to fetch multiple dynamic > vendordata's. So, if we set the 'name' attribute to 'cloud-init' then > how are we going to differentiate between different REST services, > since Nova expects that the key should be unique and repeat key would > be ignored (from the specs). For example: > > vendordata_dynamic_targets=name1 at http://example.com,name2 at http://example2.com > > > { >     "name1": { >  "cloud-init": "#cloud-config\n..." >     }, >     "name2": { >  "cloud-init": "#cloud-config\n..." >     } > } > > > So, essentially, which among the two options is correct? > > 1. in nova.conf: vendordata_dynamic_targets=cloud-init at localhost and > my REST service responds with the value of "cloud-init" key to create > the following vendor_data2.json > {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - htop"} > > or > > 2. 
in nova.conf: vendordata_dynamic_targets=testing at localhost and my > REST service responds with the value of "testing" key. > {"testing": {"cloud-init": "#cloud-config\npackage_upgrade: > True\npackages:\n - htop"}} > > > > I hope to get more eyes on this issue as it otherwise renders dynamic > vendordata useless if cloud-init doesn't handle it properly. Thank you > for your time! well dynmaic vendor data is not primary for cloud-init it can be consumed by cloud init but it was created to provide info for agents that rackspace developed themselves unfortunetlly this is one of the more obscure feature that is very very rarely used. it has not been updated more or less since it was added and most of the people that were involved nolonger work on nova. i think 2 is what they were expecting to work but im not sure really. i dont think either is nessisarly invlalid from a nova point of view the question here really is what does cloud-init actully require. > > > / > / > /Manish Mahalwal/ > /MathWorks/ > > On Fri, 16 Apr 2021 at 20:00, Sean Mooney > wrote: > > > > On 15/04/2021 15:53, Manish Mahalwal wrote: > > Hi All, > > > > I am working with OpenStack Pike and cloud-init 21.1. I am able to > > successfully pass dynamic vendor data to the config drive of an > > instance. However, cloud-init 21.1 just reads all the 'x' bytes > of the > > vendor_data2.json but it doesn't execute the contents of the json. > > Although, static vendor data works perfectly fine and the YAML > file in > > the JSON is executed as expected by cloud-init 21.1 > > > > * Now, the person who wrote the code for handling dynamic > vendordata > > in cloud-init (https://github.com/canonical/cloud-init/pull/777 > > > >) says that the > JSON > > cloud-init expects is of the form: > > > >     {"cloud-init": "#cloud-config\npackage_upgrade: > True\npackages:\n > >     - black\nfqdn: cloud-overridden-by-vendordata2.example.org > ."} > > > > > > the reference implementation for the dynamic vendor data backend was > https://github.com/mikalstill/vendordata > and it was a feature > developed > specificaly for rackspace. > > the data format that service should return is > > # { > > # "hostname": "foo", > > # "image-id": "75a74383-f276-4774-8074-8c4e3ff2ca64", > > # "instance-id": "2ae914e9-f5ab-44ce-b2a2-dcf8373d899d", > > # "metadata": {}, > > # "project-id": "039d104b7a5c4631b4ba6524d0b9e981", > > # "user-data": null > > # } > # An example of this data: > > > > https://github.com/mikalstill/vendordata/blob/master/app.py#L34-L42 > > > this blog post explains how it should work > https://www.madebymikal.com/nova-vendordata-deployment-an-excessively-detailed-guide/ > > > > > > * I believe that the JSON should have another outer key (as > mentioned > > here > > > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html > > > > > >) > > > which is the name of the microservice specified in nova.conf > file and > > that the inner key should be cloud-init. > > > > In nova.conf: > > vendordata_dynamic_targets=name1 at http://example.com > ,name2 at http://example2.com > > ,name2 at http//example2.com > > > > >     { > >         "name1": { > >      "cloud-init": "#cloud-config\n..." > >         }, > >         "name2": { > >      "cloud-init": "#cloud-config\n..." > >         } > >     } > > > > > > > > > > >>Who is right and who is wrong? 
> > > > To read more on this please go through the following: > > https://bugs.launchpad.net/cloud-init/+bug/1841104 > > > > > > > From tjoen at dds.nl Tue Apr 20 14:36:53 2021 From: tjoen at dds.nl (tjoen) Date: Tue, 20 Apr 2021 16:36:53 +0200 Subject: [wallaby] placement-status upgrade check error Message-ID: oslo.upgradecheck-1.3.0 Python-3.9.4 Got very basic Ussuri and Train working, skipped Victoria Stuck on https://docs.openstack.org/placement/wallaby/install/verify.html $ placement-status upgrade check Error: Traceback (most recent call last):   File "/usr/lib/python3.9/site-packages/oslo_upgradecheck/upgradecheck.py", line 196, in run     return conf.command.action_fn()   File "/usr/lib/python3.9/site-packages/oslo_upgradecheck/upgradecheck.py", line 104, in check     result = func_name(self, **kwargs)   File "/usr/lib/python3.9/site-packages/oslo_upgradecheck/common_checks.py", line 41, in check_policy_json     policy_path = conf.find_file(conf.oslo_policy.policy_file)   File "/usr/lib/python3.9/site-packages/oslo_config/cfg.py", line 2543, in find_file     raise NotInitializedError() oslo_config.cfg.NotInitializedError: call expression on parser has not been invoked From knikolla at bu.edu Tue Apr 20 14:44:03 2021 From: knikolla at bu.edu (Nikolla, Kristi) Date: Tue, 20 Apr 2021 14:44:03 +0000 Subject: [Keystone][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: <6CA1AB71-6C5B-46C0-B66F-52A0F8003D92@bu.edu> Hi Arkady, The keystone team is meeting THU and FRI from 14.00 to 16.00 UTC. The PTG etherpad is available at https://etherpad.opendev.org/p/keystone-xena-ptg The schedule is pretty flexible, so let me know what time works for you. Best, Kristi On Apr 16, 2021, at 5:59 PM, Kanevsky, Arkady > wrote: Repeating the request. Where is Keystone Etherpad for Xena PTG? From: Kanevsky, Arkady Sent: Sunday, April 11, 2021 3:30 PM To: knikolla at bu.edu Cc: OpenStack Discuss Subject: [Keystone][Interop] request for 15-30 min on Xena PTG for Interop Kristi, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Keystone tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensrloo at gmail.com Tue Apr 20 16:45:46 2021 From: opensrloo at gmail.com (Ruby Loo) Date: Tue, 20 Apr 2021 12:45:46 -0400 Subject: [ironic] Ironic Whiteboard v2 call for reviews In-Reply-To: References: Message-ID: Thanks for doing this! I think 'archive' (ie, keep around existing whiteboard, renamed to something else) and make your new version the one at https://etherpad.opendev.org/p/IronicWhiteBoard. This way, we don't break anyone's link (and we don't make people update links). --ruby On Fri, Apr 16, 2021 at 12:30 PM Jay Faulkner wrote: > Hi all, > > Iury and I spent some time this morning updating the Ironic whiteboard > etherpad to include more immediately useful information to contributors. 
> > We placed this updated whiteboard at > https://etherpad.opendev.org/p/IronicWhiteBoardv2 -- our approach was to > prune any outdated/broken links or information, and focus on making the > first part of the whiteboard an easy one-click place for folks to see easy > ways to contribute. All the rest of the information was carried over and > reformatted. > > Once there is consensus from the team about this being a positive change, > we should either replace the existing IronicWhiteBoard with the contents of > the v2 page, or just update links to point to the new one instead. > > What do you all think? > > Thanks, > Jay Faulkner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue Apr 20 16:48:07 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 20 Apr 2021 18:48:07 +0200 Subject: [wallaby] placement-status upgrade check error In-Reply-To: References: Message-ID: <70GVRQ.WRJASB1E3PKS@est.tech> On Tue, Apr 20, 2021 at 16:36, tjoen wrote: > oslo.upgradecheck-1.3.0 > > Python-3.9.4 > > Got very basic Ussuri and Train working, skipped Victoria > > Stuck on > https://docs.openstack.org/placement/wallaby/install/verify.html > > $ placement-status upgrade check > Error: > Traceback (most recent call last): > File > "/usr/lib/python3.9/site-packages/oslo_upgradecheck/upgradecheck.py", > line 196, in run > return conf.command.action_fn() > File > "/usr/lib/python3.9/site-packages/oslo_upgradecheck/upgradecheck.py", > line 104, in check > result = func_name(self, **kwargs) > File > "/usr/lib/python3.9/site-packages/oslo_upgradecheck/common_checks.py", > line 41, in check_policy_json > policy_path = conf.find_file(conf.oslo_policy.policy_file) > File "/usr/lib/python3.9/site-packages/oslo_config/cfg.py", line > 2543, in find_file > raise NotInitializedError() > oslo_config.cfg.NotInitializedError: call expression on parser has > not been invoked I can reproduce the same error locally in devstack with python3.8 So I think this is a valid bug. I've reported a bug [1] and pushed a fix [2]. Thanks for reporting! Cheers, gibi [1] https://storyboard.openstack.org/#!/story/2008831 [2] https://review.opendev.org/q/topic:%22story%252F2008831%22+(status:open%20OR%20status:merged) > > From eng.taha1928 at gmail.com Tue Apr 20 11:39:22 2021 From: eng.taha1928 at gmail.com (Taha Adel) Date: Tue, 20 Apr 2021 13:39:22 +0200 Subject: [Cinder] Problem in iSCSI Portal Message-ID: Hello, The situation is, I have one storage node that has cinder-volume service up and running on top of it and has a dedicated physical NIC for storage traffic. I have set the following configuration in /etc/cinder/cinder.conf file at the storage node: volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes target_protocol = iscsi target_helper = lioadm iscsi_ip_address = 10.0.102.11 (different than the management ip) The service is able to create the volume and attach it as a backstore for iSCSI, but it can't create a Portal for iSCSI service. Should I create the portal entry by myself? or is there a mistake I have made in the config file? Thanks in advance -------------- next part -------------- An HTML attachment was scrubbed... URL: From kabaiz at gmail.com Tue Apr 20 14:59:40 2021 From: kabaiz at gmail.com (=?UTF-8?Q?Kabai_Zolt=C3=A1n?=) Date: Tue, 20 Apr 2021 16:59:40 +0200 Subject: Why doesn't identity providers have separate IDs and Names in Openstack? 
Message-ID: Dear Community, I just have a question what I couldn't resolve with my colleagues: Why doesn't identity providers have separate IDs and Names in Openstack? I think most of the things have a separate ID and a separate Name in Openstack (Users, Projects, virtual machines, etc). Why doesn't identity providers have separate IDs and Names? Are there other objects like this? Is there a rule why specific entities must have both IDs and Names while others can live without a separate Name? $ openstack identity provider list +----------------+---------+-------------+ | ID | Enabled | Description | +----------------+---------+-------------+ | cloud1 | True | None | +----------------+---------+-------------+ Br, Zoltán Gábor Kabai kabaiz at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Tue Apr 20 17:52:19 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 20 Apr 2021 17:52:19 +0000 Subject: cinder + Unity In-Reply-To: References: Message-ID: Albert, I assume you had a look at https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/dell-emc-unity-driver.html If you need documentation for specific release, replace "latest" with release name. You do not need to specify IO_ports. These are needed if unity used for more than openstack so you can filter non-openstack ports of unity. That is performance improvement for connection establishment. rkady -----Original Message----- From: Albert Shih Sent: Tuesday, April 20, 2021 8:41 AM To: Rajat Dhasmana Cc: openstack-discuss at lists.openstack.org Subject: Re: cinder + Unity [EXTERNAL EMAIL] Le 20/04/2021 à 00:33:56+0530, Rajat Dhasmana a écrit Hi, Me again... > This might be something to look at with the wrong spelling causing mismatch. >   Just one thing ... > >   unity_io_ports = *_enp1s0 What's that ? I didn't find a right configuration for that. Let them empty make cinder working well. But it's little frustrating. What's unity_io_ports should be ? My intend if to use iSCSI only so no FC ports. Just the «regular» ethernet 10Gbit/s ports. So I try enp1s0 (the name of the interface), spa_enp1s0, *_enps1s0 etc...every thing I can think of end up with a error. I don't event understand why I need this information because the cinder server don't «mount» anything. Regards -- Albert SHIH Observatoire de Paris xmpp: jas at obspm.fr Heure local/Local time: Tue Apr 20 03:36:24 PM CEST 2021 From rosmaita.fossdev at gmail.com Tue Apr 20 19:29:58 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 20 Apr 2021 15:29:58 -0400 Subject: [ptg][cinder] backend drivers' day at the PTG Message-ID: <02df95c1-2a17-dc91-1a72-df52e8272d34@gmail.com> Hello Cinder third-party storage backend driver developers, maintainers, their managers, and any other interested parties: Thursday 22 April is Cinder Drivers' Day at the PTG. We kick off at 1300 UTC with Adam Krpan talking about his experiences deploying Software Factory to set up the third-party CI for Pure Storage. Anyone running an aging infrastructure for their third-party CI should be interested in this topic. 
See the Cinder project Xena PTG etherpad for the other Drivers' Day topics and the connection info to join the discussion: https://etherpad.opendev.org/p/apr2021-ptg-cinder From elod.illes at est.tech Tue Apr 20 19:31:03 2021 From: elod.illes at est.tech (Előd Illés) Date: Tue, 20 Apr 2021 21:31:03 +0200 Subject: [all][stable] Ocata - End of Life Message-ID: Hi, Sorry, this will be long :) as there are 3 topics around old stable branches and 'End of Life'. 1. Deletion of ocata-eol tagged branches With the introduction of the Extended Maintenance process [1][2] some cycles ago, the 'End of Life' (EOL) process also changed: * branches were no longer EOL tagged and "mass-deleted" at the end of the maintenance phase * EOL'ing became a project decision * if a project decides to cease maintenance of a branch that is in Extended Maintenance, then they can tag their branch with $series-eol However, the EOL-tagging process was not automated or redefined process-wise, so the branches that were tagged as EOL were not deleted. Now (after some changes in tooling) the Release Management team will finally start to delete EOL-tagged branches. In this mail I'm sending a *WARNING* to consumers of old stable branches, especially *ocata*, as we will start deleting the *ocata-eol* tagged branches in a *week*. (And also newer *-eol branches later on.) 2. Ocata branch Beyond the 1st topic we must clarify the future of the Ocata stable branch in general: tempest jobs became broken about a year ago. That means that projects had two ways forward: a. drop tempest testing to unblock the gate b. simply don't support the ocata branch anymore As far as I see the latter happened and stable/ocata became unmaintained, probably for every project. So my questions are: * Is any project still using/maintaining their stable/ocata branch? * If not: can the Release Team initiate a mass-EOL-tagging of stable/ocata? 3. The 'next' old stable branches Some projects still support their Pike, Queens and Rocky branches. These branches use Xenial and py2.7 and both are out of support. This results in broken gates from time to time, especially nowadays. These issues suggest that these branches are closer and closer to being unmaintained. So I call on interested parties, who are for example still consuming these stable branches and using them downstream, to put effort into maintaining the branches and their CI/gates. It is a good practice for stable maintainers to check if there are failures in their projects' periodic-stable jobs [3], as those are good indicators of the health of their stable branches. And if there are, try to fix them as soon as possible.
[1] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html [2] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases [3] http://lists.openstack.org/pipermail/openstack-stable-maint/2021-April/date.html Thanks, Előd From gmann at ghanshyammann.com Tue Apr 20 21:20:19 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 20 Apr 2021 16:20:19 -0500 Subject: [wallaby] placement-status upgrade check error In-Reply-To: <70GVRQ.WRJASB1E3PKS@est.tech> References: <70GVRQ.WRJASB1E3PKS@est.tech> Message-ID: <178f127c82b.be2a51df318943.3476509370609024162@ghanshyammann.com> ---- On Tue, 20 Apr 2021 11:48:07 -0500 Balazs Gibizer wrote ---- > > > On Tue, Apr 20, 2021 at 16:36, tjoen wrote: > > oslo.upgradecheck-1.3.0 > > > > Python-3.9.4 > > > > Got very basic Ussuri and Train working, skipped Victoria > > > > Stuck on > > https://docs.openstack.org/placement/wallaby/install/verify.html > > > > $ placement-status upgrade check > > Error: > > Traceback (most recent call last): > > File > > "/usr/lib/python3.9/site-packages/oslo_upgradecheck/upgradecheck.py", > > line 196, in run > > return conf.command.action_fn() > > File > > "/usr/lib/python3.9/site-packages/oslo_upgradecheck/upgradecheck.py", > > line 104, in check > > result = func_name(self, **kwargs) > > File > > "/usr/lib/python3.9/site-packages/oslo_upgradecheck/common_checks.py", > > line 41, in check_policy_json > > policy_path = conf.find_file(conf.oslo_policy.policy_file) > > File "/usr/lib/python3.9/site-packages/oslo_config/cfg.py", line > > 2543, in find_file > > raise NotInitializedError() > > oslo_config.cfg.NotInitializedError: call expression on parser has > > not been invoked > > I can reproduce the same error locally in devstack with python3.8 So I > think this is a valid bug. I've reported a bug [1] and pushed a fix [2]. Yeah, check_policy_json now started searching the policy file from config dir locations and so does require the full initialized object of CONFlike any other CLI. I think we were missing the test for that new check otherwise it could have detected in code change itself like I found in many other projects. Thanks gibi for taking care of fix. -gmann > > Thanks for reporting! > > Cheers, > gibi > > [1] https://storyboard.openstack.org/#!/story/2008831 > [2] > https://review.opendev.org/q/topic:%22story%252F2008831%22+(status:open%20OR%20status:merged) > > > > > > > > > From Istvan.Szabo at agoda.com Wed Apr 21 08:00:32 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Wed, 21 Apr 2021 08:00:32 +0000 Subject: Live migration fails Message-ID: Hi, I have couple of compute nodes where the live migration fails with existing vms. When I quickly spawn a vm and try live migration it works so I assume shouldn't be a big problem with the compute node. However I have many existing vms where it fails with a servername not found. /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 ERROR nova.conductor.tasks.migrate [req-f4067a26-a233-4673-8c07-9a8a290980b0 dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] Unable to find record for source node servername on servername: ComputeHostNotFound: Compute host servername could not be found. 
/var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 WARNING nova.scheduler.utils [req-f4067a26-a233-4673-8c07-9a8a290980b0 dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - default default] Failed to compute_task_migrate_server: Compute host servername could not be found.: ComputeHostNotFound: Compute host servername could not be found. /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 WARNING nova.scheduler.utils [req-f4067a26-a233-4673-8c07-9a8a290980b0 dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] Setting instance to ACTIVE state.: ComputeHostNotFound: Compute host servername could not be found. /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 ERROR oslo_messaging.rpc.server [req-f4067a26-a233-4673-8c07-9a8a290980b0 dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - default default] Exception during message handling: ComputeHostNotFound: Compute host am-osfecn-4025 Tried with this command: nova live-migration --block-migrate id. Any idea? Thank you. ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Apr 21 08:00:06 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 21 Apr 2021 10:00:06 +0200 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: <4135616.GcyNBQpf4Z@p1> References: <2752308.ClrQMDxLba@p1> <4135616.GcyNBQpf4Z@p1> Message-ID: <10301525.1vPiazqYTU@p1> Hi, Dnia środa, 14 kwietnia 2021 10:42:35 CEST Slawek Kaplonski pisze: > Hi Arkady, > > Dnia poniedziałek, 12 kwietnia 2021 08:21:09 CEST Slawek Kaplonski pisze: > > Hi, > > > > Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > > > Brian, > > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on > > > > PTG meeting to go over Interop testing and any changes for neutron tempest > > or > > > > tempest configuration in Wallaby cycle or changes planned for Xena. Once on > > > > agenda one of the Interop WG person will attend and lead the discussion. > > > > I just added it to our etherpad https://etherpad.opendev.org/p/neutron-xena-ptg > > I will be working on schedule of the sessions later this week and I will let > > You know what timeslot this session with Interop WG will be. > > Please let me know if You have any preferences. We have our sessions > > scheduled: > > > > Monday 1300 - 1600 UTC > > Tuesday 1300 - 1600 UTC > > Thursday 1300 - 1600 UTC > > Friday 1300 - 1600 UTC > > > > Our time slots which are already booked are: > > - Monday 15:00 - 16:00 UTC > > - Thursday 14:00 - 15:30 UTC > > - Friday 14:00 - 15:00 UTC > > > > > Thanks, > > > Arkady > > > > > > Arkady Kanevsky, Ph.D. 
> > > SP Chief Technologist & DE > > > Dell Technologies office of CTO > > > Dell Inc. One Dell Way, MS PS2-91 > > > Round Rock, TX 78682, USA > > > Phone: 512 7204955 > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > I scheduled session with Interop WG for Friday 13:30 - 14:00 UTC. > Please let me know if that isn't good time slot for You. > Please also add topics which You want to discuss to our etherpad https:// > etherpad.opendev.org/p/neutron-xena-ptg I just wanted to confirm with You that this time slot is ok for You and to ask if You can put topics which You want to discuss in our etherpad https://etherpad.opendev.org/p/ neutron-xena-ptg[1] under this topic (currently it's L366 there). > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://etherpad.opendev.org/p/neutron-xena-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From xin-ran.wang at intel.com Wed Apr 21 08:15:52 2021 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Wed, 21 Apr 2021 08:15:52 +0000 Subject: [cyborg] IRC meeting cancelled in next 2 weeks Message-ID: Hi all, As we have virtual PTG this week, let's cancel the Cyborg weekly meeting this week. And due to the International Labor Day vacation (1st - 5th May 2021), the IRC meeting in next week will also be cancelled. Thanks, Xin-Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Wed Apr 21 08:25:52 2021 From: eblock at nde.ag (Eugen Block) Date: Wed, 21 Apr 2021 08:25:52 +0000 Subject: Live migration fails In-Reply-To: Message-ID: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> Hi, can you share the output of these commands? nova-manage cell_v2 list_hosts openstack compute service list Zitat von "Szabo, Istvan (Agoda)" : > Hi, > > I have couple of compute nodes where the live migration fails with > existing vms. > When I quickly spawn a vm and try live migration it works so I > assume shouldn't be a big problem with the compute node. > However I have many existing vms where it fails with a servername not found. > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > ERROR nova.conductor.tasks.migrate > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > Unable to find record for source node servername on servername: > ComputeHostNotFound: Compute host servername could not be found. > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > WARNING nova.scheduler.utils > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > default default] Failed to compute_task_migrate_server: Compute host > servername could not be found.: ComputeHostNotFound: Compute host > servername could not be found. 
> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > WARNING nova.scheduler.utils > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > Setting instance to ACTIVE state.: ComputeHostNotFound: Compute host > servername could not be found. > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 > ERROR oslo_messaging.rpc.server > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > default default] Exception during message handling: > ComputeHostNotFound: Compute host am-osfecn-4025 > > Tried with this command: > > nova live-migration --block-migrate id. > > Any idea? > > Thank you. > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by > copyright or other legal rules. If you have received it by mistake > please let us know by reply email and delete it from your system. It > is prohibited to copy this message or disclose its content to > anyone. Any confidentiality or privilege is not waived or lost by > any mistaken delivery or unauthorized disclosure of the message. All > messages sent to and from Agoda may be monitored to ensure > compliance with company policies, to protect the company's interests > and to remove potential malware. Electronic messages may be > intercepted, amended, lost or deleted, or contain viruses. From mkopec at redhat.com Wed Apr 21 08:42:54 2021 From: mkopec at redhat.com (Martin Kopec) Date: Wed, 21 Apr 2021 10:42:54 +0200 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: <10301525.1vPiazqYTU@p1> References: <2752308.ClrQMDxLba@p1> <4135616.GcyNBQpf4Z@p1> <10301525.1vPiazqYTU@p1> Message-ID: Hi Slawek, yeah, the time slot is ok for me. I'll write it up today. Thanks for reminding me, On Wed, 21 Apr 2021 at 10:01, Slawek Kaplonski wrote: > Hi, > > Dnia środa, 14 kwietnia 2021 10:42:35 CEST Slawek Kaplonski pisze: > > > Hi Arkady, > > > > > > Dnia poniedziałek, 12 kwietnia 2021 08:21:09 CEST Slawek Kaplonski pisze: > > > > Hi, > > > > > > > > Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > > > > > Brian, > > > > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > min on > > > > > > > > PTG meeting to go over Interop testing and any changes for neutron > tempest > > > > > > or > > > > > > > > tempest configuration in Wallaby cycle or changes planned for Xena. > Once on > > > > > > > > agenda one of the Interop WG person will attend and lead the > discussion. > > > > > > > > I just added it to our etherpad > https://etherpad.opendev.org/p/neutron-xena-ptg > > > > I will be working on schedule of the sessions later this week and I > will let > > > > You know what timeslot this session with Interop WG will be. > > > > Please let me know if You have any preferences. We have our sessions > > > > scheduled: > > > > > > > > Monday 1300 - 1600 UTC > > > > Tuesday 1300 - 1600 UTC > > > > Thursday 1300 - 1600 UTC > > > > Friday 1300 - 1600 UTC > > > > > > > > Our time slots which are already booked are: > > > > - Monday 15:00 - 16:00 UTC > > > > - Thursday 14:00 - 15:30 UTC > > > > - Friday 14:00 - 15:00 UTC > > > > > > > > > Thanks, > > > > > Arkady > > > > > > > > > > Arkady Kanevsky, Ph.D. 
> > > > > SP Chief Technologist & DE > > > > > Dell Technologies office of CTO > > > > > Dell Inc. One Dell Way, MS PS2-91 > > > > > Round Rock, TX 78682, USA > > > > > Phone: 512 7204955 > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > I scheduled session with Interop WG for Friday 13:30 - 14:00 UTC. > > > Please let me know if that isn't good time slot for You. > > > Please also add topics which You want to discuss to our etherpad https:// > > > etherpad.opendev.org/p/neutron-xena-ptg > > I just wanted to confirm with You that this time slot is ok for You and to > ask if You can put topics which You want to discuss in our etherpad > https://etherpad.opendev.org/p/neutron-xena-ptg under this topic > (currently it's L366 there). > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -- Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From Istvan.Szabo at agoda.com Wed Apr 21 09:55:56 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Wed, 21 Apr 2021 09:55:56 +0000 Subject: Live migration fails In-Reply-To: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> Message-ID: Sure: https://jpst.it/2u3uh These are the one where can't live migrate: xy-osfecn-40250 xy-osfecn-40281 xy-osfecn-40290 xy-osbecn-40073 xy-osfecn-40238 The compute service are disabled on these because we don't want anybody spawn a vm on these anymore so want to evacuate all vms. Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Eugen Block Sent: Wednesday, April 21, 2021 3:26 PM To: openstack-discuss at lists.openstack.org Subject: Re: Live migration fails Hi, can you share the output of these commands? nova-manage cell_v2 list_hosts openstack compute service list Zitat von "Szabo, Istvan (Agoda)" : > Hi, > > I have couple of compute nodes where the live migration fails with > existing vms. > When I quickly spawn a vm and try live migration it works so I assume > shouldn't be a big problem with the compute node. > However I have many existing vms where it fails with a servername not found. > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 ERROR > nova.conductor.tasks.migrate > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > Unable to find record for source node servername on servername: > ComputeHostNotFound: Compute host servername could not be found. > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > WARNING nova.scheduler.utils > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > default default] Failed to compute_task_migrate_server: Compute host > servername could not be found.: ComputeHostNotFound: Compute host > servername could not be found. 
> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > WARNING nova.scheduler.utils > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > Setting instance to ACTIVE state.: ComputeHostNotFound: Compute host > servername could not be found. > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 ERROR > oslo_messaging.rpc.server > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > default default] Exception during message handling: > ComputeHostNotFound: Compute host am-osfecn-4025 > > Tried with this command: > > nova live-migration --block-migrate id. > > Any idea? > > Thank you. > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by > copyright or other legal rules. If you have received it by mistake > please let us know by reply email and delete it from your system. It > is prohibited to copy this message or disclose its content to anyone. > Any confidentiality or privilege is not waived or lost by any mistaken > delivery or unauthorized disclosure of the message. All messages sent > to and from Agoda may be monitored to ensure compliance with company > policies, to protect the company's interests and to remove potential > malware. Electronic messages may be intercepted, amended, lost or > deleted, or contain viruses. ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. From skaplons at redhat.com Wed Apr 21 10:24:58 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 21 Apr 2021 12:24:58 +0200 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: <10301525.1vPiazqYTU@p1> Message-ID: <10727721.Haq6DnGYxq@p1> Hi, Dnia środa, 21 kwietnia 2021 10:42:54 CEST Martin Kopec pisze: > Hi Slawek, > > yeah, the time slot is ok for me. I'll write it up today. > > Thanks for reminding me, Thank You :) > > On Wed, 21 Apr 2021 at 10:01, Slawek Kaplonski wrote: > > Hi, > > > > Dnia środa, 14 kwietnia 2021 10:42:35 CEST Slawek Kaplonski pisze: > > > Hi Arkady, > > > > > > Dnia poniedziałek, 12 kwietnia 2021 08:21:09 CEST Slawek Kaplonski pisze: > > > > Hi, > > > > > > > > Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > > > > > Brian, > > > > > > > > > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > > > > min on > > > > > > PTG meeting to go over Interop testing and any changes for neutron > > > > tempest > > > > > or > > > > > > > > tempest configuration in Wallaby cycle or changes planned for Xena. 
> > > > Once on > > > > > > agenda one of the Interop WG person will attend and lead the > > > > discussion. > > > > > > I just added it to our etherpad > > > > https://etherpad.opendev.org/p/neutron-xena-ptg > > > > > > I will be working on schedule of the sessions later this week and I > > > > will let > > > > > > You know what timeslot this session with Interop WG will be. > > > > > > > > Please let me know if You have any preferences. We have our sessions > > > > > > > > scheduled: > > > > > > > > > > > > > > > > Monday 1300 - 1600 UTC > > > > > > > > Tuesday 1300 - 1600 UTC > > > > > > > > Thursday 1300 - 1600 UTC > > > > > > > > Friday 1300 - 1600 UTC > > > > > > > > > > > > > > > > Our time slots which are already booked are: > > > > > > > > - Monday 15:00 - 16:00 UTC > > > > > > > > - Thursday 14:00 - 15:30 UTC > > > > > > > > - Friday 14:00 - 15:00 UTC > > > > > > > > > Thanks, > > > > > > > > > > Arkady > > > > > > > > > > > > > > > > > > > > Arkady Kanevsky, Ph.D. > > > > > > > > > > SP Chief Technologist & DE > > > > > > > > > > Dell Technologies office of CTO > > > > > > > > > > Dell Inc. One Dell Way, MS PS2-91 > > > > > > > > > > Round Rock, TX 78682, USA > > > > > > > > > > Phone: 512 7204955 > > > > > > > > -- > > > > > > > > Slawek Kaplonski > > > > > > > > Principal Software Engineer > > > > > > > > Red Hat > > > > > > I scheduled session with Interop WG for Friday 13:30 - 14:00 UTC. > > > > > > Please let me know if that isn't good time slot for You. > > > > > > Please also add topics which You want to discuss to our etherpad https:// > > > > > > etherpad.opendev.org/p/neutron-xena-ptg > > > > I just wanted to confirm with You that this time slot is ok for You and to > > ask if You can put topics which You want to discuss in our etherpad > > https://etherpad.opendev.org/p/neutron-xena-ptg under this topic > > (currently it's L366 there). > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > -- > Martin -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From eblock at nde.ag Wed Apr 21 10:36:39 2021 From: eblock at nde.ag (Eugen Block) Date: Wed, 21 Apr 2021 10:36:39 +0000 Subject: Live migration fails In-Reply-To: References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> Message-ID: <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> The error message seems correct, I can't find am-osfecn-4025 either in the list of compute nodes. Can you check in the database if there's an active instance (or several) allocated to that compute node? In that case you would need to correct the allocation in order for the migration to work. Zitat von "Szabo, Istvan (Agoda)" : > Sure: > > https://jpst.it/2u3uh > > These are the one where can't live migrate: > xy-osfecn-40250 > xy-osfecn-40281 > xy-osfecn-40290 > xy-osbecn-40073 > xy-osfecn-40238 > > The compute service are disabled on these because we don't want > anybody spawn a vm on these anymore so want to evacuate all vms. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. 
> e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Eugen Block > Sent: Wednesday, April 21, 2021 3:26 PM > To: openstack-discuss at lists.openstack.org > Subject: Re: Live migration fails > > Hi, > > can you share the output of these commands? > > nova-manage cell_v2 list_hosts > openstack compute service list > > > Zitat von "Szabo, Istvan (Agoda)" : > >> Hi, >> >> I have couple of compute nodes where the live migration fails with >> existing vms. >> When I quickly spawn a vm and try live migration it works so I assume >> shouldn't be a big problem with the compute node. >> However I have many existing vms where it fails with a servername not found. >> >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 ERROR >> nova.conductor.tasks.migrate >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] >> Unable to find record for source node servername on servername: >> ComputeHostNotFound: Compute host servername could not be found. >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 >> WARNING nova.scheduler.utils >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] Failed to compute_task_migrate_server: Compute host >> servername could not be found.: ComputeHostNotFound: Compute host >> servername could not be found. >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 >> WARNING nova.scheduler.utils >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] >> Setting instance to ACTIVE state.: ComputeHostNotFound: Compute host >> servername could not be found. >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 ERROR >> oslo_messaging.rpc.server >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] Exception during message handling: >> ComputeHostNotFound: Compute host am-osfecn-4025 >> >> Tried with this command: >> >> nova live-migration --block-migrate id. >> >> Any idea? >> >> Thank you. >> >> ________________________________ >> This message is confidential and is for the sole use of the intended >> recipient(s). It may also be privileged or otherwise protected by >> copyright or other legal rules. If you have received it by mistake >> please let us know by reply email and delete it from your system. It >> is prohibited to copy this message or disclose its content to anyone. >> Any confidentiality or privilege is not waived or lost by any mistaken >> delivery or unauthorized disclosure of the message. All messages sent >> to and from Agoda may be monitored to ensure compliance with company >> policies, to protect the company's interests and to remove potential >> malware. Electronic messages may be intercepted, amended, lost or >> deleted, or contain viruses. > > > > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by > copyright or other legal rules. If you have received it by mistake > please let us know by reply email and delete it from your system. 
It
> is prohibited to copy this message or disclose its content to
> anyone. Any confidentiality or privilege is not waived or lost by
> any mistaken delivery or unauthorized disclosure of the message. All
> messages sent to and from Agoda may be monitored to ensure
> compliance with company policies, to protect the company's interests
> and to remove potential malware. Electronic messages may be
> intercepted, amended, lost or deleted, or contain viruses.

From ricolin at ricolky.com  Wed Apr 21 13:32:11 2021
From: ricolin at ricolky.com (Rico Lin)
Date: Wed, 21 Apr 2021 21:32:11 +0800
Subject: [heat][PTG] PTG will start soon (in 30 mins) 14:00-16:00 (UTC time)
Message-ID: 

Dear all,
Heat PTG will start at 14:00-16:00 (UTC time) today.
Feel free to join: https://etherpad.opendev.org/p/xena-ptg-heat

*Rico Lin*
OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL,
Senior Software Engineer at EasyStack
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com  Wed Apr 21 15:53:37 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Wed, 21 Apr 2021 17:53:37 +0200
Subject: [neutron][all] PTG session about OVN as default backend in Devstack
In-Reply-To: <32241215.rSAhrytHMa@p1>
References: <32241215.rSAhrytHMa@p1>
Message-ID: <1873822.4RtXlhcRdI@p1>

Hi,

I just wanted to remind You that tomorrow at 1300 UTC we will have, in the
Neutron sessions, a discussion about moving to OVN as the default Neutron
backend in Devstack.
Please join us in that session if You have any questions/concerns about it
and You would like to discuss them with the Neutron team.

Dnia piątek, 16 kwietnia 2021 12:55:29 CEST Slawek Kaplonski pisze:
> Hi,
>
> We discussed this topic couple of times in the Neutron team and with wider
> community also. And now we really feel like it is good time to pull the
> trigger and switch default Neutron backend in Devstack from ML2/OVS to ML2/
> OVN.
> Lucas already prepared patches for that and all should be already in goo
> shape. But before we will do that, we want to have PTG session about it. It is
> scheduled to be on Thursday 22nd of April at 13:00 UTC time in the Neutron
> session.
> We want to give some short summary of current status of this but also we would
> like to do something like "AMA" about it for people from other projects. So if
> You have any questions/concerns about that, please go to that session on
> Thursday to discuss that with us.
>
> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From whayutin at redhat.com  Wed Apr 21 17:50:10 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Wed, 21 Apr 2021 11:50:10 -0600
Subject: [tripleo][ci] All CentOS-7 jobs are blocked
Message-ID: 

FYI,

Infra put up a change for docker that has busted CentOS-7 jobs, a fix is in
the works.

https://bugs.launchpad.net/tripleo/+bug/1925372

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From cboylan at sapwetik.org Wed Apr 21 20:02:20 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 21 Apr 2021 13:02:20 -0700 Subject: [tripleo][ci] All CentOS-7 jobs are blocked In-Reply-To: References: Message-ID: On Wed, Apr 21, 2021, at 10:50 AM, Wesley Hayutin wrote: > FYI, > > Infra put up a change for docker that has busted CentOS-7 jobs, a fix > is in the works. To be clear the change was proposed by a member of the Zuul community and merged by the Zuul project. We consume the zuul-jobs standard lib from there, but it wasn't an Infra specific change. (I believe the change originated in the Ansible project) Unfortunately, I suspected that the particular issue may have been a problem in review, but it managed to sneak through because we thought we were testing these scenarios already. In reality it appears we were only testing CentOS 7 + upstream docker and not with distro docker. https://review.opendev.org/c/zuul/zuul-jobs/+/787429 is a change to correct this problem with testing. > https://bugs.launchpad.net/tripleo/+bug/1925372 > > Thanks From whayutin at redhat.com Wed Apr 21 20:23:56 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 21 Apr 2021 14:23:56 -0600 Subject: [tripleo][ci] All CentOS-7 jobs are blocked In-Reply-To: References: Message-ID: On Wed, Apr 21, 2021 at 2:05 PM Clark Boylan wrote: > On Wed, Apr 21, 2021, at 10:50 AM, Wesley Hayutin wrote: > > FYI, > > > > Infra put up a change for docker that has busted CentOS-7 jobs, a fix > > is in the works. > > To be clear the change was proposed by a member of the Zuul community and > merged by the Zuul project. We consume the zuul-jobs standard lib from > there, but it wasn't an Infra specific change. (I believe the change > originated in the Ansible project) > Aye, agree w/ you. I should have just spoken to where the change was, who did it and what team they are a part of is irrelevant. > > Unfortunately, I suspected that the particular issue may have been a > problem in review, but it managed to sneak through because we thought we > were testing these scenarios already. In reality it appears we were only > testing CentOS 7 + upstream docker and not with distro docker. > https://review.opendev.org/c/zuul/zuul-jobs/+/787429 is a change to > correct this problem with testing. > Thanks for following up on it > > > https://bugs.launchpad.net/tripleo/+bug/1925372 > > > > Thanks > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vineshnellaiappan at gmail.com Thu Apr 22 02:21:28 2021 From: vineshnellaiappan at gmail.com (Vinesh N) Date: Thu, 22 Apr 2021 07:51:28 +0530 Subject: [TripleO] overcloud node introspect failed Message-ID: hi, i am facing an issue while introspect the bare metal nodes, *error message* "*4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf | 2021-04-22T01:41:32 | 2021-04-22T01:41:35 | Failed to set boot device to PXE: Failed to set boot device for node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf: Client Error for url: http://10.0.1.202:6385/v1/nodes/4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf/management/boot_device , Node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf is locked by host undercloud.localdomain, please retry after the current operation is completed*" *(undercloud) [stack at undercloud ~]$ cat /etc/*release* CentOS Linux release 8.3.2011 *ussuri version* *(undercloud) [stack at undercloud ~]$ openstack image list* /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! RequestsDependencyWarning) +--------------------------------------+------------------------+--------+ | ID | Name | Status | +--------------------------------------+------------------------+--------+ | 8ddcd168-cc18-4ce2-97c5-c3502ac471a4 | overcloud-full | active | | 8d9cfac9-400b-4570-b0b1-baeb175b16c4 | overcloud-full-initrd | active | | c561f1d5-41ae-4599-81ea-de2c1e74eae7 | overcloud-full-vmlinuz | active | +--------------------------------------+------------------------+--------+ *Using the command to introspect the node, it was able to discover the node and I could provision the node boot via pxe, and load the image on the node. I could see the login prompt on the server, after some time of provision shut the node down.* *openstack overcloud node discover --range 10.0.40.5 --credentials admin:XXXX --introspect --provide* /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! RequestsDependencyWarning) Successfully probed node IP 10.0.40.5 Successfully registered node UUID 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! RequestsDependencyWarning) PLAY [Baremetal Introspection for multiple Ironic Nodes] *********************** 2021-04-22 07:04:28.978299 | 002590fe-0d22-76eb-1a70-000000000008 | TASK | Check for required inputs 2021-04-22 07:04:29.002729 | 002590fe-0d22-76eb-1a70-000000000008 | SKIPPED | Check for required inputs | localhost | item=node_uuids 2021-04-22 07:04:29.004468 | 002590fe-0d22-76eb-1a70-000000000008 | TIMING | Check for required inputs | localhost | 0:00:00.069134 | 0.0 .... .... .... 
2021-04-22 07:11:43.261714 | 002590fe-0d22-76eb-1a70-000000000016 | TASK | Nodes that failed introspection 2021-04-22 07:11:43.296417 | 002590fe-0d22-76eb-1a70-000000000016 | FATAL | Nodes that failed introspection | localhost | error={ "msg": " 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf" } 2021-04-22 07:11:43.297359 | 002590fe-0d22-76eb-1a70-000000000016 | TIMING | Nodes that failed introspection | localhost | 0:07:14.362025 | 0.03s NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=5 rescued=0 ignored=0 2021-04-22 07:11:43.301553 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-04-22 07:11:43.302101 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 10 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-04-22 07:11:43.302609 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Elapsed Time: 0:07:14.367265 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-04-22 07:11:43.303162 | UUID | Info | Host | Task Name | Run Time 2021-04-22 07:11:43.303740 | 002590fe-0d22-76eb-1a70-000000000014 | SUMMARY | localhost | Start baremetal introspection | 434.03s 2021-04-22 07:11:43.304248 | 002590fe-0d22-76eb-1a70-000000000015 | SUMMARY | localhost | Nodes that passed introspection | 0.04s 2021-04-22 07:11:43.304814 | 002590fe-0d22-76eb-1a70-000000000016 | SUMMARY | localhost | Nodes that failed introspection | 0.03s 2021-04-22 07:11:43.305341 | 002590fe-0d22-76eb-1a70-000000000008 | SUMMARY | localhost | Check for required inputs | 0.03s 2021-04-22 07:11:43.305854 | 002590fe-0d22-76eb-1a70-00000000000a | SUMMARY | localhost | Set node_uuids_intro fact | 0.02s 2021-04-22 07:11:43.306397 | 002590fe-0d22-76eb-1a70-000000000010 | SUMMARY | localhost | Check if validation enabled | 0.02s 2021-04-22 07:11:43.306904 | 002590fe-0d22-76eb-1a70-000000000012 | SUMMARY | localhost | Fail if validations are disabled | 0.02s 2021-04-22 07:11:43.307379 | 002590fe-0d22-76eb-1a70-00000000000e | SUMMARY | localhost | Set concurrency fact | 0.02s 2021-04-22 07:11:43.307913 | 002590fe-0d22-76eb-1a70-00000000000c | SUMMARY | localhost | Notice | 0.02s 2021-04-22 07:11:43.308417 | 002590fe-0d22-76eb-1a70-000000000011 | SUMMARY | localhost | Run Validations | 0.02s 2021-04-22 07:11:43.308926 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ End Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-04-22 07:11:43.309423 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ State Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-04-22 07:11:43.310021 | ~~~~~~~~~~~~~~~~~~ Number of nodes which did not deploy successfully: 1 ~~~~~~~~~~~~~~~~~ 2021-04-22 07:11:43.310545 | The following node(s) had failures: localhost 2021-04-22 07:11:43.311080 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Ansible execution failed. 
playbook: /usr/share/ansible/tripleo-playbooks/cli-baremetal-introspect.yaml, Run Status: failed, Return Code: 2 Exception occured while running the command Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run super(Command, self).run(parsed_args) File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 41, in run return super(Command, self).run(parsed_args) File "/usr/lib/python3.6/site-packages/cliff/command.py", line 187, in run return_code = self.take_action(parsed_args) or 0 File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_node.py", line 462, in take_action retry_timeout=parsed_args.retry_timeout, File "/usr/lib/python3.6/site-packages/tripleoclient/workflows/baremetal.py", line 193, in introspect "retry_timeout": retry_timeout, File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 728, in run_ansible_playbook raise RuntimeError(err_msg) RuntimeError: Ansible execution failed. playbook: /usr/share/ansible/tripleo-playbooks/cli-baremetal-introspect.yaml, Run Status: failed, Return Code: 2 Ansible execution failed. playbook: /usr/share/ansible/tripleo-playbooks/cli-baremetal-introspect.yaml, Run Status: failed, Return Code: 2 *(undercloud) [stack at undercloud ~]$ openstack baremetal introspection list* /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! RequestsDependencyWarning) +--------------------------------------+---------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | UUID | Started at | Finished at | Error | +--------------------------------------+---------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf | 2021-04-22T01:41:32 | 2021-04-22T01:41:35 | Failed to set boot device to PXE: Failed to set boot device for node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf: Client Error for url: http://10.0.1.202:6385/v1/nodes/4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf/management/boot_device, Node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf is locked by host undercloud.localdomain, please retry after the current operation is completed. | | 3d091348-e9c7-4e99-80e3-df72d332d935 | 2021-04-21T12:36:30 | 2021-04-21T12:36:32 | Failed to set boot device to PXE: Failed to set boot device for node 3d091348-e9c7-4e99-80e3-df72d332d935: Client Error for url: http://10.0.1.202:6385/v1/nodes/3d091348-e9c7-4e99-80e3-df72d332d935/management/boot_device, Node 3d091348-e9c7-4e99-80e3-df72d332d935 is locked by host undercloud.localdomain, please retry after the current operation is completed. 
| +--------------------------------------+---------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Istvan.Szabo at agoda.com Thu Apr 22 04:19:20 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Thu, 22 Apr 2021 04:19:20 +0000 Subject: Live migration fails In-Reply-To: <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> Message-ID: Sorry, in the log I haven't commented out the servername ☹ it is xy-osfecn-40250 Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Eugen Block Sent: Wednesday, April 21, 2021 5:37 PM To: Szabo, Istvan (Agoda) Cc: openstack-discuss at lists.openstack.org Subject: Re: Live migration fails The error message seems correct, I can't find am-osfecn-4025 either in the list of compute nodes. Can you check in the database if there's an active instance (or several) allocated to that compute node? In that case you would need to correct the allocation in order for the migration to work. Zitat von "Szabo, Istvan (Agoda)" : > Sure: > > https://jpst.it/2u3uh > > These are the one where can't live migrate: > xy-osfecn-40250 > xy-osfecn-40281 > xy-osfecn-40290 > xy-osbecn-40073 > xy-osfecn-40238 > > The compute service are disabled on these because we don't want > anybody spawn a vm on these anymore so want to evacuate all vms. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Eugen Block > Sent: Wednesday, April 21, 2021 3:26 PM > To: openstack-discuss at lists.openstack.org > Subject: Re: Live migration fails > > Hi, > > can you share the output of these commands? > > nova-manage cell_v2 list_hosts > openstack compute service list > > > Zitat von "Szabo, Istvan (Agoda)" : > >> Hi, >> >> I have couple of compute nodes where the live migration fails with >> existing vms. >> When I quickly spawn a vm and try live migration it works so I assume >> shouldn't be a big problem with the compute node. >> However I have many existing vms where it fails with a servername not found. >> >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 ERROR >> nova.conductor.tasks.migrate >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] >> Unable to find record for source node servername on servername: >> ComputeHostNotFound: Compute host servername could not be found. 
>> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 >> WARNING nova.scheduler.utils >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] Failed to compute_task_migrate_server: Compute host >> servername could not be found.: ComputeHostNotFound: Compute host >> servername could not be found. >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 >> WARNING nova.scheduler.utils >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] >> Setting instance to ACTIVE state.: ComputeHostNotFound: Compute host >> servername could not be found. >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 ERROR >> oslo_messaging.rpc.server >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] Exception during message handling: >> ComputeHostNotFound: Compute host am-osfecn-4025 >> >> Tried with this command: >> >> nova live-migration --block-migrate id. >> >> Any idea? >> >> Thank you. >> >> ________________________________ >> This message is confidential and is for the sole use of the intended >> recipient(s). It may also be privileged or otherwise protected by >> copyright or other legal rules. If you have received it by mistake >> please let us know by reply email and delete it from your system. It >> is prohibited to copy this message or disclose its content to anyone. >> Any confidentiality or privilege is not waived or lost by any >> mistaken delivery or unauthorized disclosure of the message. All >> messages sent to and from Agoda may be monitored to ensure compliance >> with company policies, to protect the company's interests and to >> remove potential malware. Electronic messages may be intercepted, >> amended, lost or deleted, or contain viruses. > > > > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by > copyright or other legal rules. If you have received it by mistake > please let us know by reply email and delete it from your system. It > is prohibited to copy this message or disclose its content to anyone. > Any confidentiality or privilege is not waived or lost by any mistaken > delivery or unauthorized disclosure of the message. All messages sent > to and from Agoda may be monitored to ensure compliance with company > policies, to protect the company's interests and to remove potential > malware. Electronic messages may be intercepted, amended, lost or > deleted, or contain viruses. ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. 
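
A read-only check along the lines Eugen suggests above can confirm which
host/node values nova has recorded before anything is changed. This is only a
sketch: the database name, credentials and the example hostname are
assumptions to adapt to the deployment.

  # host/node values recorded for non-deleted instances on the affected node
  mysql -u root -p nova -e "SELECT uuid, host, node, vm_state FROM instances \
      WHERE deleted = 0 AND (host LIKE 'xy-osfecn-40250%' OR node LIKE 'xy-osfecn-40250%');"

  # compare with what nova has registered for the service and the hypervisor
  openstack compute service list --service nova-compute
  openstack hypervisor list
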
From Istvan.Szabo at agoda.com Thu Apr 22 05:06:37 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Thu, 22 Apr 2021 05:06:37 +0000 Subject: Live migration fails In-Reply-To: References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> Message-ID: I think I found the issue, in the instances nova db in the node column the compute node name somehow changed to short hostname. It works fith FQDN but it doesn't work with short ☹ I hope I didn't mess-up anything if I change to FQDN to make it work. Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Szabo, Istvan (Agoda) Sent: Thursday, April 22, 2021 11:19 AM To: Eugen Block Cc: openstack-discuss at lists.openstack.org Subject: RE: Live migration fails Sorry, in the log I haven't commented out the servername ☹ it is xy-osfecn-40250 Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Eugen Block Sent: Wednesday, April 21, 2021 5:37 PM To: Szabo, Istvan (Agoda) Cc: openstack-discuss at lists.openstack.org Subject: Re: Live migration fails The error message seems correct, I can't find am-osfecn-4025 either in the list of compute nodes. Can you check in the database if there's an active instance (or several) allocated to that compute node? In that case you would need to correct the allocation in order for the migration to work. Zitat von "Szabo, Istvan (Agoda)" : > Sure: > > https://jpst.it/2u3uh > > These are the one where can't live migrate: > xy-osfecn-40250 > xy-osfecn-40281 > xy-osfecn-40290 > xy-osbecn-40073 > xy-osfecn-40238 > > The compute service are disabled on these because we don't want > anybody spawn a vm on these anymore so want to evacuate all vms. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Eugen Block > Sent: Wednesday, April 21, 2021 3:26 PM > To: openstack-discuss at lists.openstack.org > Subject: Re: Live migration fails > > Hi, > > can you share the output of these commands? > > nova-manage cell_v2 list_hosts > openstack compute service list > > > Zitat von "Szabo, Istvan (Agoda)" : > >> Hi, >> >> I have couple of compute nodes where the live migration fails with >> existing vms. >> When I quickly spawn a vm and try live migration it works so I assume >> shouldn't be a big problem with the compute node. >> However I have many existing vms where it fails with a servername not found. >> >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 ERROR >> nova.conductor.tasks.migrate >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] >> Unable to find record for source node servername on servername: >> ComputeHostNotFound: Compute host servername could not be found. 
>> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 >> WARNING nova.scheduler.utils >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] Failed to compute_task_migrate_server: Compute host >> servername could not be found.: ComputeHostNotFound: Compute host >> servername could not be found. >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 >> WARNING nova.scheduler.utils >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] >> Setting instance to ACTIVE state.: ComputeHostNotFound: Compute host >> servername could not be found. >> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 ERROR >> oslo_messaging.rpc.server >> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >> default default] Exception during message handling: >> ComputeHostNotFound: Compute host am-osfecn-4025 >> >> Tried with this command: >> >> nova live-migration --block-migrate id. >> >> Any idea? >> >> Thank you. >> >> ________________________________ >> This message is confidential and is for the sole use of the intended >> recipient(s). It may also be privileged or otherwise protected by >> copyright or other legal rules. If you have received it by mistake >> please let us know by reply email and delete it from your system. It >> is prohibited to copy this message or disclose its content to anyone. >> Any confidentiality or privilege is not waived or lost by any >> mistaken delivery or unauthorized disclosure of the message. All >> messages sent to and from Agoda may be monitored to ensure compliance >> with company policies, to protect the company's interests and to >> remove potential malware. Electronic messages may be intercepted, >> amended, lost or deleted, or contain viruses. > > > > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by > copyright or other legal rules. If you have received it by mistake > please let us know by reply email and delete it from your system. It > is prohibited to copy this message or disclose its content to anyone. > Any confidentiality or privilege is not waived or lost by any mistaken > delivery or unauthorized disclosure of the message. All messages sent > to and from Agoda may be monitored to ensure compliance with company > policies, to protect the company's interests and to remove potential > malware. Electronic messages may be intercepted, amended, lost or > deleted, or contain viruses. ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. 
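
The same mismatch can usually be seen without touching the database by
comparing the registered compute service "host" with the hypervisor "node"
name. A sketch only; column names can vary slightly between releases.

  # the "Host" column corresponds to the instances.host value
  openstack compute service list --service nova-compute

  # the "Hypervisor Hostname" column corresponds to the instances.node value
  openstack hypervisor list

  # the cell mapping view that was asked for earlier in the thread
  nova-manage cell_v2 list_hosts
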
From eblock at nde.ag Thu Apr 22 06:01:27 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 22 Apr 2021 06:01:27 +0000 Subject: Live migration fails In-Reply-To: References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> Message-ID: <20210422060127.Horde.IA0j7WyO6k1W5b6eXaUmVrf@webmail.nde.ag> Yeah, the column "node" has the FQDN in my DB, too, only "host" is the short name. The question is how did the short name get into the "node" column, but it will probably be difficult to get to the bottom of that. Zitat von "Szabo, Istvan (Agoda)" : > I think I found the issue, in the instances nova db in the node > column the compute node name somehow changed to short hostname. It > works fith FQDN but it doesn't work with short ☹ I hope I didn't > mess-up anything if I change to FQDN to make it work. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Szabo, Istvan (Agoda) > Sent: Thursday, April 22, 2021 11:19 AM > To: Eugen Block > Cc: openstack-discuss at lists.openstack.org > Subject: RE: Live migration fails > > Sorry, in the log I haven't commented out the servername ☹ it is > xy-osfecn-40250 > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Eugen Block > Sent: Wednesday, April 21, 2021 5:37 PM > To: Szabo, Istvan (Agoda) > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Live migration fails > > The error message seems correct, I can't find am-osfecn-4025 either > in the list of compute nodes. Can you check in the database if > there's an active instance (or several) allocated to that compute > node? In that case you would need to correct the allocation in order > for the migration to work. > > > Zitat von "Szabo, Istvan (Agoda)" : > >> Sure: >> >> https://jpst.it/2u3uh >> >> These are the one where can't live migrate: >> xy-osfecn-40250 >> xy-osfecn-40281 >> xy-osfecn-40290 >> xy-osbecn-40073 >> xy-osfecn-40238 >> >> The compute service are disabled on these because we don't want >> anybody spawn a vm on these anymore so want to evacuate all vms. >> >> Istvan Szabo >> Senior Infrastructure Engineer >> --------------------------------------------------- >> Agoda Services Co., Ltd. >> e: istvan.szabo at agoda.com >> --------------------------------------------------- >> >> -----Original Message----- >> From: Eugen Block >> Sent: Wednesday, April 21, 2021 3:26 PM >> To: openstack-discuss at lists.openstack.org >> Subject: Re: Live migration fails >> >> Hi, >> >> can you share the output of these commands? >> >> nova-manage cell_v2 list_hosts >> openstack compute service list >> >> >> Zitat von "Szabo, Istvan (Agoda)" : >> >>> Hi, >>> >>> I have couple of compute nodes where the live migration fails with >>> existing vms. >>> When I quickly spawn a vm and try live migration it works so I assume >>> shouldn't be a big problem with the compute node. >>> However I have many existing vms where it fails with a servername >>> not found. 
>>> >>> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 ERROR >>> nova.conductor.tasks.migrate >>> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >>> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >>> default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] >>> Unable to find record for source node servername on servername: >>> ComputeHostNotFound: Compute host servername could not be found. >>> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 >>> WARNING nova.scheduler.utils >>> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >>> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >>> default default] Failed to compute_task_migrate_server: Compute host >>> servername could not be found.: ComputeHostNotFound: Compute host >>> servername could not be found. >>> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 >>> WARNING nova.scheduler.utils >>> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >>> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >>> default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] >>> Setting instance to ACTIVE state.: ComputeHostNotFound: Compute host >>> servername could not be found. >>> /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 ERROR >>> oslo_messaging.rpc.server >>> [req-f4067a26-a233-4673-8c07-9a8a290980b0 >>> dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - >>> default default] Exception during message handling: >>> ComputeHostNotFound: Compute host am-osfecn-4025 >>> >>> Tried with this command: >>> >>> nova live-migration --block-migrate id. >>> >>> Any idea? >>> >>> Thank you. >>> >>> ________________________________ >>> This message is confidential and is for the sole use of the intended >>> recipient(s). It may also be privileged or otherwise protected by >>> copyright or other legal rules. If you have received it by mistake >>> please let us know by reply email and delete it from your system. It >>> is prohibited to copy this message or disclose its content to anyone. >>> Any confidentiality or privilege is not waived or lost by any >>> mistaken delivery or unauthorized disclosure of the message. All >>> messages sent to and from Agoda may be monitored to ensure compliance >>> with company policies, to protect the company's interests and to >>> remove potential malware. Electronic messages may be intercepted, >>> amended, lost or deleted, or contain viruses. >> >> >> >> >> >> ________________________________ >> This message is confidential and is for the sole use of the intended >> recipient(s). It may also be privileged or otherwise protected by >> copyright or other legal rules. If you have received it by mistake >> please let us know by reply email and delete it from your system. It >> is prohibited to copy this message or disclose its content to anyone. >> Any confidentiality or privilege is not waived or lost by any mistaken >> delivery or unauthorized disclosure of the message. All messages sent >> to and from Agoda may be monitored to ensure compliance with company >> policies, to protect the company's interests and to remove potential >> malware. Electronic messages may be intercepted, amended, lost or >> deleted, or contain viruses. > > > > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by > copyright or other legal rules. 
If you have received it by mistake
> please let us know by reply email and delete it from your system. It
> is prohibited to copy this message or disclose its content to
> anyone. Any confidentiality or privilege is not waived or lost by
> any mistaken delivery or unauthorized disclosure of the message. All
> messages sent to and from Agoda may be monitored to ensure
> compliance with company policies, to protect the company's interests
> and to remove potential malware. Electronic messages may be
> intercepted, amended, lost or deleted, or contain viruses.

From balazs.gibizer at est.tech  Thu Apr 22 07:28:16 2021
From: balazs.gibizer at est.tech (Balazs Gibizer)
Date: Thu, 22 Apr 2021 09:28:16 +0200
Subject: [nova] No weekly meeting today
Message-ID: <4FFYRQ.TCFA156YRYT32@est.tech>

Hi,

Today's nova weekly IRC meeting is canceled as we will be meeting via the
PTG sessions.

Cheers,
gibi

From smooney at redhat.com  Thu Apr 22 09:13:08 2021
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 22 Apr 2021 10:13:08 +0100
Subject: Live migration fails
In-Reply-To: <20210422060127.Horde.IA0j7WyO6k1W5b6eXaUmVrf@webmail.nde.ag>
References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag>
 <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag>
 <20210422060127.Horde.IA0j7WyO6k1W5b6eXaUmVrf@webmail.nde.ag>
Message-ID: <3f619959d66535fa1745dd320c40a10addb20608.camel@redhat.com>

On Thu, 2021-04-22 at 06:01 +0000, Eugen Block wrote:
> Yeah, the column "node" has the FQDN in my DB, too, only "host" is the
> short name. The question is how did the short name get into the "node"
> column, but it will probably be difficult to get to the bottom of that.

Well, by default we do not expect to have FQDNs in either field. Nova's
default for both is the hostname of the host, which will be the short name,
not the FQDN, unless you set an FQDN in /etc/hostname, which is not generally
the recommended practice.

Nova in general does not support changing the hostname (/etc/hostname) of a
host, and you should avoid changing the "host" value in nova.conf too.
Changing these values can result in the creation of additional placement RPs,
compute service records and compute nodes, and that can result in a
hard-to-fix situation where old VMs are using one set of resources and new
VMs are using the updated ones.

So you should not modify either value in the DB.

Did you perhaps specify a host when live migrating and just pass the wrong
value, or was the host selected by the scheduler?

> 
> 
> Zitat von "Szabo, Istvan (Agoda)" :
> 
> > I think I found the issue, in the instances nova db in the node
> > column the compute node name somehow changed to short hostname. It
> > works fith FQDN but it doesn't work with short ☹ I hope I didn't
> > mess-up anything if I change to FQDN to make it work.
> > 
> > Istvan Szabo
> > Senior Infrastructure Engineer
> > ---------------------------------------------------
> > Agoda Services Co., Ltd.
> > e: istvan.szabo at agoda.com > > --------------------------------------------------- > > > > -----Original Message----- > > From: Eugen Block > > Sent: Wednesday, April 21, 2021 5:37 PM > > To: Szabo, Istvan (Agoda) > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: Live migration fails > > > > The error message seems correct, I can't find am-osfecn-4025 either > > in the list of compute nodes. Can you check in the database if > > there's an active instance (or several) allocated to that compute > > node? In that case you would need to correct the allocation in order > > for the migration to work. > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > Sure: > > > > > > https://jpst.it/2u3uh > > > > > > These are the one where can't live migrate: > > > xy-osfecn-40250 > > > xy-osfecn-40281 > > > xy-osfecn-40290 > > > xy-osbecn-40073 > > > xy-osfecn-40238 > > > > > > The compute service are disabled on these because we don't want > > > anybody spawn a vm on these anymore so want to evacuate all vms. > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Eugen Block > > > Sent: Wednesday, April 21, 2021 3:26 PM > > > To: openstack-discuss at lists.openstack.org > > > Subject: Re: Live migration fails > > > > > > Hi, > > > > > > can you share the output of these commands? > > > > > > nova-manage cell_v2 list_hosts > > > openstack compute service list > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > Hi, > > > > > > > > I have couple of compute nodes where the live migration fails with > > > > existing vms. > > > > When I quickly spawn a vm and try live migration it works so I assume > > > > shouldn't be a big problem with the compute node. > > > > However I have many existing vms where it fails with a servername > > > > not found. > > > > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 ERROR > > > > nova.conductor.tasks.migrate > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > > > > default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > Unable to find record for source node servername on servername: > > > > ComputeHostNotFound: Compute host servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > WARNING nova.scheduler.utils > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > > > > default default] Failed to compute_task_migrate_server: Compute host > > > > servername could not be found.: ComputeHostNotFound: Compute host > > > > servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > WARNING nova.scheduler.utils > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > > > > default default] [instance: 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > Setting instance to ACTIVE state.: ComputeHostNotFound: Compute host > > > > servername could not be found. 
> > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 ERROR > > > > oslo_messaging.rpc.server > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 ca7e35079f4440c78bd9870724b9638b - > > > > default default] Exception during message handling: > > > > ComputeHostNotFound: Compute host am-osfecn-4025 > > > > > > > > Tried with this command: > > > > > > > > nova live-migration --block-migrate id. > > > > > > > > Any idea? > > > > > > > > Thank you. > > > > > > > > ________________________________ > > > > This message is confidential and is for the sole use of the intended > > > > recipient(s). It may also be privileged or otherwise protected by > > > > copyright or other legal rules. If you have received it by mistake > > > > please let us know by reply email and delete it from your system. It > > > > is prohibited to copy this message or disclose its content to anyone. > > > > Any confidentiality or privilege is not waived or lost by any > > > > mistaken delivery or unauthorized disclosure of the message. All > > > > messages sent to and from Agoda may be monitored to ensure compliance > > > > with company policies, to protect the company's interests and to > > > > remove potential malware. Electronic messages may be intercepted, > > > > amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > ________________________________ > > > This message is confidential and is for the sole use of the intended > > > recipient(s). It may also be privileged or otherwise protected by > > > copyright or other legal rules. If you have received it by mistake > > > please let us know by reply email and delete it from your system. It > > > is prohibited to copy this message or disclose its content to anyone. > > > Any confidentiality or privilege is not waived or lost by any mistaken > > > delivery or unauthorized disclosure of the message. All messages sent > > > to and from Agoda may be monitored to ensure compliance with company > > > policies, to protect the company's interests and to remove potential > > > malware. Electronic messages may be intercepted, amended, lost or > > > deleted, or contain viruses. > > > > > > > > > > > > ________________________________ > > This message is confidential and is for the sole use of the intended > > recipient(s). It may also be privileged or otherwise protected by > > copyright or other legal rules. If you have received it by mistake > > please let us know by reply email and delete it from your system. It > > is prohibited to copy this message or disclose its content to > > anyone. Any confidentiality or privilege is not waived or lost by > > any mistaken delivery or unauthorized disclosure of the message. All > > messages sent to and from Agoda may be monitored to ensure > > compliance with company policies, to protect the company's interests > > and to remove potential malware. Electronic messages may be > > intercepted, amended, lost or deleted, or contain viruses. 
> > > > From destienne.maxime at gmail.com Thu Apr 22 10:00:57 2021 From: destienne.maxime at gmail.com (Maxime d'Estienne) Date: Thu, 22 Apr 2021 12:00:57 +0200 Subject: Error with Placement and oslo_config Message-ID: Hello, I'm deploying the Wallaby release, after configuring Placement, I have this error when running placement-status upgrade check File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2543, in find_file raise NotInitializedError() oslo_config.cfg.NotInitializedError: call expression on parser has not been invoked It seems like migrations have been made, connection to database seems ok from the placement.conf file. I added this environment variable : OS_PLACEMENT_DATABASE__CONNECTION=mysql+pymysql://placement:placementPASSWD at controller1/placement (With the right password) I don't understand this error as I used the installation guide to install Placement. Do you know where I could dig ? Thanks a lot ! -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Thu Apr 22 10:30:15 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Thu, 22 Apr 2021 12:30:15 +0200 Subject: Error with Placement and oslo_config In-Reply-To: References: Message-ID: On Thu, Apr 22, 2021 at 12:00, Maxime d'Estienne wrote: > Hello, Hi! > > I'm deploying the Wallaby release, after configuring Placement, I > have this error when running placement-status upgrade check > > > File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2543, > in find_file > raise NotInitializedError() > oslo_config.cfg.NotInitializedError: call expression on parser has > not been invoked I think you are hit the bug [1]. We already fixed it on master. And now I proposed the backport to stable wallaby. [1] https://storyboard.openstack.org/#!/story/2008831 [2] https://review.opendev.org/q/topic:story/2008831 Cheers, gibi > > It seems like migrations have been made, connection to database seems > ok from the placement.conf file. > > I added this environment variable : > OS_PLACEMENT_DATABASE__CONNECTION=mysql+pymysql://placement:placementPASSWD at controller1/placement > > (With the right password) > > I don't understand this error as I used the installation guide to > install Placement. > > Do you know where I could dig ? > > Thanks a lot ! > > From destienne.maxime at gmail.com Thu Apr 22 10:51:46 2021 From: destienne.maxime at gmail.com (Maxime d'Estienne) Date: Thu, 22 Apr 2021 12:51:46 +0200 Subject: Error with Placement and oslo_config In-Reply-To: References: Message-ID: > I think you are hit the bug [1]. We already fixed it on master. And now > I proposed the backport to stable wallaby. > > [1] https://storyboard.openstack.org/#!/story/2008831 > [2] https://review.opendev.org/q/topic:story/2008831 > > Cheers, > gibi Ok, I understand now, Thank you ! Le jeu. 22 avr. 2021 à 12:30, Balazs Gibizer a écrit : > > > On Thu, Apr 22, 2021 at 12:00, Maxime d'Estienne > wrote: > > Hello, > > Hi! > > > > > I'm deploying the Wallaby release, after configuring Placement, I > > have this error when running placement-status upgrade check > > > > > > File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2543, > > in find_file > > raise NotInitializedError() > > oslo_config.cfg.NotInitializedError: call expression on parser has > > not been invoked > > I think you are hit the bug [1]. We already fixed it on master. And now > I proposed the backport to stable wallaby. 
> > [1] https://storyboard.openstack.org/#!/story/2008831 > [2] https://review.opendev.org/q/topic:story/2008831 > > Cheers, > gibi > > > > > It seems like migrations have been made, connection to database seems > > ok from the placement.conf file. > > > > I added this environment variable : > > > OS_PLACEMENT_DATABASE__CONNECTION=mysql+pymysql://placement:placementPASSWD at controller1 > /placement > > > > (With the right password) > > > > I don't understand this error as I used the installation guide to > > install Placement. > > > > Do you know where I could dig ? > > > > Thanks a lot ! > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xxxcloudlearner at gmail.com Thu Apr 22 11:16:30 2021 From: xxxcloudlearner at gmail.com (cloud learner) Date: Thu, 22 Apr 2021 16:46:30 +0530 Subject: unable to access internet in instance Message-ID: Dear All, I have installed rocky on centos 7, as documented at openstack with 2 node controller and compute nodes, as mentioned in the document used vxlan and verified all the steps as mentioned in the document and all things are working fine but i am unable to access the internet in instances. kindly help me sort out this issue. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Apr 22 11:43:35 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 22 Apr 2021 12:43:35 +0100 Subject: unable to access internet in instance In-Reply-To: References: Message-ID: On Thu, 2021-04-22 at 16:46 +0530, cloud learner wrote: > Dear All, > > I have installed rocky on centos 7, as documented at openstack with 2 node > controller and compute nodes, as mentioned in the document used vxlan and > verified all the steps as mentioned in the document and all things are > working fine but i am unable to access the internet in instances. > kindly help me sort out this issue. you have not mentioned what network backedn you are using or how you have connecte the hosts to the internet so there is not much help we can provide without more infomation. are you useing linux bridge or ovs? do you have 1 port or 2+ ports per host. if 2 ports did you confiugre a flat/vlan external network and add your physical network router as the gateway and set up static routes for the openstack external network subnets. if you only have 1 port per host have you set up a dns masquage form the external network subnets so that the traffic can route back. if you are behind a coperate firewall have you set teh dns name servers in your neutron subnets to one that is accessable if port 53 is blocked https://www.rdoproject.org/networking/networking-in-too-much-detail/#network-host-external-traffic-kl might help but there are many things that could be broken and without more info on what you have done i cant really direct you in what direction to take to resolve it. > > Thanks From balazs.gibizer at est.tech Thu Apr 22 11:50:46 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Thu, 22 Apr 2021 13:50:46 +0200 Subject: [nova][placement] introducing review priority label in gerrit Message-ID: Hi, As we discussed yesterday in the retro we would like to start using the review priority label in gerrit as a replacement of the runway process. Sean proposed the project config changes for this[1]. I added couple of questions / suggestions to that patch to start a discussion about the process. Let's try to agree in the review. Then I will document the process in tree. 
Cheers, gibi [1] https://review.opendev.org/c/openstack/project-config/+/787523 From C-Albert.Braden at charter.com Thu Apr 22 12:06:21 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 22 Apr 2021 12:06:21 +0000 Subject: [kolla] VM build fails after Train-Ussuri upgrade In-Reply-To: <13f2c3e3131a407c87d403d6cad3cd53@ncwmexgp009.CORP.CHARTERCOM.com> References: <13f2c3e3131a407c87d403d6cad3cd53@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: <29477f2b0e35492ca951877a48b3acc5@ncwmexgp009.CORP.CHARTERCOM.com> Can anyone help with this upgrade issue? From: Braden, Albert Sent: Monday, April 19, 2021 8:20 AM To: openstack-discuss at lists.openstack.org Subject: [kolla] VM build fails after Train-Ussuri upgrade I upgraded my Train test cluster to Ussuri following these instructions: OpenStack Docs: Operating Kolla The upgrade completed successfully with no failures, and the existing VMs are fine, but new VM build fails with rados.Rados.connect\nrados.PermissionDeniedError: Ubuntu Pastebin I'm running external ceph so I looked at this document: OpenStack Docs: External Ceph It says that I need the following in /etc/kolla/config/glance/ceph.conf: auth_cluster_required = cephx auth_service_required = cephx auth_client_required = cephx I didn't have that, so I added it and then redeployed, but still can't build VMs. I tried adding the same to all copies of ceph.conf and redeployed again, but that didn't help. Does anything else need to change in my ceph config when upgrading from Train to Ussuri? I see some cryptic talk about ceph in the release notes but it's not obvious what I'm being asked to change: OpenStack Docs: Ussuri Series Release Notes I read the bug that it refers to: Bug #1904062 "external ceph cinder volume config breaks volumes ..." : Bugs : kolla-ansible (launchpad.net) But I already have "backend_host=rbd:volumes" so I don't think I'm hitting that. Also I read these sections but I don't see anything obvious here that needs to be changed: * For cinder (cinder-volume and cinder-backup), glance-api and manila keyrings behavior has changed and Kolla Ansible deployment will not copy those keys using wildcards (ceph.*), instead will use newly introduced variables. Your environment may render unusable after an upgrade if your keys in /etc/kolla/config do not match default values for introduced variables. * The default behavior for generating the cinder.conf template has changed. An rbd-1 section will be generated when external Ceph functionality is used, i.e. cinder_backend_ceph is set to true. Previously it was only included when Kolla Ansible internal Ceph deployment mechanism was used. * The rbd section of nova.conf for nova-compute is now generated when nova_backend is set to "rbd". Previously it was only generated when both enable_ceph was "yes" and nova_backend was set to "rbd". My ceph keys have the default name and are in the default locations. I have cinder_backend_ceph: "yes". I don't have a nova_backend setting but I have nova_backend_ceph: "yes" I added nova_backend: "rbd" and redeployed and now I get a different error: rados.Rados.connect\nrados.ObjectNotFound Ubuntu Pastebin I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. 
If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kendall at openstack.org Thu Apr 22 13:25:28 2021 From: kendall at openstack.org (Kendall Waters) Date: Thu, 22 Apr 2021 08:25:28 -0500 Subject: Diversity & Inclusion Social Hour at the PTG sponsored by RDO Message-ID: <7A17C16F-7B5F-466B-9BE7-FC1CEF937CEF@openstack.org> Hey everyone, On behalf of the RDO Community, please join us for an hour of Trivia on Thursday April 22 at 17:00 UTC. We will have trivia related to OpenStack and the other OIF projects as well as the cities we've held events. Time permitting we'll have some Pop Culture trivia as well. Prizes for the first 3 placings and registration is Free! https://eventyay.com/e/5f05de57 Cheers, Kendall Kendall Waters Perez Community Partnerships Manager Open Infrastructure Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Thu Apr 22 14:06:23 2021 From: donny at fortnebula.com (Donny Davis) Date: Thu, 22 Apr 2021 10:06:23 -0400 Subject: Create OpenStack VMs in few seconds In-Reply-To: <20210419132620.noqzlfui7ycstkvc@yuggoth.org> References: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> <20210419132620.noqzlfui7ycstkvc@yuggoth.org> Message-ID: FWIW I had some excellent startup times in Fort Nebula due to using local storage backed by nvme drives. Once the cloud image is copied to the hypervisor, startup's of the vms were usually measured in seconds. Not sure if that fits the requirements, but sub 30 second startups were the norm. This time was including the actual connection from nodepool to the instance. So I imagine the local start time was even faster. What is the requirement for startup times? On Mon, Apr 19, 2021 at 9:31 AM Jeremy Stanley wrote: > On 2021-04-19 16:31:24 +0530 (+0530), open infra wrote: > > On Thu, Mar 25, 2021 at 8:38 PM Jeremy Stanley > wrote: > [...] > > > The next best thing is basically what Nodepool[*] does: start new > > > virtual machines ahead of time and keep them available in the > > > tenant. This does of course mean you're occupying additional quota > > > for whatever base "ready" capacity you've set for your various > > > images/flavors, and that you need to be able to predict how many of > > > what kinds of virtual machines you're going to need in advance. > > > > > > [*] https://zuul-ci.org/docs/nodepool/ > > > > Is it recommended to use nodepool in a production environment? > > I can't begin to guess what you mean by "in a production > environment," but it forms the lifecycle management basis for our > production CI/CD system (as it does for many other Zuul > installations). In the case of the deployment I help run, it's > continuously connected to over a dozen production clouds, both > public and private. > > But anyway, I didn't say "use Nodepool." I suggested you look at > "what Nodepool does" as a model for starting server instances in > advance within the tenants/projects which regularly require instant > access to new virtual machines. > -- > Jeremy Stanley > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. 
Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Thu Apr 22 14:13:07 2021 From: donny at fortnebula.com (Donny Davis) Date: Thu, 22 Apr 2021 10:13:07 -0400 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> <20210419132620.noqzlfui7ycstkvc@yuggoth.org> Message-ID: https://grafana.opendev.org/d/BskTteEGk/nodepool-openedge?orgId=1&from=1587349229268&to=1594290731707 I know nodepool keeps a number of instances up and available for use, but on FN it was usually tapped to the max so I am not sure this logic applies. Anyways in my performance testing and tuning of Openstack to get it moving as fast as possible, my determination was local NVME storage fit the rapid fire use case the best. Next best was on shared NVME storage via ISCSI using cinder caching. On Thu, Apr 22, 2021 at 10:06 AM Donny Davis wrote: > FWIW I had some excellent startup times in Fort Nebula due to using local > storage backed by nvme drives. Once the cloud image is copied to the > hypervisor, startup's of the vms were usually measured in seconds. Not sure > if that fits the requirements, but sub 30 second startups were the norm. > This time was including the actual connection from nodepool to the > instance. So I imagine the local start time was even faster. > > What is the requirement for startup times? > > On Mon, Apr 19, 2021 at 9:31 AM Jeremy Stanley wrote: > >> On 2021-04-19 16:31:24 +0530 (+0530), open infra wrote: >> > On Thu, Mar 25, 2021 at 8:38 PM Jeremy Stanley >> wrote: >> [...] >> > > The next best thing is basically what Nodepool[*] does: start new >> > > virtual machines ahead of time and keep them available in the >> > > tenant. This does of course mean you're occupying additional quota >> > > for whatever base "ready" capacity you've set for your various >> > > images/flavors, and that you need to be able to predict how many of >> > > what kinds of virtual machines you're going to need in advance. >> > > >> > > [*] https://zuul-ci.org/docs/nodepool/ >> > >> > Is it recommended to use nodepool in a production environment? >> >> I can't begin to guess what you mean by "in a production >> environment," but it forms the lifecycle management basis for our >> production CI/CD system (as it does for many other Zuul >> installations). In the case of the deployment I help run, it's >> continuously connected to over a dozen production clouds, both >> public and private. >> >> But anyway, I didn't say "use Nodepool." I suggested you look at >> "what Nodepool does" as a model for starting server instances in >> advance within the tenants/projects which regularly require instant >> access to new virtual machines. >> -- >> Jeremy Stanley >> > > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openinfradn at gmail.com Thu Apr 22 14:23:50 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 22 Apr 2021 19:53:50 +0530 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> <20210419132620.noqzlfui7ycstkvc@yuggoth.org> Message-ID: On Thu, Apr 22, 2021 at 7:46 PM Donny Davis wrote: > > https://grafana.opendev.org/d/BskTteEGk/nodepool-openedge?orgId=1&from=1587349229268&to=1594290731707 > > I know nodepool keeps a number of instances up and available for use, but > on FN it was usually tapped to the max so I am not sure this logic applies. > > Anyways in my performance testing and tuning of Openstack to get it moving > as fast as possible, my determination was local NVME storage fit the > rapid fire use case the best. Next best was on shared NVME storage via > ISCSI using cinder caching. > Local nvme means, utilization of worker node storage? > On Thu, Apr 22, 2021 at 10:06 AM Donny Davis wrote: > >> FWIW I had some excellent startup times in Fort Nebula due to using local >> storage backed by nvme drives. Once the cloud image is copied to the >> hypervisor, startup's of the vms were usually measured in seconds. Not sure >> if that fits the requirements, but sub 30 second startups were the norm. >> This time was including the actual connection from nodepool to the >> instance. So I imagine the local start time was even faster. >> >> What is the requirement for startup times? >> > End-user supposed to access an application based on his/her choice, and it's a dedicated VM for the user. Based on the application end-user selection, VM should be available to the end-user along with required apps and provided hardware. > >> On Mon, Apr 19, 2021 at 9:31 AM Jeremy Stanley wrote: >> >>> On 2021-04-19 16:31:24 +0530 (+0530), open infra wrote: >>> > On Thu, Mar 25, 2021 at 8:38 PM Jeremy Stanley >>> wrote: >>> [...] >>> > > The next best thing is basically what Nodepool[*] does: start new >>> > > virtual machines ahead of time and keep them available in the >>> > > tenant. This does of course mean you're occupying additional quota >>> > > for whatever base "ready" capacity you've set for your various >>> > > images/flavors, and that you need to be able to predict how many of >>> > > what kinds of virtual machines you're going to need in advance. >>> > > >>> > > [*] https://zuul-ci.org/docs/nodepool/ >>> > >>> > Is it recommended to use nodepool in a production environment? >>> >>> I can't begin to guess what you mean by "in a production >>> environment," but it forms the lifecycle management basis for our >>> production CI/CD system (as it does for many other Zuul >>> installations). In the case of the deployment I help run, it's >>> continuously connected to over a dozen production clouds, both >>> public and private. >>> >>> But anyway, I didn't say "use Nodepool." I suggested you look at >>> "what Nodepool does" as a model for starting server instances in >>> advance within the tenants/projects which regularly require instant >>> access to new virtual machines. >>> -- >>> Jeremy Stanley >>> >> >> >> -- >> ~/DonnyD >> C: 805 814 6800 >> "No mission too difficult. No sacrifice too great. Duty First" >> > > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donny at fortnebula.com Thu Apr 22 14:45:05 2021 From: donny at fortnebula.com (Donny Davis) Date: Thu, 22 Apr 2021 10:45:05 -0400 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> <20210419132620.noqzlfui7ycstkvc@yuggoth.org> Message-ID: On Thu, Apr 22, 2021 at 10:24 AM open infra wrote: > > > On Thu, Apr 22, 2021 at 7:46 PM Donny Davis wrote: > >> >> https://grafana.opendev.org/d/BskTteEGk/nodepool-openedge?orgId=1&from=1587349229268&to=1594290731707 >> >> I know nodepool keeps a number of instances up and available for use, but >> on FN it was usually tapped to the max so I am not sure this logic applies. >> >> Anyways in my performance testing and tuning of Openstack to get it >> moving as fast as possible, my determination was local NVME storage fit the >> rapid fire use case the best. Next best was on shared NVME storage via >> ISCSI using cinder caching. >> > > Local nvme means, utilization of worker node storage? > > >> On Thu, Apr 22, 2021 at 10:06 AM Donny Davis >> wrote: >> >>> FWIW I had some excellent startup times in Fort Nebula due to using >>> local storage backed by nvme drives. Once the cloud image is copied to the >>> hypervisor, startup's of the vms were usually measured in seconds. Not sure >>> if that fits the requirements, but sub 30 second startups were the norm. >>> This time was including the actual connection from nodepool to the >>> instance. So I imagine the local start time was even faster. >>> >>> What is the requirement for startup times? >>> >> > End-user supposed to access an application based on his/her choice, and > it's a dedicated VM for the user. > Based on the application end-user selection, VM should be available to the > end-user along with required apps and provided hardware. > > > >> >>> On Mon, Apr 19, 2021 at 9:31 AM Jeremy Stanley >>> wrote: >>> >>>> On 2021-04-19 16:31:24 +0530 (+0530), open infra wrote: >>>> > On Thu, Mar 25, 2021 at 8:38 PM Jeremy Stanley >>>> wrote: >>>> [...] >>>> > > The next best thing is basically what Nodepool[*] does: start new >>>> > > virtual machines ahead of time and keep them available in the >>>> > > tenant. This does of course mean you're occupying additional quota >>>> > > for whatever base "ready" capacity you've set for your various >>>> > > images/flavors, and that you need to be able to predict how many of >>>> > > what kinds of virtual machines you're going to need in advance. >>>> > > >>>> > > [*] https://zuul-ci.org/docs/nodepool/ >>>> > >>>> > Is it recommended to use nodepool in a production environment? >>>> >>>> I can't begin to guess what you mean by "in a production >>>> environment," but it forms the lifecycle management basis for our >>>> production CI/CD system (as it does for many other Zuul >>>> installations). In the case of the deployment I help run, it's >>>> continuously connected to over a dozen production clouds, both >>>> public and private. >>>> >>>> But anyway, I didn't say "use Nodepool." I suggested you look at >>>> "what Nodepool does" as a model for starting server instances in >>>> advance within the tenants/projects which regularly require instant >>>> access to new virtual machines. >>>> -- >>>> Jeremy Stanley >>>> >>> >>> >>> -- >>> ~/DonnyD >>> C: 805 814 6800 >>> "No mission too difficult. No sacrifice too great. Duty First" >>> >> >> >> -- >> ~/DonnyD >> C: 805 814 6800 >> "No mission too difficult. No sacrifice too great. Duty First" >> > >Local nvme means, utilization of worker node storage? 
Yes, the compute nodes were configured to just use the local storage (which is the default I do believe). The directory /var/lib/nova was mounted onto a dedicated NVME device. >End-user supposed to access an application based on his/her choice, and it's a dedicated VM for the user. >Based on the application end-user selection, VM should be available to the end-user along with required apps and provided hardware. This is really a two part answer. You need a portal or mechanism for end users to request a templated application stack (meaning it may take one or more machines to support the application) You also need a method to create cloud images with your applications pre-baked into them. I would also consider creating a mechanism to bake the cloud images with all of the application code and configuration (best you can) already contained in the cloud image. Disk Image Builder is some really great software that can handle the image creation part for you. The Openstack infra team does this on the daily to support the CI. They use DIB, so it's battle tested. It is likely you would end up creating some portal or using a CMP (Cloud Management Platform) to handle end user requests. For instance End users want access to X application, the CMP would orchestrate wiring together all of the bits and pieces and then respond back with access information to the app. You can use Openstack's Heat to do the same thing depending on what you want the experience for end users to be. At this point in Openstack's maturity, it can do just about anything you want it to. You just need to be specific in what you are asking your infrastructure to do and configure it for the use case. Hope this helps and good luck. Cheers ~DonnyD -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Apr 22 15:06:30 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 22 Apr 2021 20:36:30 +0530 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> <20210419132620.noqzlfui7ycstkvc@yuggoth.org> Message-ID: Thanks Donny. On Thu, Apr 22, 2021 at 8:15 PM Donny Davis wrote: > On Thu, Apr 22, 2021 at 10:24 AM open infra wrote: > >> >> >> On Thu, Apr 22, 2021 at 7:46 PM Donny Davis wrote: >> >>> >>> https://grafana.opendev.org/d/BskTteEGk/nodepool-openedge?orgId=1&from=1587349229268&to=1594290731707 >>> >>> I know nodepool keeps a number of instances up and available for use, >>> but on FN it was usually tapped to the max so I am not sure this logic >>> applies. >>> >>> Anyways in my performance testing and tuning of Openstack to get it >>> moving as fast as possible, my determination was local NVME storage fit the >>> rapid fire use case the best. Next best was on shared NVME storage via >>> ISCSI using cinder caching. >>> >> >> Local nvme means, utilization of worker node storage? >> >> >>> On Thu, Apr 22, 2021 at 10:06 AM Donny Davis >>> wrote: >>> >>>> FWIW I had some excellent startup times in Fort Nebula due to using >>>> local storage backed by nvme drives. Once the cloud image is copied to the >>>> hypervisor, startup's of the vms were usually measured in seconds. Not sure >>>> if that fits the requirements, but sub 30 second startups were the norm. >>>> This time was including the actual connection from nodepool to the >>>> instance. So I imagine the local start time was even faster. >>>> >>>> What is the requirement for startup times? 
>>>> >>> >> End-user supposed to access an application based on his/her choice, and >> it's a dedicated VM for the user. >> Based on the application end-user selection, VM should be available to >> the end-user along with required apps and provided hardware. >> >> >> >>> >>>> On Mon, Apr 19, 2021 at 9:31 AM Jeremy Stanley >>>> wrote: >>>> >>>>> On 2021-04-19 16:31:24 +0530 (+0530), open infra wrote: >>>>> > On Thu, Mar 25, 2021 at 8:38 PM Jeremy Stanley >>>>> wrote: >>>>> [...] >>>>> > > The next best thing is basically what Nodepool[*] does: start new >>>>> > > virtual machines ahead of time and keep them available in the >>>>> > > tenant. This does of course mean you're occupying additional quota >>>>> > > for whatever base "ready" capacity you've set for your various >>>>> > > images/flavors, and that you need to be able to predict how many of >>>>> > > what kinds of virtual machines you're going to need in advance. >>>>> > > >>>>> > > [*] https://zuul-ci.org/docs/nodepool/ >>>>> > >>>>> > Is it recommended to use nodepool in a production environment? >>>>> >>>>> I can't begin to guess what you mean by "in a production >>>>> environment," but it forms the lifecycle management basis for our >>>>> production CI/CD system (as it does for many other Zuul >>>>> installations). In the case of the deployment I help run, it's >>>>> continuously connected to over a dozen production clouds, both >>>>> public and private. >>>>> >>>>> But anyway, I didn't say "use Nodepool." I suggested you look at >>>>> "what Nodepool does" as a model for starting server instances in >>>>> advance within the tenants/projects which regularly require instant >>>>> access to new virtual machines. >>>>> -- >>>>> Jeremy Stanley >>>>> >>>> >>>> >>>> -- >>>> ~/DonnyD >>>> C: 805 814 6800 >>>> "No mission too difficult. No sacrifice too great. Duty First" >>>> >>> >>> >>> -- >>> ~/DonnyD >>> C: 805 814 6800 >>> "No mission too difficult. No sacrifice too great. Duty First" >>> >> > >Local nvme means, utilization of worker node storage? > Yes, the compute nodes were configured to just use the local storage > (which is the default I do believe). The directory /var/lib/nova was > mounted onto a dedicated NVME device. > > >End-user supposed to access an application based on his/her choice, and > it's a dedicated VM for the user. > >Based on the application end-user selection, VM should be available to > the end-user along with required apps and provided hardware. > This is really a two part answer. You need a portal or mechanism for end > users to request a templated application stack (meaning it may take one or > more machines to support the application) > You also need a method to create cloud images with your applications > pre-baked into them. > I would also consider creating a mechanism to bake the cloud images with > all of the application code and configuration (best you can) already > contained in the cloud image. Disk Image Builder is some really great > software that can handle the image creation part for you. The Openstack > infra team does this on the daily to support the CI. They use DIB, so it's > battle tested. > > It is likely you would end up creating some portal or using a CMP (Cloud > Management Platform) to handle end user requests. For instance End users > want access to X application, the CMP would orchestrate wiring together all > of the bits and pieces and then respond back with access information to the > app. 
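As an illustration of the image pre-baking step described above, a minimal diskimage-builder run could look like the following. The custom element name, the elements path and the image name are hypothetical and not part of the original advice:

pip install diskimage-builder
export ELEMENTS_PATH=/opt/elements   # assumed location of a custom "my-app" element
disk-image-create ubuntu vm my-app -o my-app-image
openstack image create --disk-format qcow2 --container-format bare --file my-app-image.qcow2 my-app-image

The result is a bootable qcow2 with the application already installed, so per-instance start-up time is reduced to boot plus whatever configuration is left to do at first boot.
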
You can use Openstack's Heat to do the same thing depending on what > you want the experience for end users to be. > > At this point in Openstack's maturity, it can do just about anything you > want it to. You just need to be specific in what you are asking your > infrastructure to do and configure it for the use case. > > I was supposed to use StarlingX to manage Openstack's underlying infrastructure and storage to keep separately (from worker/compute nodes) but not sure how this affects the boot time of VMs. Trying to achieve low latency. > Hope this helps and good luck. > Thanks again! > Cheers > ~DonnyD > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yong.Huang at Dell.com Thu Apr 22 11:13:00 2021 From: Yong.Huang at Dell.com (Huang, Yong) Date: Thu, 22 Apr 2021 11:13:00 +0000 Subject: cinder + Unity In-Reply-To: References: Message-ID: Hi Albert, Could you execute the `cinder get-pools` command to check if Unity driver report the valid pools correctly? the output should be: -------------------------------------------------- stack at ubuntu-xenial:/opt/stack/cinder$ cinder get-pools +----------+--------------------------------+ | Property | Value | +----------+--------------------------------+ | name | ubuntu-xenial at unity#Flash_Pool | +----------+--------------------------------+ +----------+---------------------------------+ | Property | Value | +----------+---------------------------------+ | name | ubuntu-xenial at unity#Manila_Pool | +----------+---------------------------------+ +----------+---------------------------------+ | Property | Value | +----------+---------------------------------+ | name | ubuntu-xenial at unity#Cinder_Pool | +----------+---------------------------------+ -------------------------------------------------- If no valid pools, one possibility is Unity driver not initialized successfully, if the driver initialized successfully, there should be a log in cinder-volume.log like: -------------------------------------------------- Apr 21 03:20:07 ubuntu-xenial cinder-volume[29607]: INFO cinder.volume.manager [None req-4b345a29-0d78-4cf7-8bae-ae541a48aaf3 None None] Driver post RPC initialization completed successfully. -------------------------------------------------- Did you install the storops library which Unity driver relies on? Please check the version info of storops: -------------------------------------------------- stack at ubuntu-xenial:/opt/stack/cinder$ pip show storops Name: storops Version: 1.2.8 Summary: Python API for VNX and Unity. Home-page: https://github.com/emc-openstack/storops Author: Cedric Zhuang Author-email: cedric.zhuang at gmail.com License: Apache Software License Location: /usr/local/lib/python2.7/dist-packages Requires: requests, python-dateutil, persist-queue, cachez, bitmath, enum34, six, PyYAML, retryz -------------------------------------------------- Thanks Yong Huang [EXTERNAL EMAIL] Hi Albert, On Mon, Apr 19, 2021 at 11:45 PM Albert Shih > wrote: Hi everyone, I'm a total newbie with openstack, currently I'm trying to put a POC with a Unity storage element, 4 computes, and few servers (cinder, keystone, glance, neutron, nova, placement and horizon). I think my keystone, glance, placement are working (at least they past the test). Currently I'm trying to make cinder working with my Unity (480), the objectif are to use iSCSI. 
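
One note on the fernet-token thread above: with two keystone nodes behind haproxy, a frequent cause of "Could not recognize Fernet token" is that the fernet key repositories on the two controllers are not identical, so a token issued by one node cannot be validated by the other. A hedged sketch of checking and syncing the keys, assuming the default key repository path and a hypothetical peer hostname (controller2):

# run on each keystone node and compare the output
md5sum /etc/keystone/fernet-keys/*

# if they differ, copy the repository from one node to the other, fix ownership, restart keystone
rsync -av --delete /etc/keystone/fernet-keys/ controller2:/etc/keystone/fernet-keys/
ssh controller2 chown -R keystone:keystone /etc/keystone/fernet-keys
ssh controller2 systemctl restart apache2

The same mismatch would also explain why keystone-only commands against the issuing node work while token validation done by other services (for example "openstack compute service list") fails.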
Here the configuration of my /etc/cinder/cinder.conf [DEFAULT] rootwrap_config = /etc/cinder/rootwrap.conf api_paste_confg = /etc/cinder/api-paste.ini iscsi_helper = tgtadm volume_name_template = volume-%s volume_group = cinder-volumes verbose = True auth_strategy = keystone state_path = /var/lib/cinder lock_path = /var/lock/cinder volumes_dir = /var/lib/cinder/volumes enabled_backends = unity transport_url = rabbit://openstack:XXXXXX at amqp-cloud.private.FQDN/openstack auth_strategy = keystone debug = True verbose = True [database] connection = mysql+pymysql://cinder:XXXXXXX at mariadb-cloud.private.FQDN/cinder [keystone_authtoken] www_authenticate_uri = http://keystone.private.FQDN:5000/v3 [keystone.private.fqdn] auth_url = http://keystone.private.FQDN:5000 [keystone.private.fqdn] identity_uri = http://keystone.private.FQDN:5000 [keystone.private.fqdn] memcached_servers = memcached-cloud.private.FQDN:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = XXXXXX [oslo_concurrency] lock_path = /var/lib/cinder/tmp [unity] # Storage protocol storage_protocol = iSCSI # Unisphere IP san_ip = onering-remote.FQDN # Unisphere username and password san_login = openstack san_password = "XXXXX" # Volume driver name volume_driver = cinder.volume.drivers.dell_emc.unity.Driver # backend's name volume_backend_name = Unitiy_ISCSI This might be something to look at with the wrong spelling causing mismatch. unity_io_ports = *_enp1s0 unity_storage_pool_names = onering When I'm trying to create a storage through a openstack volume create volumetest --type thick_volume_type --size 100 I don't even see (with tcpdump) the cinder server trying to connect to onering-remote.FQDN Inside my /var/log/cinder/cinder-scheduler.log I have 2021-04-19 18:06:56.805 21315 INFO cinder.scheduler.base_filter [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee b1d58ebae6b84f7586ad63b94203d7ae - - -] Filtering removed all hosts for the request with volume ID '06e5f07d-766f-4d07-b3bf-6153a2cf6abd'. Filter results: AvailabilityZoneFilter: (start: 0, end: 0), CapacityFilter: (start: 0, end: 0), CapabilitiesFilter: (start: 0, end: 0) This log mentions that no host is valid to pass the 3 filters in the scheduler. 2021-04-19 18:06:56.806 21315 WARNING cinder.scheduler.filter_scheduler [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee b1d58ebae6b84f7586ad63b94203d7ae - - -] No weighed backend found for volume with properties: {'id': '5f16fc1f-76ff-41ee-8927-56925cf7b00f', 'name': 'thick_volume_type', 'description': None, 'is_public': True, 'projects': [], 'extra_specs': {'provisioning:type': 'thick', 'thick_provisioning_support': 'True'}, 'qos_specs_id': None, 'created_at': '2021-04-19T15:07:09.000000', 'updated_at': None, 'deleted_at': None, 'deleted': False} 2021-04-19 18:06:56.806 21315 INFO cinder.message.api [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee b1d58ebae6b84f7586ad63b94203d7ae - - -] Creating message record for request_id = req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 2021-04-19 18:06:56.811 21315 ERROR cinder.scheduler.flows.create_volume [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee b1d58ebae6b84f7586ad63b94203d7ae - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. 
No weighed backends available It seem (for me) cinder don't try to use unity.... The cinder-volume service is responsible for communicating with the backend and this create request fails on scheduler only, hence no sign of it. Any help ? Regards Looking at the scheduler logs, there are a few things you can check: 1) execute ``cinder-manage service list`` command and check the status of cinder-volume service if it's active or not. If it shows an X sign then check in cinder-volume logs for any startup failure. 2) Check the volume type properties and see if ``volume_backend_name`` is set to the right value i.e. Unitiy_ISCSI (which looks suspicious because the spelling is wrong and there might be a mismatch somewhere) Also it's good to mention the openstack version you're using since the code changes every cycle and it's hard to track the issues with every release. Thanks and regards Rajat Dhasmana -- Albert SHIH Observatoire de Paris Heure local/Local time: Mon Apr 19 08:01:37 PM CEST 2021 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay.faulkner at verizonmedia.com Thu Apr 22 17:46:43 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Thu, 22 Apr 2021 10:46:43 -0700 Subject: [E] Re: [ironic] Ironic Whiteboard v2 call for reviews In-Reply-To: References: Message-ID: Hey, The requested links have been added, and the new whiteboard information migrated into the original etherpad. I backed up the existing IronicWhiteBoard contents and linked it in the "Historical Information" section of the new whiteboard. Thanks, Jay Faulkner On Tue, Apr 20, 2021 at 9:45 AM Ruby Loo wrote: > Thanks for doing this! > > I think 'archive' (ie, keep around existing whiteboard, renamed to > something else) and make your new version the one at > https://etherpad.opendev.org/p/IronicWhiteBoard > . > This way, we don't break anyone's link (and we don't make people update > links). > > --ruby > > On Fri, Apr 16, 2021 at 12:30 PM Jay Faulkner < > jay.faulkner at verizonmedia.com> wrote: > >> Hi all, >> >> Iury and I spent some time this morning updating the Ironic whiteboard >> etherpad to include more immediately useful information to contributors. >> >> We placed this updated whiteboard at >> https://etherpad.opendev.org/p/IronicWhiteBoardv2 >> >> -- our approach was to prune any outdated/broken links or information, and >> focus on making the first part of the whiteboard an easy one-click place >> for folks to see easy ways to contribute. All the rest of the information >> was carried over and reformatted. >> >> Once there is consensus from the team about this being a positive change, >> we should either replace the existing IronicWhiteBoard with the contents of >> the v2 page, or just update links to point to the new one instead. >> >> What do you all think? >> >> Thanks, >> Jay Faulkner >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Thu Apr 22 18:13:48 2021 From: corey.bryant at canonical.com (Corey Bryant) Date: Thu, 22 Apr 2021 14:13:48 -0400 Subject: OpenStack Wallaby for Ubuntu 21.04 and Ubuntu 20.04 LTS Message-ID: The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Wallaby on Ubuntu 21.04 (Hirsute Hippo) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Wallaby release can be found at: https://www.openstack.org/software/wallaby. 
To get access to the Ubuntu Wallaby packages: == Ubuntu 21.04 == OpenStack Wallaby is available by default for installation on Ubuntu 21.04. == Ubuntu 20.04 LTS == The Ubuntu Cloud Archive for OpenStack Wallaby can be enabled on Ubuntu 20.04 by running the following command: sudo add-apt-repository cloud-archive:wallaby The Ubuntu Cloud Archive for Wallaby includes updates for: aodh, barbican, ceilometer, ceph (16.2.0), cinder, designate, designate-dashboard, dpdk (20.11.1), glance, gnocchi, heat, heat-dashboard, horizon, ironic, ironic-ui, keystone, magnum, magnum-ui, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, openvswitch (2.15.0), ovn (20.12.0), ovn-octavia-provider, panko, placement, sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, vitrage, watcher, watcher-dashboard, zaqar, and zaqar-ui. For a full list of packages and versions, please refer to: http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/wallaby_versions.html == Reporting bugs == If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad: sudo ubuntu-bug nova-conductor Thank you to everyone who contributed to OpenStack Wallaby. Enjoy and see you in Xena! Corey (on behalf of the Ubuntu OpenStack Engineering team) -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Thu Apr 22 19:02:53 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Thu, 22 Apr 2021 21:02:53 +0200 Subject: OpenStack Wallaby for Ubuntu 21.04 and Ubuntu 20.04 LTS In-Reply-To: References: Message-ID: <466900a4-b0dc-2e52-7ea3-1c430b8e1d96@linaro.org> W dniu 22.04.2021 o 20:13, Corey Bryant pisze: > The Ubuntu OpenStack team at Canonical is pleased to announce the general > availability of OpenStack Wallaby on Ubuntu 21.04 (Hirsute Hippo) and Ubuntu > 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Wallaby > release can be found at: https://www.openstack.org/software/wallaby > . > > To get access to the Ubuntu Wallaby packages: > == Ubuntu 20.04 LTS == > > The Ubuntu Cloud Archive for OpenStack Wallaby can be enabled on Ubuntu > 20.04 by running the following command: > > sudo add-apt-repository cloud-archive:wallaby > == Reporting bugs == > > If you have any issues please report bugs using the ‘ubuntu-bug’ tool to > ensure that bugs get logged in the right place in Launchpad: https://bugs.launchpad.net/cloud-archive/+bug/1925081 block Kolla from updating Ubuntu support to Wallaby. We cannot build Horizon image because UCA packages have file conflicts: INFO:kolla.common.utils.horizon:Unpacking python3-murano-dashboard (1:11.0.0-0ubuntu1~cloud0) ... INFO:kolla.common.utils.horizon:dpkg: error processing archive /tmp/apt-dpkg-install-ArbxLs/35-python3-murano-dashboard_1%3a11.0.0-0ubuntu1~cloud0_all.deb (--unpack): INFO:kolla.common.utils.horizon: trying to overwrite directory '/usr/lib/python3/dist-packages/openstack_dashboard/enabled' in package python3-heat-dashboard 5.0.0-0ubuntu1~cloud1 with nondirectory INFO:kolla.common.utils.horizon:Selecting previously unselected package python3-octavia-dashboard. 
From corey.bryant at canonical.com Thu Apr 22 20:41:53 2021 From: corey.bryant at canonical.com (Corey Bryant) Date: Thu, 22 Apr 2021 16:41:53 -0400 Subject: OpenStack Wallaby for Ubuntu 21.04 and Ubuntu 20.04 LTS In-Reply-To: <466900a4-b0dc-2e52-7ea3-1c430b8e1d96@linaro.org> References: <466900a4-b0dc-2e52-7ea3-1c430b8e1d96@linaro.org> Message-ID: On Thu, Apr 22, 2021 at 3:02 PM Marcin Juszkiewicz < marcin.juszkiewicz at linaro.org> wrote: > W dniu 22.04.2021 o 20:13, Corey Bryant pisze: > > The Ubuntu OpenStack team at Canonical is pleased to announce the general > > availability of OpenStack Wallaby on Ubuntu 21.04 (Hirsute Hippo) and > Ubuntu > > 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the > Wallaby > > release can be found at: https://www.openstack.org/software/wallaby > > . > > > > To get access to the Ubuntu Wallaby packages: > > > == Ubuntu 20.04 LTS == > > > > The Ubuntu Cloud Archive for OpenStack Wallaby can be enabled on Ubuntu > > 20.04 by running the following command: > > > > sudo add-apt-repository cloud-archive:wallaby > > > == Reporting bugs == > > > > If you have any issues please report bugs using the ‘ubuntu-bug’ tool to > > ensure that bugs get logged in the right place in Launchpad: > > https://bugs.launchpad.net/cloud-archive/+bug/1925081 block Kolla from > updating Ubuntu support to Wallaby. > > We cannot build Horizon image because UCA packages have file conflicts: > > INFO:kolla.common.utils.horizon:Unpacking python3-murano-dashboard > (1:11.0.0-0ubuntu1~cloud0) ... > INFO:kolla.common.utils.horizon:dpkg: error processing archive > /tmp/apt-dpkg-install-ArbxLs/35-python3-murano-dashboard_1%3a11.0.0-0ubuntu1~cloud0_all.deb > > (--unpack): > INFO:kolla.common.utils.horizon: trying to overwrite directory > '/usr/lib/python3/dist-packages/openstack_dashboard/enabled' in package > python3-heat-dashboard 5.0.0-0ubuntu1~cloud1 with nondirectory > INFO:kolla.common.utils.horizon:Selecting previously unselected package > python3-octavia-dashboard. > Hi Marcin, That heat-dashboard issue should be fixed now. Can you give it another try? Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus.fnst at fujitsu.com Fri Apr 23 01:50:38 2021 From: hus.fnst at fujitsu.com (hus.fnst at fujitsu.com) Date: Fri, 23 Apr 2021 01:50:38 +0000 Subject: Regarding whether sushy will support etag Message-ID: Dear maintainers: Hello, I am Hu Shuai from FNST. I am sending this email to know if sushy will support etag. For now sushy does not support etag. However etag is useful for reducing network data and redfish dmtf organization also recommends redfish to support etag. In addition, I raised an issue about this question before but I have not received a reply. Thanks for your reading and I am looking forward to your opinions about this question. Best regards! Hu Shuai -------------- next part -------------- An HTML attachment was scrubbed... URL: From Istvan.Szabo at agoda.com Fri Apr 23 02:13:00 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Fri, 23 Apr 2021 02:13:00 +0000 Subject: Live migration fails In-Reply-To: <3f619959d66535fa1745dd320c40a10addb20608.camel@redhat.com> References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> <20210422060127.Horde.IA0j7WyO6k1W5b6eXaUmVrf@webmail.nde.ag> <3f619959d66535fa1745dd320c40a10addb20608.camel@redhat.com> Message-ID: My /etc/hostname has only short name. 
The nova.conf host value is also short name. The host has been selected by the scheduler: nova live-migration --block-migrate 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0 What has been changed is in the instances table in the nova DB the node field of the vm. So actually I don't change the compute host value just edited the VM value actually. Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Sean Mooney Sent: Thursday, April 22, 2021 4:13 PM To: openstack-discuss at lists.openstack.org Subject: Re: Live migration fails On Thu, 2021-04-22 at 06:01 +0000, Eugen Block wrote: > Yeah, the column "node" has the FQDN in my DB, too, only "host" is the > short name. The question is how did the short name get into the "node" > column, but it will probably be difficult to get to the bottom of that. well by default we do not expect to have FQDNs in either filed. novas default for both is the hostname of the host which will be the short name not the fqdn unless you set an fqdn in /etc/hostname which is not generally the recommended pratice. nova in general does nto support changing the hostname(/etc/hostname) of a host and you should avoid changeing the "host" value in the nova.conf too. changing these values can result in the creation fo addtional placment RP, compute service records and compute nodes and that can result in hard to fix situation wehre old vms are using one set of resouce and new vms are using the updated ones. so you should not modify either value in the db. did you perhaps specify a host when live migrating and just pass the wrong value or was the host selected by the scheduler. > > > Zitat von "Szabo, Istvan (Agoda)" : > > > I think I found the issue, in the instances nova db in the node > > column the compute node name somehow changed to short hostname. It > > works fith FQDN but it doesn't work with short ☹ I hope I didn't > > mess-up anything if I change to FQDN to make it work. > > > > Istvan Szabo > > Senior Infrastructure Engineer > > --------------------------------------------------- > > Agoda Services Co., Ltd. > > e: istvan.szabo at agoda.com > > --------------------------------------------------- > > > > -----Original Message----- > > From: Szabo, Istvan (Agoda) > > Sent: Thursday, April 22, 2021 11:19 AM > > To: Eugen Block > > Cc: openstack-discuss at lists.openstack.org > > Subject: RE: Live migration fails > > > > Sorry, in the log I haven't commented out the servername ☹ it is > > xy-osfecn-40250 > > > > Istvan Szabo > > Senior Infrastructure Engineer > > --------------------------------------------------- > > Agoda Services Co., Ltd. > > e: istvan.szabo at agoda.com > > --------------------------------------------------- > > > > -----Original Message----- > > From: Eugen Block > > Sent: Wednesday, April 21, 2021 5:37 PM > > To: Szabo, Istvan (Agoda) > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: Live migration fails > > > > The error message seems correct, I can't find am-osfecn-4025 either > > in the list of compute nodes. Can you check in the database if > > there's an active instance (or several) allocated to that compute > > node? In that case you would need to correct the allocation in order > > for the migration to work. 
> > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > Sure: > > > > > > https://jpst.it/2u3uh > > > > > > These are the one where can't live migrate: > > > xy-osfecn-40250 > > > xy-osfecn-40281 > > > xy-osfecn-40290 > > > xy-osbecn-40073 > > > xy-osfecn-40238 > > > > > > The compute service are disabled on these because we don't want > > > anybody spawn a vm on these anymore so want to evacuate all vms. > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Eugen Block > > > Sent: Wednesday, April 21, 2021 3:26 PM > > > To: openstack-discuss at lists.openstack.org > > > Subject: Re: Live migration fails > > > > > > Hi, > > > > > > can you share the output of these commands? > > > > > > nova-manage cell_v2 list_hosts > > > openstack compute service list > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > Hi, > > > > > > > > I have couple of compute nodes where the live migration fails > > > > with existing vms. > > > > When I quickly spawn a vm and try live migration it works so I > > > > assume shouldn't be a big problem with the compute node. > > > > However I have many existing vms where it fails with a > > > > servername not found. > > > > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > ERROR nova.conductor.tasks.migrate > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > Unable to find record for source node servername on servername: > > > > ComputeHostNotFound: Compute host servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > WARNING nova.scheduler.utils > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] Failed to > > > > compute_task_migrate_server: Compute host servername could not > > > > be found.: ComputeHostNotFound: Compute host servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > WARNING nova.scheduler.utils > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > Setting instance to ACTIVE state.: ComputeHostNotFound: Compute > > > > host servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 > > > > ERROR oslo_messaging.rpc.server > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] Exception during message handling: > > > > ComputeHostNotFound: Compute host am-osfecn-4025 > > > > > > > > Tried with this command: > > > > > > > > nova live-migration --block-migrate id. > > > > > > > > Any idea? > > > > > > > > Thank you. > > > > > > > > ________________________________ This message is confidential > > > > and is for the sole use of the intended recipient(s). It may > > > > also be privileged or otherwise protected by copyright or other > > > > legal rules. 
If you have received it by mistake please let us > > > > know by reply email and delete it from your system. It is > > > > prohibited to copy this message or disclose its content to anyone. > > > > Any confidentiality or privilege is not waived or lost by any > > > > mistaken delivery or unauthorized disclosure of the message. All > > > > messages sent to and from Agoda may be monitored to ensure > > > > compliance with company policies, to protect the company's > > > > interests and to remove potential malware. Electronic messages > > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > ________________________________ > > > This message is confidential and is for the sole use of the > > > intended recipient(s). It may also be privileged or otherwise > > > protected by copyright or other legal rules. If you have received > > > it by mistake please let us know by reply email and delete it from > > > your system. It is prohibited to copy this message or disclose its content to anyone. > > > Any confidentiality or privilege is not waived or lost by any > > > mistaken delivery or unauthorized disclosure of the message. All > > > messages sent to and from Agoda may be monitored to ensure > > > compliance with company policies, to protect the company's > > > interests and to remove potential malware. Electronic messages may > > > be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > ________________________________ > > This message is confidential and is for the sole use of the intended > > recipient(s). It may also be privileged or otherwise protected by > > copyright or other legal rules. If you have received it by mistake > > please let us know by reply email and delete it from your system. It > > is prohibited to copy this message or disclose its content to > > anyone. Any confidentiality or privilege is not waived or lost by > > any mistaken delivery or unauthorized disclosure of the message. All > > messages sent to and from Agoda may be monitored to ensure > > compliance with company policies, to protect the company's interests > > and to remove potential malware. Electronic messages may be > > intercepted, amended, lost or deleted, or contain viruses. > > > > ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. From shalabhgoel13 at gmail.com Fri Apr 23 05:27:30 2021 From: shalabhgoel13 at gmail.com (Shalabh Goel) Date: Fri, 23 Apr 2021 10:57:30 +0530 Subject: Placement service error on one of the controller nodes Message-ID: Hi, I am setting up an Openstack environment with 2 controller nodes. 
I have installed placement service on both by following the steps from
https://docs.openstack.org/placement/victoria/install/install-ubuntu.html#configure-user-and-endpoints

My first node is 192.168.2.14 and the second is 192.168.2.15, with the virtual IP 192.168.2.100 on which haproxy is listening. After restarting the apache2 service, the placement service is running on the first node (192.168.2.14) but is giving the following error on the second one (192.168.2.15):

2021-04-23 10:39:03.314524 2021-04-23 10:39:03.314 8299 WARNING placement.db_api [-] TransactionFactory already started, not reconfiguring.\x1b[00m
2021-04-23 10:39:03.322774 2021-04-23 10:39:03.322 8299 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True.\x1b[00m
2021-04-23 10:39:03.391538 2021-04-23 10:39:03.391 8299 WARNING keystonemiddleware.auth_token [-] Configuring www_authenticate_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint \x1b[00m
2021-04-23 10:39:03.394729 mod_wsgi (pid=8299): Failed to exec Python script file '/usr/bin/placement-api'.
2021-04-23 10:39:03.394771 mod_wsgi (pid=8299): Exception occurred processing WSGI script '/usr/bin/placement-api'.

The second issue is that I am not able to use the openstack compute command. It throws an authentication error, but I am able to use other openstack commands. I found this line in the placement-api log in the apache directory:

2021-04-23 10:46:08.450216 2021-04-23 10:46:08.449 1421 WARNING keystonemiddleware.auth_token [req-f17428d0-183f-4e3d-8343-74b04dad7944 - - - - -] Authorization failed for token: keystonemiddleware.auth_token._exceptions.InvalidToken\x1b[00

How can I solve these issues? Are any other details needed?

Thanks

--
Shalabh Goel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marcin.juszkiewicz at linaro.org  Fri Apr 23 06:07:23 2021
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Fri, 23 Apr 2021 08:07:23 +0200
Subject: OpenStack Wallaby for Ubuntu 21.04 and Ubuntu 20.04 LTS
In-Reply-To: 
References: <466900a4-b0dc-2e52-7ea3-1c430b8e1d96@linaro.org>
Message-ID: 

On 22.04.2021 at 22:41, Corey Bryant wrote:
> On Thu, Apr 22, 2021 at 3:02 PM Marcin Juszkiewicz On 22.04.2021
> at 20:13, Corey Bryant wrote:
>>> The Ubuntu OpenStack team at Canonical is pleased to announce the
>>> general availability of OpenStack Wallaby on Ubuntu 21.04
>>> (Hirsute Hippo) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu
>>> Cloud Archive.
>> We cannot build Horizon image because UCA packages have file
>> conflicts:
> That heat-dashboard issue should be fixed now. Can you give it
> another try?

I did. Same issue, different package versions.

From honjo.rikimaru at ntt-tx.co.jp  Fri Apr 23 06:39:32 2021
From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo)
Date: Fri, 23 Apr 2021 15:39:32 +0900
Subject: How do I delete my account of review.opendev.org?
Message-ID: <6727c765-623e-119c-fe4a-75bf1873ebe7@ntt-tx.co.jp>

Hi,

How do I delete my account of review.opendev.org?
I couldn't find the UI or documents for it.

I'm sorry if I overlooked it.

Best regards,

From elfosardo at gmail.com  Fri Apr 23 07:49:55 2021
From: elfosardo at gmail.com (Riccardo Pittau)
Date: Fri, 23 Apr 2021 09:49:55 +0200
Subject: [E] Re: [ironic] Ironic Whiteboard v2 call for reviews
In-Reply-To: 
References: 
Message-ID: 

Awesome!
Thanks Jay, that looks much cleaner now Riccardo On Thu, Apr 22, 2021 at 7:55 PM Jay Faulkner wrote: > Hey, > > The requested links have been added, and the new whiteboard information > migrated into the original etherpad. I backed up the existing > IronicWhiteBoard contents and linked it in the "Historical Information" > section of the new whiteboard. > > Thanks, > Jay Faulkner > > On Tue, Apr 20, 2021 at 9:45 AM Ruby Loo wrote: > >> Thanks for doing this! >> >> I think 'archive' (ie, keep around existing whiteboard, renamed to >> something else) and make your new version the one at >> https://etherpad.opendev.org/p/IronicWhiteBoard >> . >> This way, we don't break anyone's link (and we don't make people update >> links). >> >> --ruby >> >> On Fri, Apr 16, 2021 at 12:30 PM Jay Faulkner < >> jay.faulkner at verizonmedia.com> wrote: >> >>> Hi all, >>> >>> Iury and I spent some time this morning updating the Ironic whiteboard >>> etherpad to include more immediately useful information to contributors. >>> >>> We placed this updated whiteboard at >>> https://etherpad.opendev.org/p/IronicWhiteBoardv2 >>> >>> -- our approach was to prune any outdated/broken links or information, and >>> focus on making the first part of the whiteboard an easy one-click place >>> for folks to see easy ways to contribute. All the rest of the information >>> was carried over and reformatted. >>> >>> Once there is consensus from the team about this being a positive >>> change, we should either replace the existing IronicWhiteBoard with the >>> contents of the v2 page, or just update links to point to the new one >>> instead. >>> >>> What do you all think? >>> >>> Thanks, >>> Jay Faulkner >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From shalabhgoel13 at gmail.com Fri Apr 23 08:09:10 2021 From: shalabhgoel13 at gmail.com (Shalabh Goel) Date: Fri, 23 Apr 2021 13:39:10 +0530 Subject: [keystone][victoria] keystone fernet token not found/recognized Message-ID: Hi, I have installed Openstack victoria on Ubuntu server 20.04. The issue is that other services like nova and neutron are not able to authenticate with the keystone service. In the keystone log, the following lines are appearing. 2021-04-23 13:24:53.217 23331 WARNING keystone.server.flask.application [req-2b575a77-2f9f-444b-9f31-061de41fd602 f3571e3ee8d84fb0ba14bdbe2542916d 930b9693e7694aa68370352c7b911168 - default default] Could not recognize Fernet token: keystone.exception.TokenNotFound: Could not recognize Fernet token Also, I am able to run openstack client commands related to keystone on the server which tells me that keystone related things are working fine. But I am not able to execute commands like openstack compute service list. I am not able to find anything on this after searching online. Can this be an issue with the way haproxy is configured? Since there are 2 servers with keystone installed on them with pacemaker and haproxy. Please tell me what I can do to solve this issue. Thanks Shalabh -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Apr 23 10:04:15 2021 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 23 Apr 2021 12:04:15 +0200 Subject: [largescale-sig] PTG meeting: April 21, 15utc In-Reply-To: <203c7f30-b8dd-8c83-8042-f8bc56075ddb@openstack.org> References: <203c7f30-b8dd-8c83-8042-f8bc56075ddb@openstack.org> Message-ID: <0ef898f9-b305-ccd4-d6cf-dd99548d7152@openstack.org> We held our PTG meeting Wednesday. 
We had a pretty good turn-out with new faces, which is great! Beyond the presentation of the Large Scale SIG, we discussed doing our next video meetings as part of the OpenInfra.Live event and which topic would be great in that setting. Since a few of us could not join in the originally-targeted openinfra.live slot (May 13), we managed to move the slot to Thursday May 20 at 14:00UTC. The topic will be "Upgrades in a Large Scale OpenStack infrastructure", with several presenters going over their techniques for keeping up to date. We already have CERN (Belmiro) and OVH (Arnaud) signed up. If you're interested in participating in the live show and giving a quick overview (5min, 2-4 slides) of how your organization handles upgrades in a large scale deployment, before taking questions from the live audience, let me know! Our next meeting will be Wednesday, May 5 at 15utc on #openstack-meeting-3 in Freenode IRC. See https://etherpad.opendev.org/p/large-scale-sig-meeting for the agenda! Cheers, -- Thierry Carrez (ttx) From mdulko at redhat.com Fri Apr 23 10:43:09 2021 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Fri, 23 Apr 2021 12:43:09 +0200 Subject: [kuryr] vPTG April 2021 In-Reply-To: References: Message-ID: <103de81b54953aa59cc78dc2030a20618ffe1091.camel@redhat.com> Hi, The Kuryr PTG sessions ended today. Here's a high level summary of the sessions: * We've did a retrospective of the Wallaby cycle. Turns out quite a lot got implemented with highlights like support for SCTP, multiple nodes subnets and services without selectors, as well as stabilizing the e2e network policy gate. Some of those contributions were done by Outreachy intern Tabitha, which I see as a great success. * We had some Outreachy applicants present this time too. We've talked about the pain points of starting contribution to the project. This mainly focuses on problems with setting up a VM with DevStack and resources required to run it. * There was a discussion about having an OpenShift 4.x gate in kuryr- kubernetes. There are multiple challenges here, the main being access to a cloud able to run OpenShift 4.x + Kuryr. * A go-ahead was given for OpenShift 3.11 support in DevStack. The main argument is that current kuryr-kubernetes cannot work with so ancient K8s API version. * It was decided that we're going to default our CI gates to use OVN + ovn-octavia-provider instead of ovs + Amphora and enable network policy support. This is mainly triggered by Neutron team switching the DevStack's default to OVN. * With work switching DevStack plugin to use kubeadm progressing we see opportunity to test multiple K8s versions in the gate. * Regarding dual stack feature we discussed design detail of Endpoints vs EndpointSlices problem and decided that we might need to just support both and let Kuryr decide which has more important information. Full notes can be found in etherpad [1]. Thanks, Michał [1] https://etherpad.opendev.org/p/apr2021-ptg-kuryr On Wed, 2021-03-31 at 09:57 -0300, Maysa De Macedo Souza wrote: > Hello everyone, > > The April 20nd session needed to be rescheduled to April 23rd at the > same time. > Feel free to include any topic that is desired to be discussed at the > Kuryr etherpad[1]. > > Cheers, > Maysa. 
> > [1] https://etherpad.opendev.org/p/xena-ptg-kuryr > > On Mon, Mar 22, 2021 at 3:40 PM Maysa De Macedo Souza > wrote: > > Hello, > > > > Two sessions were scheduled for Kuryr on the upcoming PTG: > > - 7-8 UTC on April 20 > > - 2-3 UTC on April 22 > > > > Everyone is more than welcome to join the sessions and check our > > future plans, give feedback or discuss anything regarding Kuryr. > > > > Even though participation is free registration is needed[1]. > > > > Regards, > > Maysa Macedo. > > > > [1] https://april2021-ptg.eventbrite.com > > > > From fungi at yuggoth.org Fri Apr 23 11:53:30 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 23 Apr 2021 11:53:30 +0000 Subject: [infra] How do I delete my account of review.opendev.org? In-Reply-To: <6727c765-623e-119c-fe4a-75bf1873ebe7@ntt-tx.co.jp> References: <6727c765-623e-119c-fe4a-75bf1873ebe7@ntt-tx.co.jp> Message-ID: <20210423115330.e2bjnkxywg7uzlzj@yuggoth.org> On 2021-04-23 15:39:32 +0900 (+0900), Rikimaru Honjo wrote: > How do I delete my account of review.opendev.org? > I couldn't find the UI or documents for it. > > I'm sorry if I overlooked it. Gerrit doesn't have an account deletion mechanism, however administrators can set an account to inactive state so it is no longer usable. I've done this for you now. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corey.bryant at canonical.com Fri Apr 23 12:18:56 2021 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 23 Apr 2021 08:18:56 -0400 Subject: OpenStack Wallaby for Ubuntu 21.04 and Ubuntu 20.04 LTS In-Reply-To: References: <466900a4-b0dc-2e52-7ea3-1c430b8e1d96@linaro.org> Message-ID: On Fri, Apr 23, 2021 at 2:07 AM Marcin Juszkiewicz < marcin.juszkiewicz at linaro.org> wrote: > W dniu 22.04.2021 o 22:41, Corey Bryant pisze: > > On Thu, Apr 22, 2021 at 3:02 PM Marcin Juszkiewicz W dniu 22.04.2021 > > o 20:13, Corey Bryant pisze: > > >>> The Ubuntu OpenStack team at Canonical is pleased to announce the > >>> general availability of OpenStack Wallaby on Ubuntu 21.04 > >>> (Hirsute Hippo) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu > >>> Cloud Archive. > > >> We cannot build Horizon image because UCA packages have file > >> conflicts: > > > That heat-dashboard issue should be fixed now. Can you give it > > another try? > > I did. Same issue, different package versions. > > Ok this is a similar issue that heat-dashboard was hitting but it's in murano-dashboard. I'll get it fixed today and will follow up in your bug that you linked earlier. Thanks again for reporting this! Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Apr 23 12:37:16 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 23 Apr 2021 15:37:16 +0300 Subject: [TripleO] Xena PTG wrapup and links Message-ID: Hello TripleO o/ a massive ** THANK YOU! ** to everyone that took part in our PTG meeting and especially the folks that prepared for 18 sessions across four days. Participants were variable but at most I saw/noticed these numbers: Monday:54, Tuesday:49, Wednesday:46, Thursday:42 I've included a condensed listing below [3] for convenience and you can find the full schedule at [1] with session descriptions, links to individual topic etherpads and links to the daily recordings. 
Note there is a problem with day 1 recording, I am waiting to hear back but otherwise that's all we have right now (day 2/3/4 are complete as far as i can see - didn't watch it all again!). Our "team photo" from Monday is at [2] apologies to the person who is hidden behind the jitsi controls. In my defense we did this when we just switched to meetpad after the initial zoom issues. For what it's worth, if you send me a link to an updated photo I can add you in again ;) regards, marios [1] https://etherpad.opendev.org/p/tripleo-ptg-xena [2] https://i.imgur.com/mxaGaSf.jpg [3] DAY 1: Topic: Wallaby cycle community retrospective Proposer: marios Topic: Plan/Swift removal update Proposer: cloudnull Topic: Ephemeral Heat Update Proposer: slagle Topic: update and planned work networks & ports v2 "no heat" Proposer: hjensas Topic: Ceph Integration Update Proposer: gfidente, fmount DAY2: Topic: Lets move to whole disk overcloud-full images by default Proposer: sbaker Topic: Get rid of the service healthchecks Proposer: Tengu Topic: Tripleo Repos - single sourcing repo setup Proposer: sshaidm, marios, chkumar, rlandy Topic: One yaml to rule all tempest tests Proposer: arxcruz Topic: TripleO.Next Proposer: mwhahaha DAY3: Topic: Get rid of the terrible tripleo-ansible-inventory script Proposer: Tengu Topic: Validation Framework UI Output Proposer: dpeacock/strider/matbu/jpodivin Topic Validation Framework Next Generation Proposer: dpeacock Topic: BGP Routing with FRR Proposer: dsneddon Topic: Consolidate Update/Upgrade Proposer: holser DAY4: Topic: RBAC and TripleO Proposer: LBragstad, HRybacki Topic: What and when to expect w/ CentOS-Stream-9 Proposer: Wes Hayutin Topic: os-migrate brief introduction Proposer: jistr, ccamacho -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Fri Apr 23 12:54:37 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Fri, 23 Apr 2021 14:54:37 +0200 Subject: [keystone] Token Message-ID: Hi Which CLI setting sets domain_id field in a token? I tried openstack —os-domain-id SOME_OS_COMMAND, openstack —os-default-domain SOME_OS_COMMAND, openstack —os-default-domain_id SOME_OS_COMMAND but none of them sets this field and policies checking domain_id:%(domain_id) don’t work because of that. Interesting thing is that horizon somehow generates token with domain_id set and everything works with the same policies, I have a problem only with CLI. Can user_domain_id (which is inside of every token is see for particular user) be used instead of domain_id? 
Example token from CLI: 2021-04-23 12:16:38.090 700 DEBUG keystone.server.flask.request_processing.middleware.auth_context [req-117bc600-490e-46ae-a857-0c8d09dc1dbc 9adbxxxxb02ef 61d4xxxx9c0f - 3a08xxxx82c1 3a08xxxx82c1] RBAC: auth_context: {'token': , 'domain_id': None, 'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_name': None, 'group_ids': [], 'user_id': '9adbxxxx02ef', 'user_domain_id': '3a08xxxx82c1', 'system_scope': None, 'project_id': '61d4xxxx9c0f', 'project_domain_id': '3a08xxxx82c1', 'roles': ['member', 'project_admin', 'reader', 'domain_admin'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} fill_context /var/lib/kolla/venv/lib/python3.8/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py:478 Example token from Horizon: 2021-04-23 12:48:21.009 704 DEBUG keystone.server.flask.request_processing.middleware.auth_context [req-d6d89d3e-c3c1-48c0-b3ed-b3dcedb54db3 9adbxxxx02ef - 3a08xxxx82c1 3a08xxxx82c1 -] RBAC: auth_context: {'token': , 'domain_id': '3a08xxx82c1', 'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_name': ‚xxxx', 'group_ids': [], 'user_id': '9adbxxxx02ef', 'user_domain_id': '3a08xxxx82c1', 'system_scope': None, 'project_id': None, 'project_domain_id': None, 'roles': ['project_admin', 'member', 'reader', 'domain_admin'], 'is_admin_project': False, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} fill_context /var/lib/kolla/venv/lib/python3.8/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py:478 Best regards Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Apr 23 13:29:34 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 23 Apr 2021 08:29:34 -0500 Subject: [openstack-community] Error in downloading Devstack In-Reply-To: References: Message-ID: Jean, I'm including the OpenStack Discuss list as it is the appropriate list. In order to help we'll need to know what issues you're seeing. We are finishing up our Project Team Gathering today so it may be a few days to get a response. Thanks, Amy (spotz) On Fri, Apr 23, 2021 at 1:50 AM Jean Richard wrote: > Dear Openstack Community, > > I am new to Openstack and I am having problems downloading Devstack. I > followed the instructions of "DevStack Docs" and some books. But I am > getting a lot of error messages. Can someone please help me with a good > reference. > I am looking forward to hearing from you. > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Apr 23 13:43:06 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 23 Apr 2021 16:43:06 +0300 Subject: [TripleO] Xena PTG wrapup and links In-Reply-To: References: Message-ID: On Fri, Apr 23, 2021 at 3:37 PM Marios Andreou wrote: > > Hello TripleO o/ > > a massive ** THANK YOU! ** to everyone that took part in our PTG meeting and especially the folks that prepared for 18 sessions across four days. 
Participants were variable but at most I saw/noticed these numbers: Monday:54, Tuesday:49, Wednesday:46, Thursday:42 > > I've included a condensed listing below [3] for convenience and you can find the full schedule at [1] > with session descriptions, links to individual topic etherpads and links to the daily recordings. Note there is a problem with day 1 recording, I am waiting to hear back but otherwise that's all we have right now (day 2/3/4 are complete as far as i can see - didn't watch it all again!). > > Our "team photo" from Monday is at [2] apologies to the person who is hidden behind the jitsi controls. In my defense we did this when we just switched to meetpad after the initial zoom issues. For what it's worth, if you send me a link to an updated photo I can add you in again ;) > > regards, marios > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > [2] https://i.imgur.com/mxaGaSf.jpg > > [3] DAY 1: > Topic: Wallaby cycle community retrospective Proposer: marios > Topic: Plan/Swift removal update Proposer: cloudnull > Topic: Ephemeral Heat Update Proposer: slagle > Topic: update and planned work networks & ports v2 "no heat" Proposer: hjensas > Topic: Ceph Integration Update Proposer: gfidente, fmount apologies John: I dropped ", fultonj" on the ceph session in my copy/pasta thanks From Arkady.Kanevsky at dell.com Fri Apr 23 14:07:56 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 23 Apr 2021 14:07:56 +0000 Subject: [Ironic] Proposal to move weekly meeting 1 hour earlier Message-ID: Team, On today's PTG Interop meeting we proposed to move weekly Friday meeting 1 hour earlier to 16:00 UTC. Please, respond if the proposed time will or not will work for you. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Apr 23 14:12:11 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 23 Apr 2021 16:12:11 +0200 Subject: [Ironic] Proposal to move weekly meeting 1 hour earlier In-Reply-To: References: Message-ID: I suspect you wanted to use [interop] tag, not [ironic] :) On Fri, Apr 23, 2021 at 4:10 PM Kanevsky, Arkady wrote: > Team, > > On today’s PTG Interop meeting we proposed to move weekly Friday meeting 1 > hour earlier to 16:00 UTC. > > Please, respond if the proposed time will or not will work for you. > > > > Thanks, > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Apr 23 14:12:54 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 23 Apr 2021 17:12:54 +0300 Subject: [TripleO] Xena PTG wrapup and links In-Reply-To: References: Message-ID: On Friday, April 23, 2021, Marios Andreou wrote: > On Fri, Apr 23, 2021 at 3:37 PM Marios Andreou wrote: > > > > Hello TripleO o/ > > > > a massive ** THANK YOU! ** to everyone that took part in our PTG > meeting and especially the folks that prepared for 18 sessions across four > days. 
Participants were variable but at most I saw/noticed these numbers: > Monday:54, Tuesday:49, Wednesday:46, Thursday:42 > > > > I've included a condensed listing below [3] for convenience and you can > find the full schedule at [1] > > with session descriptions, links to individual topic etherpads and links > to the daily recordings. Note there is a problem with day 1 recording, I am > waiting to hear back but otherwise that's all we have right now (day 2/3/4 > are complete as far as i can see - didn't watch it all again!). > > > > Our "team photo" from Monday is at [2] apologies to the person who is > hidden behind the jitsi controls. In my defense we did this when we just > switched to meetpad after the initial zoom issues. For what it's worth, if > you send me a link to an updated photo I can add you in again ;) > > > > regards, marios > > > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > > [2] https://i.imgur.com/mxaGaSf.jpg > > > > [3] DAY 1: > > Topic: Wallaby cycle community retrospective Proposer: marios > > Topic: Plan/Swift removal update Proposer: cloudnull and dropped ramishra here ^^^ sorry ! folks better just refer directly to https://etherpad.opendev.org/p/tripleo-ptg-xena for the correct info seems my copy pasta in this email was not very good i dropped names from the presenters thanks > > Topic: Ephemeral Heat Update Proposer: slagle > > Topic: update and planned work networks & ports v2 "no heat" Proposer: > hjensas > Topic: Ceph Integration Update Proposer: gfidente, fmount > > apologies John: I dropped ", fultonj" on the ceph session in my copy/pasta > > thanks > -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Apr 23 14:18:08 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 23 Apr 2021 14:18:08 +0000 Subject: [Interop] Proposal to move weekly meeting 1 hour earlier Message-ID: Thanks Dmitry. You are correct it is for Interop WG not Ironic. Thanks for catching it. That is for Friday weekly Interop Meeting. Arkady From: Dmitry Tantsur Sent: Friday, April 23, 2021 9:12 AM To: Kanevsky, Arkady Cc: openstack-discuss at lists.openstack.org Subject: Re: [Ironic] Proposal to move weekly meeting 1 hour earlier [EXTERNAL EMAIL] I suspect you wanted to use [interop] tag, not [ironic] :) On Fri, Apr 23, 2021 at 4:10 PM Kanevsky, Arkady > wrote: Team, On today’s PTG Interop meeting we proposed to move weekly Friday meeting 1 hour earlier to 16:00 UTC. Please, respond if the proposed time will or not will work for you. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -- Red Hat GmbH, https://de.redhat.com/ [de.redhat.com] , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Fri Apr 23 14:24:22 2021 From: mkopec at redhat.com (Martin Kopec) Date: Fri, 23 Apr 2021 16:24:22 +0200 Subject: [Interop] Proposal to move weekly meeting 1 hour earlier In-Reply-To: References: Message-ID: +1 on having the meeting one hour earlier, thanks. On Fri, 23 Apr 2021 at 16:22, Kanevsky, Arkady wrote: > Thanks Dmitry. > > You are correct it is for Interop WG not Ironic. > > Thanks for catching it. 
> > > > That is for Friday weekly Interop Meeting. > > Arkady > > > > *From:* Dmitry Tantsur > *Sent:* Friday, April 23, 2021 9:12 AM > *To:* Kanevsky, Arkady > *Cc:* openstack-discuss at lists.openstack.org > *Subject:* Re: [Ironic] Proposal to move weekly meeting 1 hour earlier > > > > [EXTERNAL EMAIL] > > I suspect you wanted to use [interop] tag, not [ironic] :) > > > > On Fri, Apr 23, 2021 at 4:10 PM Kanevsky, Arkady > wrote: > > Team, > > On today’s PTG Interop meeting we proposed to move weekly Friday meeting 1 > hour earlier to 16:00 UTC. > > Please, respond if the proposed time will or not will work for you. > > > > Thanks, > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > > > > -- > > Red Hat GmbH, https://de.redhat.com/ [de.redhat.com] > > , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -- Martin Kopec -------------- next part -------------- An HTML attachment was scrubbed... URL: From owalsh at redhat.com Fri Apr 23 14:35:40 2021 From: owalsh at redhat.com (Oliver Walsh) Date: Fri, 23 Apr 2021 15:35:40 +0100 Subject: Create OpenStack VMs in few seconds In-Reply-To: References: <20210325150440.dkwn6gquyojumz6s@yuggoth.org> <20210419132620.noqzlfui7ycstkvc@yuggoth.org> Message-ID: On Thu, 22 Apr 2021 at 15:08, Donny Davis wrote: > FWIW I had some excellent startup times in Fort Nebula due to using local > storage backed by nvme drives. Once the cloud image is copied to the > hypervisor, startup's of the vms were usually measured in seconds. > Nova image pre-caching might be useful here: https://docs.openstack.org/nova/ussuri/admin/image-caching.html#image-pre-caching Cheers, Ollie > Not sure if that fits the requirements, but sub 30 second startups were > the norm. This time was including the actual connection from nodepool to > the instance. So I imagine the local start time was even faster. > > What is the requirement for startup times? > > On Mon, Apr 19, 2021 at 9:31 AM Jeremy Stanley wrote: > >> On 2021-04-19 16:31:24 +0530 (+0530), open infra wrote: >> > On Thu, Mar 25, 2021 at 8:38 PM Jeremy Stanley >> wrote: >> [...] >> > > The next best thing is basically what Nodepool[*] does: start new >> > > virtual machines ahead of time and keep them available in the >> > > tenant. This does of course mean you're occupying additional quota >> > > for whatever base "ready" capacity you've set for your various >> > > images/flavors, and that you need to be able to predict how many of >> > > what kinds of virtual machines you're going to need in advance. >> > > >> > > [*] https://zuul-ci.org/docs/nodepool/ >> > >> > Is it recommended to use nodepool in a production environment? >> >> I can't begin to guess what you mean by "in a production >> environment," but it forms the lifecycle management basis for our >> production CI/CD system (as it does for many other Zuul >> installations). In the case of the deployment I help run, it's >> continuously connected to over a dozen production clouds, both >> public and private. >> >> But anyway, I didn't say "use Nodepool." I suggested you look at >> "what Nodepool does" as a model for starting server instances in >> advance within the tenants/projects which regularly require instant >> access to new virtual machines. 
>> -- >> Jeremy Stanley >> > > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vhariria at redhat.com Fri Apr 23 15:13:52 2021 From: vhariria at redhat.com (Vida Haririan) Date: Fri, 23 Apr 2021 11:13:52 -0400 Subject: [interop] Proposal to move weekly meeting 1 hour earlier In-Reply-To: References: Message-ID: +1 On Fri, Apr 23, 2021 at 10:17 AM Dmitry Tantsur wrote: > I suspect you wanted to use [interop] tag, not [ironic] :) > > On Fri, Apr 23, 2021 at 4:10 PM Kanevsky, Arkady > wrote: > >> Team, >> >> On today’s PTG Interop meeting we proposed to move weekly Friday meeting >> 1 hour earlier to 16:00 UTC. >> >> Please, respond if the proposed time will or not will work for you. >> >> >> >> Thanks, >> >> Arkady Kanevsky, Ph.D. >> >> SP Chief Technologist & DE >> >> Dell Technologies office of CTO >> >> Dell Inc. One Dell Way, MS PS2-91 >> >> Round Rock, TX 78682, USA >> >> Phone: 512 7204955 >> >> >> > > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Fri Apr 23 15:32:20 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Fri, 23 Apr 2021 10:32:20 -0500 Subject: [keystone][victoria] keystone fernet token not found/recognized In-Reply-To: References: Message-ID: We've seen this error before when a proxy certificate was invalidly configured, it ended up messing up the request and keystone was getting an invalid token payload that it didn't recognize. On Fri, Apr 23, 2021 at 3:17 AM Shalabh Goel wrote: > > Hi, > > I have installed Openstack victoria on Ubuntu server 20.04. The issue is > that other services like nova and neutron are not able to authenticate with > the keystone service. In the keystone log, the following lines are > appearing. > > 2021-04-23 13:24:53.217 23331 WARNING > keystone.server.flask.application [req-2b575a77-2f9f-444b-9f31-061de41fd602 > f3571e3ee8d84fb0ba14bdbe2542916d 930b9693e7694aa68370352c7b911168 - default > default] Could not recognize Fernet token: > keystone.exception.TokenNotFound: Could not recognize Fernet token > > Also, I am able to run openstack client commands related to keystone on > the server which tells me that keystone related things are working fine. > But I am not able to execute commands like openstack compute service list. > I am not able to find anything on this after searching online. > > Can this be an issue with the way haproxy is configured? Since there are 2 > servers with keystone installed on them with pacemaker and haproxy. > > Please tell me what I can do to solve this issue. > > Thanks > > Shalabh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bekir.fajkovic at citynetwork.eu Fri Apr 23 06:24:05 2021 From: bekir.fajkovic at citynetwork.eu (Bekir Fajkovic) Date: Fri, 23 Apr 2021 08:24:05 +0200 Subject: Scheduling backups in Trove In-Reply-To: References: Message-ID: <2466322c572e931fd52e767684ee81e2@citynetwork.eu> Hello! A question regarding the best practices when it comes to scheduling backups: Is there any built-in mechanism implemented today in the service or do the customer or cloud service provider have to schedule the backup themselves? 
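As far as I know there is no built-in backup scheduler in current Trove releases, so the scheduling part is usually handled outside of Trove, by either the provider or the customer, with a periodic job that calls the normal backup API. A minimal sketch using cron and the trove CLI (instance name, user and openrc path are placeholders, and the argument order of backup-create has changed between python-troveclient releases, so check "trove help backup-create" for your version first):

  # /etc/cron.d/trove-nightly-backup
  # take a backup of the "prod-db" instance every night at 02:00
  0 2 * * *  dbops  . /home/dbops/openrc && trove backup-create prod-db "prod-db-$(date +\%F)"

The same job could use "openstack database backup create" instead, and it would also need to prune old backups itself, since as far as I can tell Trove does not expire them automatically.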
I see some proposals about implementing backup schedules through Mistral workflows: https://specs.openstack.org/openstack/trove-specs/specs/newton/scheduled-backup.html But i am not sure about the status of that. Best Regards Bekir Fajkovic Senior DBA Mobile: +46 70 019 48 47 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED ----- Original Message ----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Sat Apr 24 02:58:32 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 23 Apr 2021 19:58:32 -0700 Subject: [Interop] Proposal to move weekly meeting 1 hour earlier In-Reply-To: References: Message-ID: On Fri, Apr 23, 2021 at 7:24 AM Kanevsky, Arkady wrote: > Thanks Dmitry. > > You are correct it is for Interop WG not Ironic. > > Thanks for catching it. > > > > That is for Friday weekly Interop Meeting. > > Arkady > > > > *From:* Dmitry Tantsur > *Sent:* Friday, April 23, 2021 9:12 AM > *To:* Kanevsky, Arkady > *Cc:* openstack-discuss at lists.openstack.org > *Subject:* Re: [Ironic] Proposal to move weekly meeting 1 hour earlier > > > > [EXTERNAL EMAIL] > > I suspect you wanted to use [interop] tag, not [ironic] :) > > > > On Fri, Apr 23, 2021 at 4:10 PM Kanevsky, Arkady > wrote: > > Team, > > On today’s PTG Interop meeting we proposed to move weekly Friday meeting 1 > hour earlier to 16:00 UTC. > > Please, respond if the proposed time will or not will work for you. > > +1 works for me > > Thanks, > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > > > > -- > > Red Hat GmbH, https://de.redhat.com/ [de.redhat.com] > > , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Sat Apr 24 03:01:59 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 23 Apr 2021 22:01:59 -0500 Subject: [Interop] Proposal to move weekly meeting 1 hour earlier In-Reply-To: References: Message-ID: While I'm thinking of it, the wiki says PDT, but I don't think we're in Daylight Savings anymore. Should we just move that to UTC? On Apr 23 2021, at 9:58 pm, Goutham Pacha Ravi wrote: > > > On Fri, Apr 23, 2021 at 7:24 AM Kanevsky, Arkady wrote: > > Thanks Dmitry. > > > > You are correct it is for Interop WG not Ironic. > > Thanks for catching it. > > > > That is for Friday weekly Interop Meeting. > > Arkady > > > > From: Dmitry Tantsur > > Sent: Friday, April 23, 2021 9:12 AM > > To: Kanevsky, Arkady > > Cc: openstack-discuss at lists.openstack.org (mailto:openstack-discuss at lists.openstack.org) > > Subject: Re: [Ironic] Proposal to move weekly meeting 1 hour earlier > > > > > > > > > > [EXTERNAL EMAIL] > > I suspect you wanted to use [interop] tag, not [ironic] :) > > > > > > > > On Fri, Apr 23, 2021 at 4:10 PM Kanevsky, Arkady wrote: > > > Team, > > > > > > On today’s PTG Interop meeting we proposed to move weekly Friday meeting 1 hour earlier to 16:00 UTC. > > > Please, respond if the proposed time will or not will work for you. > > +1 works for me > > > > > > > > > > > > > Thanks, > > > Arkady Kanevsky, Ph.D. > > > SP Chief Technologist & DE > > > Dell Technologies office of CTO > > > Dell Inc. 
One Dell Way, MS PS2-91 > > > Round Rock, TX 78682, USA > > > Phone: 512 7204955 > > > > > > -- > > Red Hat GmbH, https://de.redhat.com/ [de.redhat.com] (https://urldefense.com/v3/__https:/de.redhat.com/__;!!LpKI!wCVrafnGLyCBhmrCcVZCh7_u_glH8A9p1qgwRCRJGZ-i50tz6uiRVQcx1mAQC5jSxVUj$) , Registered seat: Grasbrunn, > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmohanty at gmail.com Sat Apr 24 05:52:59 2021 From: dmohanty at gmail.com (Deepak Mohanty) Date: Fri, 23 Apr 2021 22:52:59 -0700 Subject: placement service issue (placement-status upgrade check failure) Message-ID: I need your help with installation of placement service on Ubuntu 20.04. I have followed the steps in: https://docs.openstack.org/placement/wallaby/install/install-ubuntu.html I am facing the following issue when I try to verify the installation: $ placement-status upgrade check Error: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/oslo_upgradecheck/upgradecheck.py", line 196, in run return conf.command.action_fn() File "/usr/lib/python3/dist-packages/oslo_upgradecheck/upgradecheck.py", line 104, in check result = func_name(self, **kwargs) File "/usr/lib/python3/dist-packages/oslo_upgradecheck/common_checks.py", line 41, in check_policy_json policy_path = conf.find_file(conf.oslo_policy.policy_file) File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2543, in find_file raise NotInitializedError() oslo_config.cfg.NotInitializedError: call expression on parser has not been invoked I did a bit of debugging and found that the policy file is: policy.json. I could not find any policy.json on my machine. I created an empty policy.json at: /etc/placement/policy.json (user and group = placement). That too did not work. I have followed the steps in the install guide twice, deleting the placement database and other resources and recreating them. I continue to see the same issue. Thank you for your time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Sat Apr 24 07:44:16 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Sat, 24 Apr 2021 09:44:16 +0200 Subject: placement service issue (placement-status upgrade check failure) In-Reply-To: References: Message-ID: On Fri, Apr 23, 2021 at 22:52, Deepak Mohanty wrote: > I need your help with installation of placement service on Ubuntu > 20.04. I have followed the steps in: > https://docs.openstack.org/placement/wallaby/install/install-ubuntu.html > > I am facing the following issue when I try to verify the installation: > $ placement-status upgrade check > Error: > Traceback (most recent call last): > File > "/usr/lib/python3/dist-packages/oslo_upgradecheck/upgradecheck.py", > line 196, in run > return conf.command.action_fn() > File > "/usr/lib/python3/dist-packages/oslo_upgradecheck/upgradecheck.py", > line 104, in check > result = func_name(self, **kwargs) > File > "/usr/lib/python3/dist-packages/oslo_upgradecheck/common_checks.py", > line 41, in check_policy_json > policy_path = conf.find_file(conf.oslo_policy.policy_file) > File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line > 2543, in find_file > raise NotInitializedError() > oslo_config.cfg.NotInitializedError: call expression on parser has > not been invoked I think you hit the bug [1]. 
The fix has been merged to master and a backport is proposed to stable/wallaby as well[2]. Cheers, gibi [1] https://storyboard.openstack.org/#!/story/2008831 [2] https://review.opendev.org/q/topic:story/2008831 > > I did a bit of debugging and found that the policy file is: > policy.json. I could not find any policy.json on my machine. I > created an empty policy.json at: /etc/placement/policy.json (user and > group = placement). That too did not work. > > I have followed the steps in the install guide twice, deleting the > placement database and other resources and recreating them. I > continue to see the same issue. > > Thank you for your time. From tjoen at dds.nl Sat Apr 24 09:17:17 2021 From: tjoen at dds.nl (tjoen) Date: Sat, 24 Apr 2021 11:17:17 +0200 Subject: placement service issue (placement-status upgrade check failure) In-Reply-To: References: Message-ID: On 4/24/21 9:44 AM, Balazs Gibizer wrote: > [1] https://storyboard.openstack.org/#!/story/2008831 > [2] https://review.opendev.org/q/topic:story/2008831 I haven't checked if it solved the problem: boto3 is a dependency of glance_store in [s3] section Resolved errors found in the logs. I got it working until status=ACTIVE in $ openstack server list and $ openstack console url show provider-instance but couldn't ping or login into the instance From pramchan at yahoo.com Sat Apr 24 15:32:52 2021 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Sat, 24 Apr 2021 15:32:52 +0000 (UTC) Subject: [Ironic] Proposal to move weekly meeting 1 hour earlier In-Reply-To: References: Message-ID: <1047425070.439737.1619278372302@mail.yahoo.com> LGTM - However my attending is based om several factors for next 6 weeks and will try whenever I can. Thx Prakash Sent from Yahoo Mail on Android On Fri, Apr 23, 2021 at 7:38 PM, Kanevsky, Arkady wrote: Team, On today’s PTG Interop meeting we proposed to move weekly Friday meeting 1 hour earlier to 16:00 UTC. Please, respond if the proposed time will or not will work for you.   Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955   -------------- next part -------------- An HTML attachment was scrubbed... URL: From xin-ran.wang at intel.com Sun Apr 25 09:33:43 2021 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Sun, 25 Apr 2021 09:33:43 +0000 Subject: [cyborg][ptg] Cyborg vPTG Summary April 2021 Message-ID: Hi all, Thanks for all your participation! We've conducted a successful meeting last week. Here is the aggregated summary from Cyborg vPTG discussion. Please check it out and feel free to feedback any concerns you might have. We did a retrospective of Wallaby release, including: * We supports more operations supported for a VM with accelerator attached. * We introduced new drivers for Intel x710 NIC and Inspur's NVMe SSD Card. * We implemented a new configuration file allowing more flexible device configuration. Topic discussion: Here's some major discussion and conclusion of Cyborg vPTG. For more details, please refer to the etherpad[1]. * More nova operation supporting: - We prioritized the tasks: 1. suspend/resume. 2. cold migration. 3. live migration. * vGPU support: - We reached an internal agreement on whole workflow which can be apply as a generic framework for mdev device. * API enhancement: Some of the following items requires a new micro-version. - Add refresh policy check for all APIs. - Add device profile update API. - Add ARQ query by multiple instances API. 
- Add disable/enable device API, this one requires a spec first. - Enhance device profile show API with more information. * Cleanup issue: - This issue comes from the case where one compute node shutdown accidently, and the accelerator records in placement and cyborg DB remains as the orphaned resources. We agreed to implement a mechanism to clean up the orphaned resources, this one also need a spec. [1] https://etherpad.opendev.org/p/cyborg-xena-ptg Thanks, Xin-Ran Wang -------------- next part -------------- An HTML attachment was scrubbed... URL: From xin-ran.wang at intel.com Sun Apr 25 09:39:25 2021 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Sun, 25 Apr 2021 09:39:25 +0000 Subject: Recall: [cyborg][ptg] Cyborg vPTG Summary April 2021 Message-ID: Wang, Xin-ran would like to recall the message, "[cyborg][ptg] Cyborg vPTG Summary April 2021". From xin-ran.wang at intel.com Sun Apr 25 09:45:28 2021 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Sun, 25 Apr 2021 09:45:28 +0000 Subject: Recall: [cyborg][ptg] Cyborg vPTG Summary April 2021 Message-ID: Wang, Xin-ran would like to recall the message, "[cyborg][ptg] Cyborg vPTG Summary April 2021". From xin-ran.wang at intel.com Sun Apr 25 09:55:41 2021 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Sun, 25 Apr 2021 09:55:41 +0000 Subject: [cyborg][ptg]Cyborg vPTG Summary April 2021 Message-ID: Hi all, Please ignore my previous message, it is sent to wrong person. Sorry about this. Thanks for all your participation! We've conducted a successful meeting last week. Here is the aggregated summary from Cyborg vPTG discussion. Please check it out and feel free to feedback any concerns you might have. We did a retrospective of Wallaby release, including: * We supports more operations supported for a VM with accelerator attached. * We introduced new drivers for Intel x710 NIC and Inspur's NVMe SSD Card. * We implemented a new configuration file allowing more flexible device configuration. Topic discussion: Here's some major discussion and conclusion of Cyborg vPTG. For more details, please refer to the etherpad[1]. * More nova operation supporting: - We prioritized the tasks: 1. suspend/resume. 2. cold migration. 3. live migration. * vGPU support: - We reached an internal agreement on whole workflow which can be apply as a generic framework for mdev device. * API enhancement: Some of the following items requires a new micro-version. - Add refresh policy check for all APIs. - Add device profile update API. - Add ARQ query by multiple instances API. - Add disable/enable device API, this one requires a spec first. - Enhance device profile show API with more information. * Cleanup issue: - This issue comes from the case where one compute node shutdown accidently, and the accelerator records in placement and cyborg DB remains as the orphaned resources. We agreed to implement a mechanism to clean up the orphaned resources, this one also need a spec. [1] https://etherpad.opendev.org/p/cyborg-xena-ptg Thanks, Xin-Ran Wang -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tkajinam at redhat.com Sun Apr 25 13:55:55 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Sun, 25 Apr 2021 22:55:55 +0900 Subject: [puppet] Propose retiring puppet-glare Message-ID: Hello, I'd like to propose retiring puppet-galre project, because the Glare[1] project looks inactive for a while based on the following three points - No actual development is made for 2 years - No release was made since the last Rocky release - setup.cfg is not maintained and the python versions listed are very outdated [1] https://opendev.org/x/glare I'll wait for 1-2 weeks to hear opinions from others. If anybody is interested in keeping the puppet-glare project or has intention to maintain Glare itself then please let me know. Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sun Apr 25 16:52:58 2021 From: zigo at debian.org (Thomas Goirand) Date: Sun, 25 Apr 2021 18:52:58 +0200 Subject: [puppet] Propose retiring puppet-glare In-Reply-To: References: Message-ID: <756dc26c-3c69-b979-02eb-2800176602cb@debian.org> On 4/25/21 3:55 PM, Takashi Kajinami wrote: > Hello, > > > I'd like to propose retiring puppet-galre project, because the Glare[1] > project > looks inactive for a while based on the following three points >  - No actual development is made for 2 years >  - No release was made since the last Rocky release >  - setup.cfg is not maintained and the python versions listed are very > outdated > > [1] https://opendev.org/x/glare > > I'll wait for 1-2 weeks to hear opinions from others. > If anybody is interested in keeping the puppet-glare project or has > intention to > maintain Glare itself then please let me know. > > Thank you, > Takashi +1 From honjo.rikimaru at ntt-tx.co.jp Mon Apr 26 01:15:33 2021 From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo) Date: Mon, 26 Apr 2021 10:15:33 +0900 Subject: [infra] How do I delete my account of review.opendev.org? In-Reply-To: <20210423115330.e2bjnkxywg7uzlzj@yuggoth.org> References: <6727c765-623e-119c-fe4a-75bf1873ebe7@ntt-tx.co.jp> <20210423115330.e2bjnkxywg7uzlzj@yuggoth.org> Message-ID: Hi Jeremy, Thank you for your information. I'd appreciate it if you set the following account to inactive state. masakari-integration-test (Email: masakari.integration.test at gmail.com) By the way, this account was used for my 3rd party CI. But I disabled the CI. So I'd like to disable this account. Best regards, On 2021/04/23 20:53, Jeremy Stanley wrote: > On 2021-04-23 15:39:32 +0900 (+0900), Rikimaru Honjo wrote: >> How do I delete my account of review.opendev.org? >> I couldn't find the UI or documents for it. >> >> I'm sorry if I overlooked it. > > Gerrit doesn't have an account deletion mechanism, however > administrators can set an account to inactive state so it is no > longer usable. I've done this for you now. > From fungi at yuggoth.org Mon Apr 26 01:48:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 26 Apr 2021 01:48:35 +0000 Subject: [infra] How do I delete my account of review.opendev.org? In-Reply-To: References: <6727c765-623e-119c-fe4a-75bf1873ebe7@ntt-tx.co.jp> <20210423115330.e2bjnkxywg7uzlzj@yuggoth.org> Message-ID: <20210426014834.hcwra2worc6yki2b@yuggoth.org> On 2021-04-26 10:15:33 +0900 (+0900), Rikimaru Honjo wrote: > Thank you for your information. > I'd appreciate it if you set the following account to inactive state. 
> > masakari-integration-test > (Email: masakari.integration.test at gmail.com) > > By the way, this account was used for my 3rd party CI. > But I disabled the CI. So I'd like to disable this account. [...] I have set the account with address masakari.integration.test at gmail.com inactive in Gerrit now. Thanks for following up, and let us know if you need anything else. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From honjo.rikimaru at ntt-tx.co.jp Mon Apr 26 04:26:30 2021 From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo) Date: Mon, 26 Apr 2021 13:26:30 +0900 Subject: [infra] How do I delete my account of review.opendev.org? In-Reply-To: <20210426014834.hcwra2worc6yki2b@yuggoth.org> References: <6727c765-623e-119c-fe4a-75bf1873ebe7@ntt-tx.co.jp> <20210423115330.e2bjnkxywg7uzlzj@yuggoth.org> <20210426014834.hcwra2worc6yki2b@yuggoth.org> Message-ID: Hi Jeremy, On 2021/04/26 10:48, Jeremy Stanley wrote: > On 2021-04-26 10:15:33 +0900 (+0900), Rikimaru Honjo wrote: >> Thank you for your information. >> I'd appreciate it if you set the following account to inactive state. >> >> masakari-integration-test >> (Email: masakari.integration.test at gmail.com) >> >> By the way, this account was used for my 3rd party CI. >> But I disabled the CI. So I'd like to disable this account. > [...] > > I have set the account with address > masakari.integration.test at gmail.com inactive in Gerrit now. Thanks > for following up, and let us know if you need anything else. > Thank you! I close this thread. Best regards, From zhangbailin at inspur.com Mon Apr 26 06:08:12 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Mon, 26 Apr 2021 06:08:12 +0000 Subject: =?gb2312?B?tPC4tDogW2N5Ym9yZ11bcHRnXUN5Ym9yZyB2UFRHIFN1bW1hcnkgQXByaWwg?= =?gb2312?Q?2021?= In-Reply-To: References: Message-ID: <6798ae34ef3e45a7b51f33e5bf42d89f@inspur.com> Awesome. Thanks Xinran. ――――――――――――――――――――――――――――――――――――――――――― Brin Zhang | 张百林 CBRD | 云计算与大数据研发部 T: 13203809727 E: zhangbailin at inspur.com 浪潮电子信息产业股份有限公司 Inspur Electronic Information Industry Co.,Ltd. 郑州市郑东新区心怡路278号基运投资大厦17层 17th Floor, , No.278 Xinyi Road, Zhengzhou City 浪潮云海 发件人: Wang, Xin-ran [mailto:xin-ran.wang at intel.com] 发送时间: 2021年4月25日 17:56 收件人: OpenStack Discuss 主题: [cyborg][ptg]Cyborg vPTG Summary April 2021 Hi all, Please ignore my previous message, it is sent to wrong person. Sorry about this. Thanks for all your participation! We’ve conducted a successful meeting last week. Here is the aggregated summary from Cyborg vPTG discussion. Please check it out and feel free to feedback any concerns you might have. We did a retrospective of Wallaby release, including: * We supports more operations supported for a VM with accelerator attached. * We introduced new drivers for Intel x710 NIC and Inspur’s NVMe SSD Card. * We implemented a new configuration file allowing more flexible device configuration. Topic discussion: Here's some major discussion and conclusion of Cyborg vPTG. For more details, please refer to the etherpad[1]. * More nova operation supporting: - We prioritized the tasks: 1. suspend/resume. 2. cold migration. 3. live migration. * vGPU support: - We reached an internal agreement on whole workflow which can be apply as a generic framework for mdev device. * API enhancement: Some of the following items requires a new micro-version. - Add refresh policy check for all APIs. 
- Add device profile update API. - Add ARQ query by multiple instances API. - Add disable/enable device API, this one requires a spec first. - Enhance device profile show API with more information. * Cleanup issue: - This issue comes from the case where one compute node shutdown accidently, and the accelerator records in placement and cyborg DB remains as the orphaned resources. We agreed to implement a mechanism to clean up the orphaned resources, this one also need a spec. [1] https://etherpad.opendev.org/p/cyborg-xena-ptg Thanks, Xin-Ran Wang -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 3558 bytes Desc: image001.jpg URL: From mkopec at redhat.com Mon Apr 26 08:00:00 2021 From: mkopec at redhat.com (Martin Kopec) Date: Mon, 26 Apr 2021 10:00:00 +0200 Subject: [qa][ptg] Summary of the discussed topics Message-ID: Hi all, here's a summary of the topics we have discussed during QA sessions: * Migration of devstack and Tempest tests to new secure RBAC There was a great discussion about this goal, many good points raised. See the PTG etherpad for more details. * Patrole execution time improvement This is up for a few cycles already and is missing a volunteer. We will discuss next steps towards this goal during one of the following QA office hour meetings. * What FeatureFreeze means for QA projects We have discussed the types of changes we should avoid merging during Feature Freeze. * Tempest plugins should use master tempest in their testing jobs We have noticed that many check/gate CI jobs use latest tempest tag (downloaded form pip) instead of the master branch we may potentially lead to approving changes which are not compatible with the master branch of tempest. * Fix backward compatibility of run-tempest role We deprecated couple of tempest's CLI args and replaced them by new ones in 26.1.0, however, as it turns out the backward compatibility conditions in run-tempest role are not sufficient. We will try to create a stable job variant to be used in jobs with older tempest versions. * Cleanup of duplicated scenario.manager Announcing tempest.scenario.manager a stable interface in tempest 27.0.0 raises a question when to remove the duplicated scenario.manager in plugins' codebase. * Whether to branch tempest No, we won't. We wrote down all pros and cons. The main fact against is that per an agreement all stable releases + master should be tested with the same set of tests. However, we will discuss branching for EM with the release team. Full discussion (PTG etherpad): https://etherpad.opendev.org/p/qa-xena-ptg QA's priority items for Xena cycle: https://etherpad.opendev.org/p/qa-xena-priority Regards, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon Apr 26 08:45:39 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 26 Apr 2021 10:45:39 +0200 Subject: [neutron] bug deputy report for week of 2021-04-19 Message-ID: Hello Neutrinos: This is the list of bugs opened during the last week. I would like to remark the two critical bugs and the high one, that are not assigned. Critical: - https://bugs.launchpad.net/neutron/+bug/1925451: “[stable/rocky] grenade job is broken”. Unassigned. - https://bugs.launchpad.net/neutron/+bug/1926109: “SSH timeout (wait timeout) due to potential paramiko issue“. 
Unassigned. High: - https://bugs.launchpad.net/neutron/+bug/1925498: “SNAT is not working“. Unassigned. - NOTE: I didn't have time to test if this issue is legit. IMO that's something that should have been detected years ago but I'm until this bug is triaged, I'll set it as "high". Medium: - https://bugs.launchpad.net/neutron/+bug/1925406: “[functional] run dsvm-functional locally will need ovn no matter the running cases are”. Unassigned. - https://bugs.launchpad.net/neutron/+bug/1916428: “dibbler tool for dhcpv6 is concluded“. Unassigned. - NOTE: I’m adding this bug into this list because this week a duplicate of this bug was filled. Worth mentioning it again. - https://bugs.launchpad.net/neutron/+bug/1925836: “[OVN] OVN metadata should not need to monitor the enterily Chassis table”. Assigned to Lucas. - https://review.opendev.org/c/openstack/neutron/+/787780 Low: - https://bugs.launchpad.net/neutron/+bug/1925218: “[OVN] BW limit QoS rules assigned to networks with SR-IOV ports are created on NBDB”. Assigned to Rodolfo. - https://review.opendev.org/c/openstack/neutron/+/787351 - https://bugs.launchpad.net/neutron/+bug/1925368: “[L3] Router GW can be removed with routes defined”. Unassigned. - NOTE: I would like to know if this is a valid issue or not. - https://bugs.launchpad.net/neutron/+bug/1925433: “[doc] SR-IOV config documentation has not been updated when SRIOV attach got implemented”. Assigned to Balazs. - https://review.opendev.org/c/openstack/neutron/+/787500 - https://bugs.launchpad.net/neutron/+bug/1925841: “"backref" warnings for "QosNetworkPolicyBinding.port" and "StandardAttribute.tags"“. Assigned to Rodolfo. - https://review.opendev.org/c/openstack/neutron/+/787782 Whishlist: - https://bugs.launchpad.net/neutron/+bug/1925528: “Improve "NeutronDbObject.objects_exist" performance“. Assigned to Rodolfo. - https://review.opendev.org/c/openstack/neutron/+/787681 Incomplete: - https://bugs.launchpad.net/neutron/+bug/1925213: “o-api seems to leak memory when ovn-octavia-provider is used” - https://bugs.launchpad.net/neutron/+bug/1926045: “Restrictions on FIP binding” - https://bugs.launchpad.net/neutron/+bug/1926049: “check_changed_vlans failed“ Undecided/unconfirmed: - https://bugs.launchpad.net/neutron/+bug/1925789: “neutron fwaas2 l3 - inconsistent order of jump rules“. FWaaS bug in Train. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Apr 26 09:16:14 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 26 Apr 2021 11:16:14 +0200 Subject: [release] Meeting Time Poll In-Reply-To: References: Message-ID: Hello everyone, As Thierry proposed during our PTG here is our new poll [1] about our meeting time. Indeed, we have a few regular attendees of the Release Management meeting who have conflicts with the previously chosen meeting time. As a result, we would like to find a new time to hold the meeting. I've created a Doodle poll [1] for everyone to give their input on times. It's mostly limited to times that reasonably overlap the working day in the US and Europe since that's where most of our attendees are located. If you attend the Release Management meeting, please fill out the poll so we can hopefully find a time that works better for everyone. For the sake of organization and to allow everyone to schedule his agenda accordingly, the poll will be closed on May 3rd. On that date, I will announce the time of this meeting and the date on which it will take effect. 
Notice that potentially that will force us to move our meeting on another day than Thursdays. I'll soon initiate our meeting tracking etherpad for Xena, and since we are at the beginning of a new series so we don't have a lot of topics to discuss, so I think that it could be worth waiting until next week to initiate our first meeting. Let me know if you are ok with that. That will allow us to plan it accordingly to the chosen meeting time. Thanks! [1] https://doodle.com/poll/2kcdh83r3hmwmxie Le mer. 7 avr. 2021 à 12:14, Herve Beraud a écrit : > Greetings, > > The poll is now terminated, everybody voted and we reached a consensus, > our new meeting time is at 2pm UTC on Thursdays. > > https://doodle.com/poll/ip6tg4fvznz7p3qx > > It will take effect from our next meeting, i.e tomorrow. > > I'm going to update our agenda accordingly. > > Thanks to everyone for your vote. > > Le mer. 31 mars 2021 à 17:55, Herve Beraud a écrit : > >> Hello deliveryers, >> >> Don't forget to vote for our new meeting time. >> >> Thank you >> >> Le ven. 26 mars 2021 à 13:43, Herve Beraud a écrit : >> >>> Hello >>> >>> We have a few regular attendees of the Release Management meeting who >>> have conflicts >>> with the current meeting time. As a result, we would like to find a new >>> time to hold the meeting. I've created a Doodle poll[1] for everyone to >>> give their input on times. It's mostly limited to times that reasonably >>> overlap the working day in the US and Europe since that's where most of >>> our attendees are located. >>> >>> If you attend the Release Management meeting, please fill out the poll >>> so we can hopefully find a time that works better for everyone. >>> >>> For the sake of organization and to allow everyone to schedule his >>> agenda accordingly, the poll will be closed on April 5th. On that date, >>> I will announce the time of this meeting and the date on which it will take >>> effect. >>> >>> Thanks! 
>>> >>> [1] https://doodle.com/poll/ip6tg4fvznz7p3qx >>> -- >>> Hervé Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> https://twitter.com/4383hberaud >>> -----BEGIN PGP SIGNATURE----- >>> >>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>> v6rDpkeNksZ9fFSyoY2o >>> =ECSj >>> -----END PGP SIGNATURE----- >>> >>> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g 
glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Mon Apr 26 09:25:40 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Mon, 26 Apr 2021 09:25:40 +0000 Subject: [puppet] Propose retiring puppet-glare In-Reply-To: References: Message-ID: <2cd342d33e7c4f5aaf4d799f3d8868a1@binero.com> +1 ________________________________ From: Takashi Kajinami Sent: Sunday, April 25, 2021 3:55:55 PM To: openstack-discuss Subject: [puppet] Propose retiring puppet-glare Hello, I'd like to propose retiring puppet-galre project, because the Glare[1] project looks inactive for a while based on the following three points - No actual development is made for 2 years - No release was made since the last Rocky release - setup.cfg is not maintained and the python versions listed are very outdated [1] https://opendev.org/x/glare I'll wait for 1-2 weeks to hear opinions from others. If anybody is interested in keeping the puppet-glare project or has intention to maintain Glare itself then please let me know. Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Mon Apr 26 10:01:18 2021 From: elfosardo at gmail.com (Riccardo Pittau) Date: Mon, 26 Apr 2021 12:01:18 +0200 Subject: [release][ironic] ironic-python-agent-builder release model change Message-ID: Hello fellow openstackers! During the recent xena ptg, the ironic community had a discussion about the need to move the ironic-python-agent-builder project from an independent model to the standard release model. When we initially split the builder from ironic-python-agent, we decided against it, but considering some problems we encountered during the road, the ironic community seems to be in favor of the change. The reasons for this are mainly to strictly align the image building project to ironic-python-agent releases, and ease dealing with the occasional upgrade of tinycore linux, the base image used to build the "tinyipa" ironic-python-agent ramdisk. We'd like to involve the release team to ask for advice, not only on the process, but also considering that we need to ask to cut the first branch for the wallaby stable release, and we know we're a bit late for that! :) Thank you in advance for your help! Riccardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Apr 26 13:06:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 26 Apr 2021 09:06:51 -0400 Subject: [cinder] reminder: this week's meeting in video+IRC Message-ID: <3f0ae2f1-e0db-a593-7368-69d2b6834e75@gmail.com> Quick reminder that this week's Cinder team meeting on Wednesday 28 April, being the final meeting of the month, will be held in both videoconference and IRC at the regularly scheduled time of 1400 UTC. These are the video meeting rules we've agreed to: * Everyone will keep IRC open during the meeting. * We'll take notes in IRC to leave a record similar to what we have for our regular IRC meetings. 
* Some people are more comfortable communicating in written English. So at any point, any attendee may request that the discussion of the current topic be conducted entirely in IRC. * The meeting will be recorded. connection info: https://bluejeans.com/3228528973 meeting agenda: https://etherpad.opendev.org/p/cinder-xena-meetings cheers, brian From rosmaita.fossdev at gmail.com Mon Apr 26 13:14:05 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 26 Apr 2021 09:14:05 -0400 Subject: [cinder] festival of mypy reviews 30 April 2021 Message-ID: Hello Cinder community members, As discussed at last week's PTG, we will be holding a special Cinder Festival of mypy Reviews at the end of this week on Friday 30 April. what: Cinder Festival of mypy Reviews when: Friday 30 April 2021 from 1400-1600 UTC where: https://meetpad.opendev.org/cinder-festival-of-reviews See you there! brian From juliaashleykreger at gmail.com Mon Apr 26 13:29:41 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 26 Apr 2021 06:29:41 -0700 Subject: [TripleO] overcloud node introspect failed In-Reply-To: References: Message-ID: Greetings, In all likelihood, the credentials are wrong for the baremetal node and the lock is being held by the conductor who is still trying to record the power state. The lock is an intentional behavior clients should retry if they encounter the lock. This is because BMC's often cannot handle concurrent requests. I would first manually verify: * That the nodes are not in maintenance state (openstack baremetal node show). The node last_error field may have a hint or indication to the actual error, but visit the next two bullet points. * That a power state of on or off has been recorded. If it has not been recorded, the supplied credentials or or access is correct. * If you're sure about the credentials, verify basic connectivity to the BMC address. Some BMCs are very particular about *how* the networking is configured, specifically to help limit attacks from the network itself. -Julia On Wed, Apr 21, 2021 at 7:25 PM Vinesh N wrote: > > hi, > i am facing an issue while introspect the bare metal nodes, > > error message > "4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf | 2021-04-22T01:41:32 | 2021-04-22T01:41:35 | Failed to set boot device to PXE: Failed to set boot device for node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf: Client Error for url: http://10.0.1.202:6385/v1/nodes/4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf/management/boot_device, Node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf is locked by host undercloud.localdomain, please retry after the current operation is completed" > > > (undercloud) [stack at undercloud ~]$ cat /etc/*release > CentOS Linux release 8.3.2011 > > ussuri version > > (undercloud) [stack at undercloud ~]$ openstack image list > /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! 
> RequestsDependencyWarning) > +--------------------------------------+------------------------+--------+ > | ID | Name | Status | > +--------------------------------------+------------------------+--------+ > | 8ddcd168-cc18-4ce2-97c5-c3502ac471a4 | overcloud-full | active | > | 8d9cfac9-400b-4570-b0b1-baeb175b16c4 | overcloud-full-initrd | active | > | c561f1d5-41ae-4599-81ea-de2c1e74eae7 | overcloud-full-vmlinuz | active | > +--------------------------------------+------------------------+--------+ > > Using the command to introspect the node, it was able to discover the node and I could provision the node boot via pxe, and load the image on the node. I could see the login prompt on the server, after some time of provision shut the node down. > > openstack overcloud node discover --range 10.0.40.5 --credentials admin:XXXX --introspect --provide > > /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! > RequestsDependencyWarning) > Successfully probed node IP 10.0.40.5 > Successfully registered node UUID 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf > /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! > RequestsDependencyWarning) > > PLAY [Baremetal Introspection for multiple Ironic Nodes] *********************** > 2021-04-22 07:04:28.978299 | 002590fe-0d22-76eb-1a70-000000000008 | TASK | Check for required inputs > 2021-04-22 07:04:29.002729 | 002590fe-0d22-76eb-1a70-000000000008 | SKIPPED | Check for required inputs | localhost | item=node_uuids > 2021-04-22 07:04:29.004468 | 002590fe-0d22-76eb-1a70-000000000008 | TIMING | Check for required inputs | localhost | 0:00:00.069134 | 0.0 > .... > .... > .... 
> > 2021-04-22 07:11:43.261714 | 002590fe-0d22-76eb-1a70-000000000016 | TASK | Nodes that failed introspection > 2021-04-22 07:11:43.296417 | 002590fe-0d22-76eb-1a70-000000000016 | FATAL | Nodes that failed introspection | localhost | error={ > "msg": " 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf" > } > 2021-04-22 07:11:43.297359 | 002590fe-0d22-76eb-1a70-000000000016 | TIMING | Nodes that failed introspection | localhost | 0:07:14.362025 | 0.03s > > NO MORE HOSTS LEFT ************************************************************* > > PLAY RECAP ********************************************************************* > localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=5 rescued=0 ignored=0 > 2021-04-22 07:11:43.301553 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 2021-04-22 07:11:43.302101 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 10 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 2021-04-22 07:11:43.302609 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Elapsed Time: 0:07:14.367265 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 2021-04-22 07:11:43.303162 | UUID | Info | Host | Task Name | Run Time > 2021-04-22 07:11:43.303740 | 002590fe-0d22-76eb-1a70-000000000014 | SUMMARY | localhost | Start baremetal introspection | 434.03s > 2021-04-22 07:11:43.304248 | 002590fe-0d22-76eb-1a70-000000000015 | SUMMARY | localhost | Nodes that passed introspection | 0.04s > 2021-04-22 07:11:43.304814 | 002590fe-0d22-76eb-1a70-000000000016 | SUMMARY | localhost | Nodes that failed introspection | 0.03s > 2021-04-22 07:11:43.305341 | 002590fe-0d22-76eb-1a70-000000000008 | SUMMARY | localhost | Check for required inputs | 0.03s > 2021-04-22 07:11:43.305854 | 002590fe-0d22-76eb-1a70-00000000000a | SUMMARY | localhost | Set node_uuids_intro fact | 0.02s > 2021-04-22 07:11:43.306397 | 002590fe-0d22-76eb-1a70-000000000010 | SUMMARY | localhost | Check if validation enabled | 0.02s > 2021-04-22 07:11:43.306904 | 002590fe-0d22-76eb-1a70-000000000012 | SUMMARY | localhost | Fail if validations are disabled | 0.02s > 2021-04-22 07:11:43.307379 | 002590fe-0d22-76eb-1a70-00000000000e | SUMMARY | localhost | Set concurrency fact | 0.02s > 2021-04-22 07:11:43.307913 | 002590fe-0d22-76eb-1a70-00000000000c | SUMMARY | localhost | Notice | 0.02s > 2021-04-22 07:11:43.308417 | 002590fe-0d22-76eb-1a70-000000000011 | SUMMARY | localhost | Run Validations | 0.02s > 2021-04-22 07:11:43.308926 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ End Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 2021-04-22 07:11:43.309423 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ State Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 2021-04-22 07:11:43.310021 | ~~~~~~~~~~~~~~~~~~ Number of nodes which did not deploy successfully: 1 ~~~~~~~~~~~~~~~~~ > 2021-04-22 07:11:43.310545 | The following node(s) had failures: localhost > 2021-04-22 07:11:43.311080 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > Ansible execution failed. 
playbook: /usr/share/ansible/tripleo-playbooks/cli-baremetal-introspect.yaml, Run Status: failed, Return Code: 2 > Exception occured while running the command > Traceback (most recent call last): > File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run > super(Command, self).run(parsed_args) > File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 41, in run > return super(Command, self).run(parsed_args) > File "/usr/lib/python3.6/site-packages/cliff/command.py", line 187, in run > return_code = self.take_action(parsed_args) or 0 > File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_node.py", line 462, in take_action > retry_timeout=parsed_args.retry_timeout, > File "/usr/lib/python3.6/site-packages/tripleoclient/workflows/baremetal.py", line 193, in introspect > "retry_timeout": retry_timeout, > File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 728, in run_ansible_playbook > raise RuntimeError(err_msg) > RuntimeError: Ansible execution failed. playbook: /usr/share/ansible/tripleo-playbooks/cli-baremetal-introspect.yaml, Run Status: failed, Return Code: 2 > Ansible execution failed. playbook: /usr/share/ansible/tripleo-playbooks/cli-baremetal-introspect.yaml, Run Status: failed, Return Code: 2 > > > (undercloud) [stack at undercloud ~]$ openstack baremetal introspection list > /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! > RequestsDependencyWarning) > +--------------------------------------+---------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | UUID | Started at | Finished at | Error | > +--------------------------------------+---------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf | 2021-04-22T01:41:32 | 2021-04-22T01:41:35 | Failed to set boot device to PXE: Failed to set boot device for node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf: Client Error for url: http://10.0.1.202:6385/v1/nodes/4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf/management/boot_device, Node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf is locked by host undercloud.localdomain, please retry after the current operation is completed. | > | 3d091348-e9c7-4e99-80e3-df72d332d935 | 2021-04-21T12:36:30 | 2021-04-21T12:36:32 | Failed to set boot device to PXE: Failed to set boot device for node 3d091348-e9c7-4e99-80e3-df72d332d935: Client Error for url: http://10.0.1.202:6385/v1/nodes/3d091348-e9c7-4e99-80e3-df72d332d935/management/boot_device, Node 3d091348-e9c7-4e99-80e3-df72d332d935 is locked by host undercloud.localdomain, please retry after the current operation is completed. 
| > +--------------------------------------+---------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ From balazs.gibizer at est.tech Sat Apr 24 07:41:11 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Sat, 24 Apr 2021 09:41:11 +0200 Subject: [nova][placement] Xena PTG results Message-ID: Hi, Thank you for joining the nova PTG! We recorded our agreements in the etherpad[1]. And I made an export of that etherpad (see attached) to avoid loosing the information. Cheers, gibi [1] https://etherpad.opendev.org/p/nova-xena-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From hanumantha.b at acldigital.com Sun Apr 25 11:58:13 2021 From: hanumantha.b at acldigital.com (Basireddy Hanumantha Reddy) Date: Sun, 25 Apr 2021 17:28:13 +0530 (IST) Subject: audit middleware logs Message-ID: <2026255938.54502.1619351893239.JavaMail.zimbra@acldigital.com> Hi, I am using pike version. I have configured nova service as mentioned in https://docs.openstack.org/keystonemiddleware/pike/audit.html 1. added in api-paste.ini file [ filter : audit ] paste . filter_factory = keystonemiddleware . audit : filter_factory audit_map_file = / etc / nova / api_audit_map . conf 2. verified composite:openstack_compute_api_v2 section. 3. Copied the api_audit_map.conf file to the /etc/nova/ After that I have restared all openstack services. But I couldn't find any audit logs with oslo.messaging.notification.audit.http.request in nova.log file I can see only logs related to oslo.messaging as openstack nova-scheduler[17534]: INFO oslo.messaging.notification.scheduler.select_destinations.start [None req-01612e5a-37d9-4b62-970a-fcf7d3c409ad admin admin] {"event_type": "scheduler.select_destinations.start", "timestamp": "2021-04-25 11:10:40.249260", "payload": {"request_spec": {"instance_properties": {"root_gb": 1, "uuid": "ee4ddae2-9397-401c-b80f-543061e75dd2", "availability_zone": null, "ephemeral_gb": 0, "numa_topology": null, "memory_mb": 512, "vcpus": 1, "project_id": "d078fd26a1204e6abede5faca777da8b", "pci_requests": {"requests": []}}, "instance_type": {"disabled": false, "root_gb": 1, "name": "m1.tiny", "flavorid": "1", "deleted": false, "created_at": "2021-04-22T17:34:26.000000", "ephemeral_gb": 0, "updated_at": null, "memory_mb": 512, "vcpus": 1, "extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "is_public": true, "deleted_at": null, "vcpu_weight": 0, "id": 6}, "image": {"status": "active", "name": "cirros-0.3.5-x86_64-disk", "container_format": "bare", "created_at": "2021-04-22T17:32:54.000000", "disk_format": "qcow2", "updated_at": "2021-04-22T17:32:55.000000", "id": "e0d062fa-2b3c-4a6e-90e4-3ef796c3a594", "min_disk": 0, "min_ram": 0, "checksum": "f8ab98ff5e73ebab884d80c9dc9c7290", "owner": "d078fd26a1204e6abede5faca777da8b", "properties": {}, "size": 13267968}, "num_instances": 1}}, "priority": "INFO", "publisher_id": "scheduler.openstack", "message_id": "3128ed95-1599-46f8-b110-93fcb0776812"} what I understood that as part of audit middleware, logs should contain oslo.messaging.notification.audit.http.request and oslo.messaging.notification.audit.http.response. Please suggest what I am missing. Thanks, Hanumantha Reddy. 
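A quick note on the most common cause of this symptom: defining the [filter:audit] section is not enough on its own - the audit filter also has to be inserted into the API pipeline that nova-api actually loads, otherwise requests never pass through it and no audit notifications are emitted. Below is a rough sketch of the two relevant pieces of /etc/nova/api-paste.ini; the surrounding filter names and the v21 pipeline name are only what a stock Pike file roughly looks like (assumptions, not taken from your system), so keep whatever your file already contains and just insert "audit" right before osapi_compute_app_v21:

    [filter:audit]
    paste.filter_factory = keystonemiddleware.audit:filter_factory
    audit_map_file = /etc/nova/api_audit_map.conf

    [composite:openstack_compute_api_v21]
    use = call:nova.api.auth:pipeline_factory_v21
    keystone = cors compute_req_id faultwrap sizelimit authtoken keystonecontext audit osapi_compute_app_v21

Since the scheduler.select_destinations notifications already show up in the logs, the oslo.messaging notification driver itself appears to be working, which also points at the pipeline rather than the notification configuration.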
-------------- next part -------------- An HTML attachment was scrubbed... URL: From xxxcloudlearner at gmail.com Mon Apr 26 04:40:19 2021 From: xxxcloudlearner at gmail.com (cloud learner) Date: Mon, 26 Apr 2021 10:10:19 +0530 Subject: rocky 2 node installation unable to access internet in instance Message-ID: Dear All, I am new to this, installed rocky on centos 7 with 2 node controller and compute nodes, followed https://docs.openstack.org/neutron/rocky/install/environment-networking-controller-rdo.html as mentioned in the document used vxlan and the Networking option 2: Self service network and used linuxbridge, verified all the steps as mentioned in the document and all things are working fine but i am unable to access the internet in instances. The network layout image attached herewith for reference there are two ports, one is for management and another is a provider network through which the internet is accessible. The provider network is 10.1.0.0/16 and the gateway is 10.1.1.1 which I have set in the external network.. When i run # ip netns command on controller node it shows qdhcp and qrouter [root at controller ~]# ip netns qrouter-2f84ff08-7293-4822-a4c1-57278d80f1cf (id: 2) qdhcp-186cc689-267b-49ca-90a7-b312fd0367f3 (id: 1) qdhcp-513292db-3eb8-4902-974f-4bb9c4757652 (id: 0) but when I run # ip netns command on compute it shows nothing. [root at controller ~]# openstack network agent list +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | 0286adac-2a32-4866-8b75-95f190c2c310 | Linux bridge agent | compute1 | None | :-) | UP | neutron-linuxbridge-agent | | 47728860-d7b0-4b15-b3dc-1f8a6bf81125 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent | | 4957d1b2-ea33-4034-9869-8518438416b0 | L3 agent | controller | nova | :-) | UP | neutron-l3-agent | | a81e4138-0190-41ef-b0a1-bd1b66c01343 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent | | ea43868d-8394-4cf5-a104-edc5ecbaff9e | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ Is there any need to create br-int or br-ex on nodes because that is not mentioned in the link which i have followed, if need kindly guide the steps to do that and on which node these steps need to follow. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- brq186cc689-26: flags=4163 mtu 1450 ether 36:3e:b7:51:3b:19 txqueuelen 1000 (Ethernet) RX packets 808 bytes 63976 (62.4 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 brq513292db-3e: flags=4163 mtu 1450 ether 12:c4:f3:c1:ec:e8 txqueuelen 1000 (Ethernet) RX packets 32 bytes 1810 (1.7 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens192: flags=4163 mtu 1500 inet 10.1.75.11 netmask 255.255.0.0 broadcast 10.1.255.255 inet6 fe80::fe47:9de8:51aa:64e8 prefixlen 64 scopeid 0x20 ether 00:0c:29:04:bb:3a txqueuelen 1000 (Ethernet) RX packets 2299253 bytes 649712257 (619.6 MiB) RX errors 0 dropped 987 overruns 0 frame 0 TX packets 196560 bytes 17836300 (17.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens224: flags=4163 mtu 1500 inet 192.168.81.175 netmask 255.255.255.0 broadcast 192.168.81.255 inet6 fe80::a800:74a2:292e:de75 prefixlen 64 scopeid 0x20 ether 00:0c:29:04:bb:44 txqueuelen 1000 (Ethernet) RX packets 3526143 bytes 11329973560 (10.5 GiB) RX errors 0 dropped 96 overruns 0 frame 0 TX packets 2599729 bytes 7262613751 (6.7 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 1000 (Local Loopback) RX packets 5350144 bytes 280975121 (267.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 5350144 bytes 280975121 (267.9 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tap037e23ee-6a: flags=4163 mtu 1450 inet6 fe80::fc16:3eff:fe5c:fbbb prefixlen 64 scopeid 0x20 ether fe:16:3e:5c:fb:bb txqueuelen 1000 (Ethernet) RX packets 1716 bytes 129014 (125.9 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1502 bytes 148238 (144.7 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tap47f07098-a2: flags=4163 mtu 1450 inet6 fe80::fc16:3eff:fe58:89ef prefixlen 64 scopeid 0x20 ether fe:16:3e:58:89:ef txqueuelen 1000 (Ethernet) RX packets 641 bytes 59131 (57.7 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 483 bytes 53900 (52.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 vxlan-39: flags=4163 mtu 1450 ether 36:3e:b7:51:3b:19 txqueuelen 1000 (Ethernet) RX packets 19263 bytes 1657518 (1.5 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 21107 bytes 1290443 (1.2 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 vxlan-51: flags=4163 mtu 1450 ether 12:c4:f3:c1:ec:e8 txqueuelen 1000 (Ethernet) RX packets 63 bytes 6172 (6.0 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 123 bytes 8653 (8.4 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 -------------- next part -------------- A non-text attachment was scrubbed... Name: Untitled.png Type: image/png Size: 26748 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Untitled1.png Type: image/png Size: 24394 bytes Desc: not available URL: -------------- next part -------------- brq186cc689-26: flags=4163 mtu 1450 inet6 fe80::10e1:83ff:feb7:a5cc prefixlen 64 scopeid 0x20 ether ba:49:b4:59:89:0b txqueuelen 1000 (Ethernet) RX packets 427 bytes 41788 (40.8 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 8 bytes 656 (656.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 brq4a468f83-b2: flags=4163 mtu 1450 inet6 fe80::f03a:ffff:fe4a:c256 prefixlen 64 scopeid 0x20 ether 3e:71:0f:66:4e:5e txqueuelen 1000 (Ethernet) RX packets 15986 bytes 448472 (437.9 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 8 bytes 656 (656.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 brq513292db-3e: flags=4163 mtu 1450 inet6 fe80::8cb:93ff:fed2:4ec5 prefixlen 64 scopeid 0x20 ether 7a:b2:22:85:09:ca txqueuelen 1000 (Ethernet) RX packets 32 bytes 1810 (1.7 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 8 bytes 656 (656.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens192: flags=4163 mtu 1500 inet 10.1.75.10 netmask 255.255.0.0 broadcast 10.1.255.255 inet6 fe80::20c:29ff:fef9:e9e prefixlen 64 scopeid 0x20 ether 00:0c:29:f9:0e:9e txqueuelen 1000 (Ethernet) RX packets 3042377 bytes 7121823630 (6.6 GiB) RX errors 0 dropped 333 overruns 0 frame 0 TX packets 1778260 bytes 206324820 (196.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens224: flags=4163 mtu 1500 inet 192.168.81.174 netmask 255.255.255.0 broadcast 192.168.81.255 inet6 fe80::20c:29ff:fef9:ea8 prefixlen 64 scopeid 0x20 ether 00:0c:29:f9:0e:a8 txqueuelen 1000 (Ethernet) RX packets 1379802 bytes 347810145 (331.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1409271 bytes 21594335725 (20.1 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 1000 (Local Loopback) RX packets 11707183 bytes 13466290899 (12.5 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 11707183 bytes 13466290899 (12.5 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tap179db067-39: flags=4163 mtu 1450 ether ce:a1:6f:c6:1c:cf txqueuelen 1000 (Ethernet) RX packets 68 bytes 7500 (7.3 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 130 bytes 10941 (10.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tap5b6879c9-b1: flags=4163 mtu 1450 ether d6:b3:64:af:08:e2 txqueuelen 1000 (Ethernet) RX packets 18644 bytes 1838142 (1.7 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 19906 bytes 1475452 (1.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tap82edd50b-82: flags=4163 mtu 1450 ether ba:49:b4:59:89:0b txqueuelen 1000 (Ethernet) RX packets 109 bytes 19486 (19.0 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 543 bytes 61281 (59.8 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tapf4160210-e5: flags=4163 mtu 1450 ether d2:b4:23:ca:c6:ac txqueuelen 1000 (Ethernet) RX packets 2903 bytes 122462 (119.5 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 vxlan-39: flags=4163 mtu 1450 ether e6:3c:04:3a:2a:6b txqueuelen 1000 (Ethernet) RX packets 20043 bytes 1209413 (1.1 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 18744 bytes 1594506 (1.5 MiB) TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0 vxlan-51: flags=4163 mtu 1450 ether 7a:b2:22:85:09:ca txqueuelen 1000 (Ethernet) RX packets 
123 bytes 8653 (8.4 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 63 bytes 6172 (6.0 KiB) TX errors 0 dropped 7 overruns 0 carrier 0 collisions 0 vxlan-123: flags=4163 mtu 1450 ether 3e:71:0f:66:4e:5e txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 15993 overruns 0 carrier 0 collisions 0 -------------- next part -------------- A non-text attachment was scrubbed... Name: Network layout.jpeg Type: image/jpeg Size: 112513 bytes Desc: not available URL: From C-Albert.Braden at charter.com Mon Apr 26 13:46:16 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Mon, 26 Apr 2021 13:46:16 +0000 Subject: [kolla] VM build fails after Train-Ussuri upgrade In-Reply-To: <13f2c3e3131a407c87d403d6cad3cd53@ncwmexgp009.CORP.CHARTERCOM.com> References: <13f2c3e3131a407c87d403d6cad3cd53@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: <98d0effb5f07449b8fb2098a4ca5b218@ncwmexgp009.CORP.CHARTERCOM.com> Can anyone help with this upgrade issue? From: Braden, Albert Sent: Monday, April 19, 2021 8:20 AM To: openstack-discuss at lists.openstack.org Subject: [kolla] VM build fails after Train-Ussuri upgrade I upgraded my Train test cluster to Ussuri following these instructions: OpenStack Docs: Operating Kolla The upgrade completed successfully with no failures, and the existing VMs are fine, but new VM build fails with rados.Rados.connect\nrados.PermissionDeniedError: Ubuntu Pastebin I'm running external ceph so I looked at this document: OpenStack Docs: External Ceph It says that I need the following in /etc/kolla/config/glance/ceph.conf: auth_cluster_required = cephx auth_service_required = cephx auth_client_required = cephx I didn't have that, so I added it and then redeployed, but still can't build VMs. I tried adding the same to all copies of ceph.conf and redeployed again, but that didn't help. Does anything else need to change in my ceph config when upgrading from Train to Ussuri? I see some cryptic talk about ceph in the release notes but it's not obvious what I'm being asked to change: OpenStack Docs: Ussuri Series Release Notes I read the bug that it refers to: Bug #1904062 "external ceph cinder volume config breaks volumes ..." : Bugs : kolla-ansible (launchpad.net) But I already have "backend_host=rbd:volumes" so I don't think I'm hitting that. Also I read these sections but I don't see anything obvious here that needs to be changed: * For cinder (cinder-volume and cinder-backup), glance-api and manila keyrings behavior has changed and Kolla Ansible deployment will not copy those keys using wildcards (ceph.*), instead will use newly introduced variables. Your environment may render unusable after an upgrade if your keys in /etc/kolla/config do not match default values for introduced variables. * The default behavior for generating the cinder.conf template has changed. An rbd-1 section will be generated when external Ceph functionality is used, i.e. cinder_backend_ceph is set to true. Previously it was only included when Kolla Ansible internal Ceph deployment mechanism was used. * The rbd section of nova.conf for nova-compute is now generated when nova_backend is set to "rbd". Previously it was only generated when both enable_ceph was "yes" and nova_backend was set to "rbd". My ceph keys have the default name and are in the default locations. I have cinder_backend_ceph: "yes". 
I don't have a nova_backend setting but I have nova_backend_ceph: "yes" I added nova_backend: "rbd" and redeployed and now I get a different error: rados.Rados.connect\nrados.ObjectNotFound Ubuntu Pastebin I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Mon Apr 26 13:50:25 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Mon, 26 Apr 2021 15:50:25 +0200 Subject: [keystone] Token In-Reply-To: References: Message-ID: <3EFE3E00-DBD1-4C94-8E04-8757453D90C7@poczta.onet.pl> Hello, really no one knows how to do it? Best regards, Adam > Wiadomość napisana przez Adam Tomas w dniu 23.04.2021, o godz. 14:54: > > Hi > Which CLI setting sets domain_id field in a token? I tried > > openstack —os-domain-id SOME_OS_COMMAND, > openstack —os-default-domain SOME_OS_COMMAND, > openstack —os-default-domain_id SOME_OS_COMMAND > > but none of them sets this field and policies checking domain_id:%(domain_id) don’t work because of that. Interesting thing is that horizon somehow generates token with domain_id set and everything works with the same policies, I have a problem only with CLI. Can user_domain_id (which is inside of every token is see for particular user) be used instead of domain_id? 
> > Example token from CLI: > 2021-04-23 12:16:38.090 700 DEBUG keystone.server.flask.request_processing.middleware.auth_context [req-117bc600-490e-46ae-a857-0c8d09dc1dbc 9adbxxxxb02ef 61d4xxxx9c0f - 3a08xxxx82c1 3a08xxxx82c1] RBAC: auth_context: {'token': , > 'domain_id': None, 'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_name': None, 'group_ids': [], 'user_id': '9adbxxxx02ef', 'user_domain_id': '3a08xxxx82c1', 'system_scope': None, 'project_id': '61d4xxxx9c0f', 'project_domain_id': '3a08xxxx82c1', 'roles': ['member', 'project_admin', 'reader', 'domain_admin'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} fill_context /var/lib/kolla/venv/lib/python3.8/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py:478 > > Example token from Horizon: > 2021-04-23 12:48:21.009 704 DEBUG keystone.server.flask.request_processing.middleware.auth_context [req-d6d89d3e-c3c1-48c0-b3ed-b3dcedb54db3 9adbxxxx02ef - 3a08xxxx82c1 3a08xxxx82c1 -] RBAC: auth_context: {'token': , 'domain_id': '3a08xxx82c1', 'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_name': ‚xxxx', 'group_ids': [], 'user_id': '9adbxxxx02ef', 'user_domain_id': '3a08xxxx82c1', 'system_scope': None, 'project_id': None, 'project_domain_id': None, 'roles': ['project_admin', 'member', 'reader', 'domain_admin'], 'is_admin_project': False, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} fill_context /var/lib/kolla/venv/lib/python3.8/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py:478 > > Best regards > Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From yingjisun at vmware.com Mon Apr 26 13:56:29 2021 From: yingjisun at vmware.com (Yingji Sun) Date: Mon, 26 Apr 2021 13:56:29 +0000 Subject: An compute service hang issue Message-ID: Buddies, Have you ever met such issue that the nova-compute service looks hanging there and not able to boot instance? When creating an instance, we can only see logs like below and there is NOT any information. 2021-04-23 02:39:00.252 1 DEBUG nova.compute.manager [req-a2ed90b6-792f-48e1-ba7f-e1e35d9537c2 373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e 1ee7955a2eaf4c86bcc3e650f2a8e2a7 - fc86abb50a684911a30f7955d386a3ea fc86abb50a684911a30f7955d386a3ea] [instance: e0e77edc-99b4-473e-b318-8e6f04428cda] Starting instance... _do_build_and_run_instance /usr/lib/python3.7/site-packages/nova/compute/manager.py:2202^[[00m 2021-04-23 03:03:04.927 1 DEBUG oslo_concurrency.lockutils [req-50659582-ad9b-4c17-bbb2-254d5e1141f9 373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e 1ee7955a2eaf4c86bcc3e650f2a8e2a7 - fc86abb50a684911a30f7955d386a3ea fc86abb50a684911a30f7955d386a3ea] Lock "a3619b50-3704-4f3e-b908-d525a41756eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.7/site-packages/oslo_concurrency/lockutils.py:327 There is NO other messages, including the periodically tasks. If I sent a request to create another instance, I can only see another _do_build_and_run_instance log. 
2021-04-23 03:03:05.061 1 DEBUG nova.compute.manager [req-50659582-ad9b-4c17-bbb2-254d5e1141f9 373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e 1ee7955a2eaf4c86bcc3e650f2a8e2a7 - fc86abb50a684911a30f7955d386a3ea fc86abb50a684911a30f7955d386a3ea] [instance: a3619b50-3704-4f3e-b908-d525a41756eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.7/site-packages/nova/compute/manager.py:2202^[[00m 2021-04-23 03:32:44.718 1 DEBUG oslo_concurrency.lockutils [req-c7857cb3-02ae-4c7b-92d7-b2ec178c1b13 373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e 1ee7955a2eaf4c86bcc3e650f2a8e2a7 - fc86abb50a684911a30f7955d386a3ea fc86abb50a684911a30f7955d386a3ea] Lock "3b07a6eb-12bb-4d19-9302-25ea3e746944" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.7/site-packages/oslo_concurrency/lockutils.py:327 I am sure the compute service is still running as I can see the heart-beat time changes correctly. From the rabbitmq message, It looks the code goes here nova/compute/manager.py def _build_and_run_instance(self, context, instance, image, injected_files, admin_password, requested_networks, security_groups, block_device_mapping, node, limits, filter_properties, request_spec=None): image_name = image.get('name') self._notify_about_instance_usage(context, instance, 'create.start', extra_usage_info={'image_name': image_name}) compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.START, bdms=block_device_mapping) as I can see a message with "event_type": "compute.instance.create.start" INFO:root:Body: {'oslo.version': '2.0', 'oslo.message': '{"message_id": "0ef11509-7a65-46b8-ac7e-ed6482fc527d", "publisher_id": "compute.compute01", "event_type": "compute.instance.create.start", "priority": "INFO", "payload": {"tenant_id": "1ee7955a2eaf4c86bcc3e650f2a8e2a7", "user_id": "373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e", "instance_id": "3b07a6eb-12bb-4d19-9302-25ea3e746944", "display_name": "yingji-06", "reservation_id": "r-f8iphdcf", "hostname": "yingji-06", "instance_type": "m1.tiny", "instance_type_id": 15, "instance_flavor_id": "4d5644e1-c561-4b6c-952c-f4dd93c87948", "architecture": null, "memory_mb": 512, "disk_gb": 1, "vcpus": 1, "root_gb": 1, "ephemeral_gb": 0, "host": null, "node": null, "availability_zone": "nova", "cell_name": "", "created_at": "2021-04-23 03:32:39+00:00", "terminated_at": "", "deleted_at": "", "launched_at": "", "image_ref_url": "https://192.168.111.160:9292//images/303a325f-048 This issue is not always reproducible and restarting the compute service can work around this. Could you please give any suggestion on how to resolve this issue or how I can investigate ? Yingji. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronal.mauricio.faraj.rodriguez at nttdata.com Mon Apr 26 14:02:00 2021 From: ronal.mauricio.faraj.rodriguez at nttdata.com (Ronal Mauricio Faraj Rodriguez) Date: Mon, 26 Apr 2021 14:02:00 +0000 Subject: [kolla] VM build fails after Train-Ussuri upgrade In-Reply-To: <98d0effb5f07449b8fb2098a4ca5b218@ncwmexgp009.CORP.CHARTERCOM.com> References: <13f2c3e3131a407c87d403d6cad3cd53@ncwmexgp009.CORP.CHARTERCOM.com> <98d0effb5f07449b8fb2098a4ca5b218@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: Hi, Did you check your keys files in nova, kvm and cinder generated by ceph to auth? 
Example to generate key file and then copy to compute: ceph auth get-or-create compute mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow allow rwx pool=compute, allow allow rwx pool=volumes, allow rx pool=images' -o /ceph.client.compute.keyring Hope this help you. Regards. De: Braden, Albert Enviado el: lunes, 26 de abril de 2021 15:46 Para: openstack-discuss at lists.openstack.org Asunto: RE: [kolla] VM build fails after Train-Ussuri upgrade everis Security Awareness - This is an incoming mail from an EXTERNAL DOMAIN. Please verify sender before you open attachments or access links. Can anyone help with this upgrade issue? From: Braden, Albert Sent: Monday, April 19, 2021 8:20 AM To: openstack-discuss at lists.openstack.org Subject: [kolla] VM build fails after Train-Ussuri upgrade I upgraded my Train test cluster to Ussuri following these instructions: OpenStack Docs: Operating Kolla The upgrade completed successfully with no failures, and the existing VMs are fine, but new VM build fails with rados.Rados.connect\nrados.PermissionDeniedError: Ubuntu Pastebin I'm running external ceph so I looked at this document: OpenStack Docs: External Ceph It says that I need the following in /etc/kolla/config/glance/ceph.conf: auth_cluster_required = cephx auth_service_required = cephx auth_client_required = cephx I didn't have that, so I added it and then redeployed, but still can't build VMs. I tried adding the same to all copies of ceph.conf and redeployed again, but that didn't help. Does anything else need to change in my ceph config when upgrading from Train to Ussuri? I see some cryptic talk about ceph in the release notes but it's not obvious what I'm being asked to change: OpenStack Docs: Ussuri Series Release Notes I read the bug that it refers to: Bug #1904062 "external ceph cinder volume config breaks volumes ..." : Bugs : kolla-ansible (launchpad.net) But I already have "backend_host=rbd:volumes" so I don't think I'm hitting that. Also I read these sections but I don't see anything obvious here that needs to be changed: * For cinder (cinder-volume and cinder-backup), glance-api and manila keyrings behavior has changed and Kolla Ansible deployment will not copy those keys using wildcards (ceph.*), instead will use newly introduced variables. Your environment may render unusable after an upgrade if your keys in /etc/kolla/config do not match default values for introduced variables. * The default behavior for generating the cinder.conf template has changed. An rbd-1 section will be generated when external Ceph functionality is used, i.e. cinder_backend_ceph is set to true. Previously it was only included when Kolla Ansible internal Ceph deployment mechanism was used. * The rbd section of nova.conf for nova-compute is now generated when nova_backend is set to "rbd". Previously it was only generated when both enable_ceph was "yes" and nova_backend was set to "rbd". My ceph keys have the default name and are in the default locations. I have cinder_backend_ceph: "yes". I don't have a nova_backend setting but I have nova_backend_ceph: "yes" I added nova_backend: "rbd" and redeployed and now I get a different error: rados.Rados.connect\nrados.ObjectNotFound Ubuntu Pastebin I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. 
If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Apr 26 14:41:08 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 26 Apr 2021 15:41:08 +0100 Subject: An compute service hang issue In-Reply-To: References: Message-ID: <4e5d3beb2f7921b3a494ca853621da5d59cda1f5.camel@redhat.com> On Mon, 2021-04-26 at 13:56 +0000, Yingji Sun wrote: > Buddies, > > Have you ever met such issue that the nova-compute service looks hanging there and not able to boot instance? > > When creating an instance, we can only see logs like below and there is NOT any information. > > > 2021-04-23 02:39:00.252 1 DEBUG nova.compute.manager [req-a2ed90b6-792f-48e1-ba7f-e1e35d9537c2 373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e 1ee7955a2eaf4c86bcc3e650f2a8e2a7 - fc86abb50a684911a30f7955d386a3ea fc86abb50a684911a30f7955d386a3ea] [instance: e0e77edc-99b4-473e-b318-8e6f04428cda] Starting instance... _do_build_and_run_instance /usr/lib/python3.7/site-packages/nova/compute/manager.py:2202^[[00m > > 2021-04-23 03:03:04.927 1 DEBUG oslo_concurrency.lockutils [req-50659582-ad9b-4c17-bbb2-254d5e1141f9 373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e 1ee7955a2eaf4c86bcc3e650f2a8e2a7 - fc86abb50a684911a30f7955d386a3ea fc86abb50a684911a30f7955d386a3ea] Lock "a3619b50-3704-4f3e-b908-d525a41756eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.7/site-packages/oslo_concurrency/lockutils.py:327 > > There is NO other messages, including the periodically tasks. > > If I sent a request to create another instance, I can only see another _do_build_and_run_instance log. > > > > > 2021-04-23 03:03:05.061 1 DEBUG nova.compute.manager [req-50659582-ad9b-4c17-bbb2-254d5e1141f9 373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e 1ee7955a2eaf4c86bcc3e650f2a8e2a7 - fc86abb50a684911a30f7955d386a3ea fc86abb50a684911a30f7955d386a3ea] [instance: a3619b50-3704-4f3e-b908-d525a41756eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.7/site-packages/nova/compute/manager.py:2202^[[00m > > 2021-04-23 03:32:44.718 1 DEBUG oslo_concurrency.lockutils [req-c7857cb3-02ae-4c7b-92d7-b2ec178c1b13 373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e 1ee7955a2eaf4c86bcc3e650f2a8e2a7 - fc86abb50a684911a30f7955d386a3ea fc86abb50a684911a30f7955d386a3ea] Lock "3b07a6eb-12bb-4d19-9302-25ea3e746944" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.7/site-packages/oslo_concurrency/lockutils.py:327 > > I am sure the compute service is still running as I can see the heart-beat time changes correctly. 
> > From the rabbitmq message, It looks the code goes here > > nova/compute/manager.py > > def _build_and_run_instance(self, context, instance, image, injected_files, >         admin_password, requested_networks, security_groups, >         block_device_mapping, node, limits, filter_properties, >         request_spec=None): > >     image_name = image.get('name') >     self._notify_about_instance_usage(context, instance, 'create.start', >             extra_usage_info={'image_name': image_name}) >     compute_utils.notify_about_instance_create( >         context, instance, self.host, >         phase=fields.NotificationPhase.START, >         bdms=block_device_mapping) > > as I can see a message with "event_type": "compute.instance.create.start" > > > INFO:root:Body: {'oslo.version': '2.0', 'oslo.message': '{"message_id": "0ef11509-7a65-46b8-ac7e-ed6482fc527d", "publisher_id": "compute.compute01", "event_type": "compute.instance.create.start", "priority": "INFO", "payload": {"tenant_id": "1ee7955a2eaf4c86bcc3e650f2a8e2a7", "user_id": "373cb863547407cf3b99034b3b66395e76c137b40f905e7a61e25b1f97df4f3e", "instance_id": "3b07a6eb-12bb-4d19-9302-25ea3e746944", "display_name": "yingji-06", "reservation_id": "r-f8iphdcf", "hostname": "yingji-06", "instance_type": "m1.tiny", "instance_type_id": 15, "instance_flavor_id": "4d5644e1-c561-4b6c-952c-f4dd93c87948", "architecture": null, "memory_mb": 512, "disk_gb": 1, "vcpus": 1, "root_gb": 1, "ephemeral_gb": 0, "host": null, "node": null, "availability_zone": "nova", "cell_name": "", "created_at": "2021-04-23 03:32:39+00:00", "terminated_at": "", "deleted_at": "", "launched_at": "", "image_ref_url": "https://192.168.111.160:9292//images/303a325f-048 > > > This issue is not always reproducible and restarting the compute service can work around this. > > Could you please give any suggestion on how to resolve this issue or how I can investigate ? i assume this is with the vmware driver? it kind of sounds like eventlet is not monkey patching properly and its blocking on a call or something like that. we have seen this in the pass when talking to libvirt where we were not properly proxying calls into the libvirt lib and as a result we would end up blocking in the compute agent when making some external calls to libvirt. i wonder if you are seing something similar? > > Yingji. > > From skaplons at redhat.com Mon Apr 26 14:44:51 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 26 Apr 2021 16:44:51 +0200 Subject: [neutron] April 2021 - PTG Summary Message-ID: <10568268.U2WvfeDNJK@p1> Hi, Below is my summary of the Neutron sessions during the PTG last week. Etherpad with agenda and all notes from the sessions is available at https:// etherpad.opendev.org/p/neutron-xena-ptg If You want to discuss some topic in more details, please start new thread for it to keep this one clean. ## Day 1 (Monday) ### Retrospective of the Wallaby cycle Good things: * we accomplished our community goals this cycle, like e.g. rootwrap to privsep migration, * migration to the secure RBAC, * many performance improvements, * CI improvements and the job simplification/reducing, * stable team, and a lot of small patches from all around the world, * OVN feature gap with OVS is shrinking, * team support to implement address groups, including RBAC support. 
In the list of not-so-good things we mentioned:
* during the previous PTG we were talking about some OVN related guides/sessions but we didn't make any,
* networking-midonet - (yet another) stadium project deprecated due to lack of maintainers; still, it's better to have such projects clearly marked as out of the stadium than to keep false expectations - we can bring them back when needed anyway,
* still unstable CI, mainly tempest and tempest-plugin jobs, and during this release also functional tests.

As for action items for the _Xena_ cycle, we listed a couple of things:
* Miguel wants to do the OVN knowledge share internally in his company and later share the material,
* we will keep working on improvements to our CI - the list of all opened CI related bugs can be found at https://tinyurl.com/neutron-gate-failures and we will work on reducing that list in the next months,
* Rodolfo and Miguel will follow up on Nova's plans to drop eventlet and move to threading, and will investigate if we can try to do something similar in Neutron,
* Akihiro raised a good point about ML2/OVN and stadium projects: we should clarify the support plan for OVN in all of our stadium projects. Lajos and Lucas volunteered to work on that.

### Secure RBAC - testing - discussion with QA team

As the next session of the first day we attended the QA team's session about testing the new secure RBAC policies. A summary of that discussion can be found in the etherpad: https://etherpad.opendev.org/p/qa-xena-ptg. The QA team wants to switch each project to enforce the new defaults and system scope tokens in Devstack. We will have to monitor that and check what needs to be fixed on our side to accomplish it.

### Deprecated hybrid plug and the iptables_hybrid firewall driver

Bence proposed a discussion about potential deprecation of the iptables_hybrid firewall driver, as we now have the native openvswitch driver too. After discussion we decided that we are not going to deprecate it, at least for now. There are known differences between those two drivers and there are still many use cases where people want to use the iptables_hybrid driver.

### Future of the Linuxbridge backend

As the last topic of the first day we discussed the future of the Linuxbridge backend and ML2 driver. The same topic was discussed at an earlier PTG (see http://kaplonski.pl/blog/shanghai_ptg_summary/) and back then, based on the feedback from operators, we decided that we are not going to deprecate this backend anytime soon as many people are still using it. But now we are a couple of cycles later and things didn't change a lot. We still don't have anyone who would like to maintain it. In most cases it works fine, but there are also quite a few bugs reported for that backend which don't have owners - see https://tinyurl.com/linuxbridge-bugs. Things may also change in the future, e.g. removal of the legacy iptables from the main distributions may make it harder to keep working. We again had feedback, e.g. from Jon, that NASA is using that driver and wants to keep using it. On the other hand, in core Neutron we currently don't have anyone interested in maintaining it. The outcome of the discussion is that we will again raise that topic and ask operators:
- who is using it and what use cases are addressed by that driver - maybe we can help with proposing another solution,
- who is willing to help with the maintenance of that backend.
If You are using the Linuxbridge backend, please give us such feedback.
For Xena the status is that we still do our best to keep the Linuxbridge backend running and fully supported, but we will also revisit its status again at the next PTG.

## Day 2 (Tuesday)

### Continuation of "L3 feature improvements"

During the second day of the PTG we had very interesting discussions about L3 improvements. We discussed a bunch of RFEs proposed by Bence, Lajos and Manu:
* [RFE] Add BFD support for Neutron: https://bugs.launchpad.net/neutron/+bug/1907089 and the spec: https://review.opendev.org/c/openstack/neutron-specs/+/767337
* [RFE] Allow explicit management of default routes: https://bugs.launchpad.net/neutron/+bug/1921126 and the spec: https://review.opendev.org/c/openstack/neutron-specs/+/781475
* Allow multiple external gateways on a router: https://bugs.launchpad.net/neutron/+bug/1905295 and the spec: https://review.opendev.org/c/openstack/neutron-specs/+/779511
* [RFE] Enhancement to Neutron BGPaaS to directly support Neutron Routers & bgp-peering from such routers over internal & external Neutron Networks: https://bugs.launchpad.net/neutron/+bug/1921461 and the spec: https://review.opendev.org/c/openstack/neutron-specs/+/783791
* [RFE] BFD for BGP Dynamic Routing: https://bugs.launchpad.net/neutron/+bug/1922716 and the spec: https://review.opendev.org/c/openstack/neutron-specs/+/785349. That one still needs to be discussed and approved during the drivers meeting.

The general outcome of the discussion is that during the Xena cycle we at least want to have all those specs merged so that there is agreement about the implementation details. Once that is done, Bence, Lajos and others from Ericsson will work on the implementation of those RFEs.

### Tap-as-a-service (taas) project

The next topic we discussed during Tuesday's session was the Tap-as-a-service project: https://opendev.org/x/tap-as-a-service. A few cycles back we were thinking about including it in the Neutron stadium but finally that
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL:

From bkslash at poczta.onet.pl Mon Apr 26 14:50:44 2021
From: bkslash at poczta.onet.pl (Adam Tomas)
Date: Mon, 26 Apr 2021 16:50:44 +0200
Subject: [Vitrage] Multiregion
In-Reply-To:
References: <4D95094A-DA7C-4FE7-A207-EAE1593F342A@poczta.onet.pl>
Message-ID: <82838909-EE0D-49FC-99CA-F143F2ED9944@poczta.onet.pl>

Hi,
after deploying Vitrage in a multi-region environment, Horizon always uses the first public Vitrage endpoint (I have one public endpoint in each region) regardless of whether I'm logged in to the first or the second region. So in both regions I see exactly the same entity map, etc. (from the first region). When I disable this endpoint, Horizon uses the next one - and again I see the same things in both regions, but this time from the second region. Horizon should check which region I'm logged in to and display Vitrage data for that region - right? So what's wrong?

Best regards
Adam Tomas

From Arkady.Kanevsky at dell.com Mon Apr 26 14:57:19 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Mon, 26 Apr 2021 14:57:19 +0000
Subject: [Interop] Proposal to move weekly meeting 1 hour earlier
In-Reply-To:
References:
Message-ID:

I will update the WIKI with UTC time. We will still need to update it when daylight saving time changes happen, which occur on different dates in different countries.
From: Jimmy McArthur Sent: Friday, April 23, 2021 10:02 PM To: Goutham Pacha Ravi Cc: Kanevsky, Arkady; openstack-discuss at lists.openstack.org Subject: Re: [Interop] Proposal to move weekly meeting 1 hour earlier [EXTERNAL EMAIL] While I'm thinking of it, the wiki says PDT, but I don't think we're in Daylight Savings anymore. Should we just move that to UTC? On Apr 23 2021, at 9:58 pm, Goutham Pacha Ravi > wrote: On Fri, Apr 23, 2021 at 7:24 AM Kanevsky, Arkady > wrote: Thanks Dmitry. You are correct it is for Interop WG not Ironic. Thanks for catching it. That is for Friday weekly Interop Meeting. Arkady From: Dmitry Tantsur > Sent: Friday, April 23, 2021 9:12 AM To: Kanevsky, Arkady Cc: openstack-discuss at lists.openstack.org Subject: Re: [Ironic] Proposal to move weekly meeting 1 hour earlier [EXTERNAL EMAIL] I suspect you wanted to use [interop] tag, not [ironic] :) On Fri, Apr 23, 2021 at 4:10 PM Kanevsky, Arkady > wrote: Team, On today’s PTG Interop meeting we proposed to move weekly Friday meeting 1 hour earlier to 16:00 UTC. Please, respond if the proposed time will or not will work for you. +1 works for me Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -- Red Hat GmbH, https://de.redhat.com/ [de.redhat.com] , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From vineshnellaiappan at gmail.com Mon Apr 26 14:53:11 2021 From: vineshnellaiappan at gmail.com (Vinesh N) Date: Mon, 26 Apr 2021 20:23:11 +0530 Subject: [TripleO] overcloud node introspect failed In-Reply-To: References: Message-ID: Thanks for your suggestion, here are the details, 1) ipmitool with password, 2) node show details 3) node validate 4) driver list *(undercloud) [stack at n001 ~]$ ipmitool -I lanplus -H 10.0.40.6 -L ADMINISTRATOR -p 623 -U admin -R 1 -N 5 -P *** lan print 1* Set in Progress : Set Complete Auth Type Support : NONE MD2 MD5 PASSWORD Auth Type Enable : Callback : MD2 MD5 PASSWORD : User : MD2 MD5 PASSWORD : Operator : MD2 MD5 PASSWORD : Admin : MD2 MD5 PASSWORD : OEM : MD2 MD5 PASSWORD IP Address Source : Static Address IP Address : 10.0.40.6 Subnet Mask : 255.255.0.0 MAC Address : 0c:c4:7a:3c:c0:b9 SNMP Community String : public IP Header : TTL=0x00 Flags=0x00 Precedence=0x00 TOS=0x00 BMC ARP Control : ARP Responses Enabled, Gratuitous ARP Disabled Default Gateway IP : 0.0.0.0 Default Gateway MAC : 00:00:00:00:00:00 Backup Gateway IP : 0.0.0.0 Backup Gateway MAC : 00:00:00:00:00:00 802.1q VLAN ID : Disabled 802.1q VLAN Priority : 0 RMCP+ Cipher Suites : 1,2,3,6,7,8,11,12 Cipher Suite Priv Max : XaaaXXaaaXXaaXX : X=Cipher Suite Unused : c=CALLBACK : u=USER : o=OPERATOR : a=ADMIN : O=OEM Bad Password Threshold : Not Available *(undercloud) [stack at n001 ~]$ openstack baremetal node show 23a17f0f-d683-4f90-8696-f947485900f9* /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! 
RequestsDependencyWarning) +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | allocation_uuid | None | | automated_clean | None | | bios_interface | no-bios | | boot_interface | ipxe | | chassis_uuid | None | | clean_step | {} | | conductor | n001.localdomain | | conductor_group | | | console_enabled | False | | console_interface | ipmitool-socat | | created_at | 2021-04-26T08:22:49+00:00 | | deploy_interface | direct | | deploy_step | {} | | description | None | | driver | ipmi | | driver_info | {'deploy_kernel': 'file:///var/lib/ironic/httpboot/agent.kernel', 'rescue_kernel': 'file:///var/lib/ironic/httpboot/agent.kernel', 'deploy_ramdisk': 'file:///var/lib/ironic/httpboot/agent.ramdisk', 'rescue_ramdisk': 'file:///var/lib/ironic/httpboot/agent.ramdisk', 'ipmi_address': '10.0.1.11', 'ipmi_username': 'admin', 'ipmi_password': '******', 'ipmi_port': 623} | | driver_internal_info | {} | | extra | {} | | fault | None | | inspect_interface | inspector | | inspection_finished_at | None | | inspection_started_at | None | | instance_info | {} | | instance_uuid | None | | last_error | None | | lessee | None | | maintenance | False | | maintenance_reason | None | | management_interface | ipmitool | | name | None | | network_data | {} | | network_interface | flat | | owner | None | | power_interface | ipmitool | | power_state | power off | | properties | {'capabilities': 'boot_option:local', 'vendor': 'supermicro'} | | protected | False | | protected_reason | None | | provision_state | manageable | | provision_updated_at | 2021-04-26T08:22:55+00:00 | | raid_config | {} | | raid_interface | no-raid | | rescue_interface | agent | | reservation | None | | resource_class | baremetal | | retired | False | | retired_reason | None | | storage_interface | noop | | target_power_state | None | | target_provision_state | None | | target_raid_config | {} | | traits | [] | | updated_at | 2021-04-26T08:35:27+00:00 | | uuid | 23a17f0f-d683-4f90-8696-f947485900f9 | | vendor_interface | ipmitool | +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ and here is the validate output, i could see the pxe assign ipaddress to node and loaded the operating system. *(undercloud) [stack at n001 ~]$ openstack baremetal node validate 23a17f0f-d683-4f90-8696-f947485900f9* /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! 
RequestsDependencyWarning) +------------+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Interface | Result | Reason | +------------+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | bios | False | Driver ipmi does not support bios (disabled or not implemented). | | boot | False | Cannot validate image information for node 23a17f0f-d683-4f90-8696-f947485900f9 because one or more parameters are missing from its instance_info and insufficent information is present to boot from a remote volume. Missing are: ['image_source', 'kernel', 'ramdisk'] | | console | False | Either missing 'ipmi_terminal_port' parameter in node's driver_info or [console]port_range is not configured | | deploy | False | Cannot validate image information for node 23a17f0f-d683-4f90-8696-f947485900f9 because one or more parameters are missing from its instance_info and insufficent information is present to boot from a remote volume. Missing are: ['image_source', 'kernel', 'ramdisk'] | | inspect | True | | | management | True | | | network | True | | | power | True | | | raid | False | Driver ipmi does not support raid (disabled or not implemented). | | rescue | False | Cannot validate image information for node 23a17f0f-d683-4f90-8696-f947485900f9 because one or more parameters are missing from its instance_info and insufficent information is present to boot from a remote volume. Missing are: ['image_source', 'kernel', 'ramdisk'] | | storage | True | | +------------+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ *(undercloud) [stack at n001 ~]$ openstack baremetal driver list* /usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version! RequestsDependencyWarning) +---------------------+------------------+ | Supported driver(s) | Active host(s) | +---------------------+------------------+ | idrac | n001.localdomain | | ilo | n001.localdomain | | ipmi | n001.localdomain | | redfish | n001.localdomain | +---------------------+------------------+ -Vinesh On Mon, Apr 26, 2021 at 6:59 PM Julia Kreger wrote: > Greetings, > > In all likelihood, the credentials are wrong for the baremetal node > and the lock is being held by the conductor who is still trying to > record the power state. The lock is an intentional behavior clients > should retry if they encounter the lock. This is because BMC's often > cannot handle concurrent requests. > > I would first manually verify: > > * That the nodes are not in maintenance state (openstack baremetal > node show). The node last_error field may have a hint or indication to > the actual error, but visit the next two bullet points. > * That a power state of on or off has been recorded. If it has not > been recorded, the supplied credentials or or access is correct. 
> * If you're sure about the credentials, verify basic connectivity to > the BMC address. Some BMCs are very particular about *how* the > networking is configured, specifically to help limit attacks from the > network itself. > > -Julia > > > On Wed, Apr 21, 2021 at 7:25 PM Vinesh N > wrote: > > > > hi, > > i am facing an issue while introspect the bare metal nodes, > > > > error message > > "4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf | 2021-04-22T01:41:32 | > 2021-04-22T01:41:35 | Failed to set boot device to PXE: Failed to set boot > device for node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf: Client Error for url: > http://10.0.1.202:6385/v1/nodes/4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf/management/boot_device, > Node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf is locked by host > undercloud.localdomain, please retry after the current operation is > completed" > > > > > > (undercloud) [stack at undercloud ~]$ cat /etc/*release > > CentOS Linux release 8.3.2011 > > > > ussuri version > > > > (undercloud) [stack at undercloud ~]$ openstack image list > > /usr/lib/python3.6/site-packages/requests/__init__.py:91: > RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't > match a supported version! > > RequestsDependencyWarning) > > > +--------------------------------------+------------------------+--------+ > > | ID | Name | Status > | > > > +--------------------------------------+------------------------+--------+ > > | 8ddcd168-cc18-4ce2-97c5-c3502ac471a4 | overcloud-full | active > | > > | 8d9cfac9-400b-4570-b0b1-baeb175b16c4 | overcloud-full-initrd | active > | > > | c561f1d5-41ae-4599-81ea-de2c1e74eae7 | overcloud-full-vmlinuz | active > | > > > +--------------------------------------+------------------------+--------+ > > > > Using the command to introspect the node, it was able to discover the > node and I could provision the node boot via pxe, and load the image on the > node. I could see the login prompt on the server, after some time of > provision shut the node down. > > > > openstack overcloud node discover --range 10.0.40.5 --credentials > admin:XXXX --introspect --provide > > > > /usr/lib/python3.6/site-packages/requests/__init__.py:91: > RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't > match a supported version! > > RequestsDependencyWarning) > > Successfully probed node IP 10.0.40.5 > > Successfully registered node UUID 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf > > /usr/lib/python3.6/site-packages/requests/__init__.py:91: > RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't > match a supported version! > > RequestsDependencyWarning) > > > > PLAY [Baremetal Introspection for multiple Ironic Nodes] > *********************** > > 2021-04-22 07:04:28.978299 | 002590fe-0d22-76eb-1a70-000000000008 | > TASK | Check for required inputs > > 2021-04-22 07:04:29.002729 | 002590fe-0d22-76eb-1a70-000000000008 | > SKIPPED | Check for required inputs | localhost | item=node_uuids > > 2021-04-22 07:04:29.004468 | 002590fe-0d22-76eb-1a70-000000000008 | > TIMING | Check for required inputs | localhost | 0:00:00.069134 | 0.0 > > .... > > .... > > .... 
> > > > 2021-04-22 07:11:43.261714 | 002590fe-0d22-76eb-1a70-000000000016 | > TASK | Nodes that failed introspection > > 2021-04-22 07:11:43.296417 | 002590fe-0d22-76eb-1a70-000000000016 | > FATAL | Nodes that failed introspection | localhost | error={ > > "msg": " 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf" > > } > > 2021-04-22 07:11:43.297359 | 002590fe-0d22-76eb-1a70-000000000016 | > TIMING | Nodes that failed introspection | localhost | 0:07:14.362025 | > 0.03s > > > > NO MORE HOSTS LEFT > ************************************************************* > > > > PLAY RECAP > ********************************************************************* > > localhost : ok=4 changed=1 unreachable=0 > failed=1 skipped=5 rescued=0 ignored=0 > > 2021-04-22 07:11:43.301553 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary > Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > 2021-04-22 07:11:43.302101 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total > Tasks: 10 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > 2021-04-22 07:11:43.302609 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Elapsed Time: > 0:07:14.367265 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > 2021-04-22 07:11:43.303162 | UUID | > Info | Host | Task Name | Run Time > > 2021-04-22 07:11:43.303740 | 002590fe-0d22-76eb-1a70-000000000014 | > SUMMARY | localhost | Start baremetal introspection | 434.03s > > 2021-04-22 07:11:43.304248 | 002590fe-0d22-76eb-1a70-000000000015 | > SUMMARY | localhost | Nodes that passed introspection | 0.04s > > 2021-04-22 07:11:43.304814 | 002590fe-0d22-76eb-1a70-000000000016 | > SUMMARY | localhost | Nodes that failed introspection | 0.03s > > 2021-04-22 07:11:43.305341 | 002590fe-0d22-76eb-1a70-000000000008 | > SUMMARY | localhost | Check for required inputs | 0.03s > > 2021-04-22 07:11:43.305854 | 002590fe-0d22-76eb-1a70-00000000000a | > SUMMARY | localhost | Set node_uuids_intro fact | 0.02s > > 2021-04-22 07:11:43.306397 | 002590fe-0d22-76eb-1a70-000000000010 | > SUMMARY | localhost | Check if validation enabled | 0.02s > > 2021-04-22 07:11:43.306904 | 002590fe-0d22-76eb-1a70-000000000012 | > SUMMARY | localhost | Fail if validations are disabled | 0.02s > > 2021-04-22 07:11:43.307379 | 002590fe-0d22-76eb-1a70-00000000000e | > SUMMARY | localhost | Set concurrency fact | 0.02s > > 2021-04-22 07:11:43.307913 | 002590fe-0d22-76eb-1a70-00000000000c | > SUMMARY | localhost | Notice | 0.02s > > 2021-04-22 07:11:43.308417 | 002590fe-0d22-76eb-1a70-000000000011 | > SUMMARY | localhost | Run Validations | 0.02s > > 2021-04-22 07:11:43.308926 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ End > Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > 2021-04-22 07:11:43.309423 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ State > Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > 2021-04-22 07:11:43.310021 | ~~~~~~~~~~~~~~~~~~ Number of nodes which > did not deploy successfully: 1 ~~~~~~~~~~~~~~~~~ > > 2021-04-22 07:11:43.310545 | The following node(s) had failures: > localhost > > 2021-04-22 07:11:43.311080 | > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > Ansible execution failed. 
playbook: > /usr/share/ansible/tripleo-playbooks/cli-baremetal-introspect.yaml, Run > Status: failed, Return Code: 2 > > Exception occured while running the command > > Traceback (most recent call last): > > File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line > 34, in run > > super(Command, self).run(parsed_args) > > File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", > line 41, in run > > return super(Command, self).run(parsed_args) > > File "/usr/lib/python3.6/site-packages/cliff/command.py", line 187, in > run > > return_code = self.take_action(parsed_args) or 0 > > File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_node.py", line > 462, in take_action > > retry_timeout=parsed_args.retry_timeout, > > File > "/usr/lib/python3.6/site-packages/tripleoclient/workflows/baremetal.py", > line 193, in introspect > > "retry_timeout": retry_timeout, > > File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line > 728, in run_ansible_playbook > > raise RuntimeError(err_msg) > > RuntimeError: Ansible execution failed. playbook: > /usr/share/ansible/tripleo-playbooks/cli-baremetal-introspect.yaml, Run > Status: failed, Return Code: 2 > > Ansible execution failed. playbook: > /usr/share/ansible/tripleo-playbooks/cli-baremetal-introspect.yaml, Run > Status: failed, Return Code: 2 > > > > > > (undercloud) [stack at undercloud ~]$ openstack baremetal introspection > list > > /usr/lib/python3.6/site-packages/requests/__init__.py:91: > RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't > match a supported version! > > RequestsDependencyWarning) > > > +--------------------------------------+---------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > | UUID | Started at | Finished > at | Error > > > > | > > > +--------------------------------------+---------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > | 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf | 2021-04-22T01:41:32 | > 2021-04-22T01:41:35 | Failed to set boot device to PXE: Failed to set boot > device for node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf: Client Error for url: > http://10.0.1.202:6385/v1/nodes/4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf/management/boot_device, > Node 4bdd55bd-c4d9-4af0-b619-2e0b7b0107cf is locked by host > undercloud.localdomain, please retry after the current operation is > completed. | > > | 3d091348-e9c7-4e99-80e3-df72d332d935 | 2021-04-21T12:36:30 | > 2021-04-21T12:36:32 | Failed to set boot device to PXE: Failed to set boot > device for node 3d091348-e9c7-4e99-80e3-df72d332d935: Client Error for url: > http://10.0.1.202:6385/v1/nodes/3d091348-e9c7-4e99-80e3-df72d332d935/management/boot_device, > Node 3d091348-e9c7-4e99-80e3-df72d332d935 is locked by host > undercloud.localdomain, please retry after the current operation is > completed. 
| > > > +--------------------------------------+---------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From destienne.maxime at gmail.com Mon Apr 26 15:13:19 2021 From: destienne.maxime at gmail.com (Maxime d'Estienne) Date: Mon, 26 Apr 2021 17:13:19 +0200 Subject: [zun][horizon] How to display containers among VMs on network topology ? Message-ID: Hello, I installed Zun on my stack and it works very well, I have a few containers up and running. I also added the zun-ui plugin for Horizon wich allows me to see the list of containers and to create them. But I often use the network topology view, wich I found very clean sometimes. I wondered if there is a solution to display the containers as are Vm's ? Thank you ! Maxime -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Mon Apr 26 15:01:14 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Mon, 26 Apr 2021 15:01:14 +0000 Subject: [Interop]: Interop Working Group Meeting Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded message was scrubbed... From: "Ramchandran, Prakash" Subject: Interop Working Group Meeting Date: Fri, 14 Aug 2020 18:16:01 +0000 Size: 52622 URL: From eyalb1 at gmail.com Mon Apr 26 15:35:05 2021 From: eyalb1 at gmail.com (Eyal B) Date: Mon, 26 Apr 2021 18:35:05 +0300 Subject: [Vitrage] Multiregion In-Reply-To: <82838909-EE0D-49FC-99CA-F143F2ED9944@poczta.onet.pl> References: <4D95094A-DA7C-4FE7-A207-EAE1593F342A@poczta.onet.pl> <82838909-EE0D-49FC-99CA-F143F2ED9944@poczta.onet.pl> Message-ID: Hi, Currently Vitrage supports only one region to get the data from. The region is configured in vitrage.conf under section [service_credentials] and is used by its clients to get the data Eyal On Mon, Apr 26, 2021 at 5:50 PM Adam Tomas wrote: > > Hi, > after deploying Vitrage in multi region environment Horizon always uses > first public Vitrage endpoint (I have one public endpoint in each region) > regardless if I’m logged in first or second region. So in both regions I > see exactly the same entity map, etc. (from the first region). When I > disable this endpoint, Horizon uses next one - and again I see the same > things in both regions but this time from the second region. Horizon should > check in which region I’m logged in and display Vitrage data for that > region - right? So what’s wrong? > > Best regards > Adam Tomas > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kira034 at 163.com Mon Apr 26 15:47:59 2021 From: kira034 at 163.com (Hongbin Lu) Date: Mon, 26 Apr 2021 23:47:59 +0800 (CST) Subject: [zun][horizon] How to display containers among VMs on network topology ? In-Reply-To: References: Message-ID: <5df04329.6e03.1790eddac72.Coremail.kira034@163.com> From Zun's perspective, I would like to know if it is possible for a Horizon plugin (like Zun-UI) to add resources (like containers) to the network topology. If it is possible, I will consider to add support for that. 
The closest solution is Heat. The Heat dashboard in Horizon can display containers in the topology defined by a Heat template (assuming you create your Zun containers through Heat).

Best regards,
Hongbin

At 2021-04-26 23:13:19, "Maxime d'Estienne" wrote:

Hello,

I installed Zun on my stack and it works very well, I have a few containers up and running. I also added the zun-ui plugin for Horizon which allows me to see the list of containers and to create them.

But I often use the network topology view, which I found very clean. I wondered if there is a solution to display the containers as the VMs are displayed?

Thank you !

Maxime
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mkopec at redhat.com Mon Apr 26 15:48:08 2021
From: mkopec at redhat.com (Martin Kopec)
Date: Mon, 26 Apr 2021 17:48:08 +0200
Subject: [neutron][interop][refstack] New tests and capabilities to track in interop
Message-ID:

Hi everyone,

I would like to further discuss the topics we covered with the neutron team during the PTG [1].

* adding address_group API capability
It's tested by tests in neutron-tempest-plugin. The first question is whether tests which are not directly in tempest can be part of a non-add-on marketing program. It's possible to move them to tempest; until we do so, could they be marked as advisory?

* Shall we include QoS tempest tests since we don't know what share of vendors enable QoS? Could it be an add-on?
These tests are also in neutron-tempest-plugin; I assume we're talking about the neutron_tempest_plugin.api.test_qos tests. If we want to include these tests, which program should they belong to? Do we want to create a new one?

[1] https://etherpad.opendev.org/p/neutron-xena-ptg

Thanks,
--
Martin Kopec
Senior Software Quality Engineer
Red Hat EMEA
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marios at redhat.com Mon Apr 26 16:37:32 2021
From: marios at redhat.com (Marios Andreou)
Date: Mon, 26 Apr 2021 19:37:32 +0300
Subject: [TripleO] Xena PTG session summaries
Message-ID:

Hello folks,

I sent out some stats and links on our PTG meetup with http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021999.html already but, as a couple of different people asked me about it, I took the time to write a summary for each session today. Of course you can find all etherpad links and recordings via https://etherpad.opendev.org/p/tripleo-ptg-xena (which seems to be down right now, but I have backups; if it isn't resolved by tomorrow I can try sharing that content somewhere else).

Below is a (very) concise summary of the main points in each session and I hope that is useful to someone (especially since it took far longer than I expected ;)). Please reply here (or ping privately if you prefer) for any glaring omissions or obvious issues that should be revised (I originally intended to put this in an etherpad for easier collaboration but, as I wrote above, etherpad.opendev.org seems down right now at least for me).

regards, marios

MON:

* https://etherpad.opendev.org/p/tripleo-ptg-retrospective

Retrospective of the Wallaby cycle - there are some community and team level 'headlines' on the main items worked on during this cycle on the etherpad. Some identified ideas for improvement include targeting another older branch for end-of-life (likely Queens), improving upstream documentation (especially removal of stale content), and creating a tag in Launchpad for teams so we can more easily identify which squad is currently assigned.
* Topic: Plan/Swift removal update Presentation link: https://drive.google.com/file/d/1igOW4XuAbU55Tat73DwLqO4UGZu8MiNi/view?usp=sharing

An update of the work completed in the Wallaby cycle to remove the Swift service and the deployment plan (which is no longer used as part of our deployments) from the undercloud. From wallaby onward by default there is no undercloud Swift. There may be a revision of the spec https://opendev.org/openstack/tripleo-specs/commit/e83d8aba3a950da83a33c23bcef6ffc38f00002f as the original plan didn't explicitly consider removal of the deployment plan.

* https://etherpad.opendev.org/p/tripleo-ephemeral-heat

Update on the ephemeral heat work (i.e. no permanent heat process on the undercloud). There has been very strong progress made in this cycle and there are still some outstanding patches https://review.opendev.org/q/topic:%22ephemeral-heat%22+(status:open) to be merged. Goal is to make this the default in Xena deployments and backport to Wallaby as optional. Besides the main feature, some related planned work includes consolidation of the python-tripleoclient "overcloud deploy" and "tripleo deploy" (eg standalone) commands. Note that this work depends on the tripleo-network-v2 work (next session below).

* https://etherpad.opendev.org/p/tripleo-network-v2

Update on the network ports v2 work (moving network port creation out of the heat stack) https://opendev.org/openstack/tripleo-specs/src/branch/master/specs/wallaby/triplo-network-data-v2-node-ports.rst - again good progress on this during Wallaby but there is still some ongoing work there https://review.opendev.org/q/topic:%2522network-data-v2%2522+(status:open) . The goal for Xena is to make this the default (i.e. no node/networking config in deploy-steps-playbook.yaml). One main area of work for X in this topic is integration of the baremetal network config in the overcloud deployment (i.e. allow a single command).

* https://etherpad.opendev.org/p/tripleo-ceph-xena

Update from the ceph team about the main work items completed in Wallaby including the tripleo-ceph-client and tripleo-ceph in place of ceph-ansible for RBD ( https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/tripleo-ceph-client.html and https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/tripleo-ceph.html ). The main work planned for Xena is to continue trying to achieve feature parity with ceph-ansible - including resolving cephadm blockers, Ganesha & https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/tripleo-ceph-ganesha.html . One major consideration is how to move ceph creation/config outside of the heat stack - some parts such as pools, keyrings and haproxy config will have to remain as part of the tripleo deployment. Note that this work depends on the network ports v2 (previous session above).

TUE:

* https://etherpad.opendev.org/p/tripleo-xena-whole-disk-images

A proposal to move to whole disk images instead of the current overcloud-full.qcow+overcloud-full.initrd+overcloud-full.vmlinuz. There were many compelling arguments made for the proposal including: with the overcloud-full.qcow2 partition image, as of centos 8.4 grub2 no longer supports UEFI boot, there will be much less for ironic-python-agent to do during deployment with a single disk image, there will be just one file to distribute (vs 3), we will no longer need to define and build a separate 'hardened' image (and also remove the related CI jobs).
One of the main technical issues that needs to be addressed first is the grow partition for /var which is where we are storing containers and config for deployment. * https://etherpad.opendev.org/p/tripleo-xena-drop-healthchecks Proposal to drop the container health check since are using deployment resources but aren't providing value. There was no push back against this proposal and the details are being discussed in the newly posted spec @ https://review.opendev.org/c/openstack/tripleo-specs/+/787535. * https://etherpad.opendev.org/p/ci-tripleo-repos Proposal to consolidate the various ways and places that tripleo-ci is using to configure the repos in the CI jobs. There is a spec proposed @ https://review.opendev.org/c/openstack/tripleo-specs/+/772442 - some of the work here is split into sub items which are ongoing (tripleo-get-hash there https://review.opendev.org/c/openstack/tripleo-ci/+/784392). The main outstanding blocking item here is to agree on the common data format for the various personas upstream downstream and product that we need to support eg https://github.com/mwhahaha/rhos-bootstrap/blob/main/versions/centos.yaml vs https://review.opendev.org/c/openstack/tripleo-repos/+/785593/1/tripleo_repos/conf/master.yaml * openstack tempest skiplist https://docs.google.com/presentation/d/1aCiV35IYNhPV7SRmi4_A9vkIjZ89pwfC4VvL6frFJNE/edit?usp=sharing Update on the tempest skiplist effort during Wallaby to consolidate the skipped Tempest tests in a central location with the ability to specify particular jobs and or branches for which specific skips will apply. * https://etherpad.opendev.org/p/tripleo-next One of the main items discussed here was the 'first principles' proposal at https://review.opendev.org/c/openstack/tripleo-specs/+/786980 - these are meant to guide us when discussing changes to our deployment tooling and architecture. The proposal will merge in Xena specs once we've reached consensus on the review. Another topic discussed in this session was an update on exploratory work to replace "heat & ansible" in our deployment tooling with 'something else' - some ongoing work here is at https://github.com/cloudnull/director & https://github.com/mwhahaha/task-core. More info and pointers (also discussed Kube/OCP with an operator to deploy tripleo) on the etherpad. WED: * https://etherpad.opendev.org/p/tripleo-xena-inventory-script This was a proposal to remove the "tripleo-ansible-inventory script" @ https://github.com/openstack/tripleo-common/blob/ccd990b58b6583dda3a0e0f34135ae343c833f70/tripleo_common/inventory.py#L744 and instead generate it from the deployment data (e.g. metalsmith or user data from deployed-server deployments). The consensus reached was that instead of removing it we should instead use it in a better way, for example make sure static inventories are generated and exported to known locations (especially for the ephemeral heat case) and re-used. * https://etherpad.opendev.org/p/vf-ui-output This was an update from the validations squad about the main items worked on during Wallaby (integrated the validation framework into the component CI pipelines, enabled the standalone job in upstream check/gate and increased adoption especially by the upgrades squad). Followed by discussions for planned Xena work, including changes in the UI/CLI (eg jq queries can be handled better and various other UI improvements more on the etherpad). 
Some of the other topics raised here were to make the validations themselves component aware (run all validations related to a given component) and discussion around the requirement for a molecule test on all validation additions (especially the example of mocking out OpenStack services like keystone); the compromise could be to instead use a standalone job for such cases. * https://etherpad.opendev.org/p/Validation-Framework-Next-Generation In this session the validations squad introduced ideas for the future direction of the validation framework. Some of the main proposals are to remove the validations repos - validations-common and validations-libs out of tripleo governance but still within openstack and establishing a new validations project (discussion but no clear consensus on this point), to re-merge the two repos into one consolidated validations repo and fixup the CLI (see previous session) - more items and other considerations on the etherpad. * https://etherpad.opendev.org/p/tripleo-frr-integration Update on Wallaby progress from the cross-squad team looking at FRR/BGP integration in the tripleo deployment (https://opendev.org/openstack/tripleo-specs/src/branch/master/specs/wallaby/triplo-bgp-frrouter.rst). Some of the main items discussed for Xena work included how we might approximate some part of this feature in upstream CI (high resource requirements - downstream CI has 9 nodes) and backport considerations (no backport to upstream/train). * https://etherpad.opendev.org/p/update-upgrade-consolidation In this session the upgrades squad outlined their proposal for consolidation of the minor update and major upgrade workflows - without any blockers or objections coming out of the discussion. One of the main considerations was around how we can decouple the operating system updates/upgrades from the tripleo container upgrade - one action item is to de-containerize those containers that are tied to the kernel version (ABI) such as libvirt and openvswitch. THU: * https://etherpad.opendev.org/p/policy-popup-xena-ptg In this session the security squad gave an update on progress during Wallaby on the Role Based Access Control (RBAC) - many services have completed implementation (Keystone, Nova, Ironic - more on the etherpad). Then there was a discussion around potential integration points during the tripleo deployment, for example https://review.opendev.org/c/openstack/tripleo-heat-templates/+/781571/7/environments/enable-secure-rbac.yaml . One of the considerations was around how we can test this in CI (possibly the standalone job is a good fit) as well as the use of multiple clouds.yaml for project specific operations during the deployment (with the root clouds yaml having the system-admin profile). * https://etherpad.opendev.org/p/centos-stream-9-upstream In this session the CI squad lead a discussion around centos9 stream (possibly coming Apr/May) and what we should consider/prepare for with respect to upstream CI. Some of the main changes and discussion items included NetworkManager and firewalld replacing iptables, ansible version (2.11/2.12?/?). Mainly this effort is blocked on the actual 9-stream release and getting the relevant nodepool node. Another main discussion point here was whether we would support both stream-8 and stream-9 on particular branches - consensus here is that wallaby has both 8/9 and for X can have only 9 - but this is all dependent on when 9 becomes available with respect to when Xena is released. 
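As a side note on the multiple clouds.yaml point above, below is a minimal sketch of how two differently-scoped entries could be consumed from Python via openstacksdk. The cloud names 'uc-system-admin' and 'uc-project' are made up for illustration and were not discussed in the session; the actual profile names and scoping used by the deployment tooling may differ.

    import openstack

    # System-scoped connection - the corresponding clouds.yaml entry would
    # carry the system-admin credentials mentioned above, suitable for
    # admin-level operations such as listing keystone services.
    system = openstack.connect(cloud='uc-system-admin')
    for service in system.identity.services():
        print(service.name)

    # Project-scoped connection for the per-project resources created
    # during the deployment.
    project = openstack.connect(cloud='uc-project')
    for network in project.network.networks():
        print(network.name)
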
* https://etherpad.opendev.org/p/os-migrate This session was an update from the upgrades squad around the os-migrate tool ( https://github.com/os-migrate/os-migrate ) - which aims to 'copy' your openstack deployment and in particular the end-user workloads (i.e. user data, vms etc, but not the controlplane) onto new hardware, as an alternative to the in-place upgrade. More information and slides @ https://docs.google.com/presentation/d/1UYGOI89MBLHLpS89mPp0VK1yvTYtb2BamUL_DmfGLGA/edit?usp=sharing From marios at redhat.com Mon Apr 26 16:40:17 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 26 Apr 2021 19:40:17 +0300 Subject: [TripleO] next irc meeting Tuesday Apr 27 @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 27 April at 1400 UTC in #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to highlight at https://etherpad.opendev.org/p/tripleo-meeting-items This can be recently completed things, ongoing review requests, blocking issues, or anything else tripleo you want to share. Our last meeting was on Mar 13 - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-04-13-14.00.html Hope you can make it on Tuesday, regards, marios From marios at redhat.com Mon Apr 26 16:44:44 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 26 Apr 2021 19:44:44 +0300 Subject: [TripleO] Xena PTG session summaries In-Reply-To: References: Message-ID: On Mon, Apr 26, 2021 at 7:37 PM Marios Andreou wrote: > > Hello folks, > > I sent out some stats and links on our PTG meetup with > http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021999.html > already, but, as a couple of different people asked me about it, I > took the time to write a summary for each session today. Of course you > can find all etherpad links and recordings via > https://etherpad.opendev.org/p/tripleo-ptg-xena (which seems to be > down right now but I have backups if it isn't resolved by tomorrow I > can try sharing that content somewhere else). > > Below is a (very) concise summary of the main points in each session > and I hope that is useful to someone (especially since it took far > longer than I expected ;)). Please reply here (or ping privately if > you prefer) for any glaring omissions or obvious issues that should be > revised (I originally intended to put this in an etherpad for easier > collaboration but as I wrote above, etherpad.opendev.org seems down > right now at least for me). etherpad.opendev.org back now, so I pasted the summaries into https://etherpad.opendev.org/p/tripleo-ptg-xena-summaries and linked it via our agenda etherpad, so, please help me to capture the "glaring omissions or obvious issues" I have missed in the summary of your or others' sessions? regards, marios > > regards, marios > > MON: > > * https://etherpad.opendev.org/p/tripleo-ptg-retrospective > > Retrospective of the Wallaby cycle - there are some community and team > level 'headlines' on the main items worked on during this cycle on the > etherpad. > Some identified ideas for improvement include targeting another older > branch for end-of-life likely Queens, improving upstream documentation > especially removal of stale content, and creating a tag in Launchpad > for teams so we can more easily identify which squad is currently > assigned. 
> > * https://etherpad.opendev.org/p/centos-stream-9-upstream > > In this session the CI squad lead a discussion around centos9 stream > (possibly coming Apr/May) and what we should consider/prepare for with > respect to upstream CI. Some of the main changes and discussion items > included NetworkManager and firewalld replacing iptables, ansible > version (2.11/2.12?/?). Mainly this effort is blocked on the actual > 9-stream release and getting the relevant nodepool node. Another main > discussion point here was whether we would support both stream-8 and > stream-9 on particular branches - consensus here is that wallaby has > both 8/9 and for X can have only 9 - but this is all dependent on when > 9 becomes available with respect to when Xena is released. > > * https://etherpad.opendev.org/p/os-migrate > > This session was an update from the upgrades squad around the > os-migrate tool ( https://github.com/os-migrate/os-migrate ) - which > aims to 'copy' your openstack deployment and in particular the > end-user workloads (i.e. user data, vms etc, but not the controlplane) > onto new hardware, as an alternative to the in-place upgrade. More > information and slides @ > https://docs.google.com/presentation/d/1UYGOI89MBLHLpS89mPp0VK1yvTYtb2BamUL_DmfGLGA/edit?usp=sharing From C-Albert.Braden at charter.com Mon Apr 26 17:15:04 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Mon, 26 Apr 2021 17:15:04 +0000 Subject: [kolla] VM build fails after Train-Ussuri upgrade In-Reply-To: References: <13f2c3e3131a407c87d403d6cad3cd53@ncwmexgp009.CORP.CHARTERCOM.com> <98d0effb5f07449b8fb2098a4ca5b218@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: Hi Ronal, I'm not sure I understand your question. I used ceph-ansible to setup ceph when I built the cluster, and it created the keys. I didn't change anything key-related during the upgrade. Do I need to? From: Ronal Mauricio Faraj Rodriguez Sent: Monday, April 26, 2021 10:02 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] RE: [kolla] VM build fails after Train-Ussuri upgrade CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi, Did you check your keys files in nova, kvm and cinder generated by ceph to auth? Example to generate key file and then copy to compute: ceph auth get-or-create compute mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow allow rwx pool=compute, allow allow rwx pool=volumes, allow rx pool=images' -o /ceph.client.compute.keyring Hope this help you. Regards. De: Braden, Albert > Enviado el: lunes, 26 de abril de 2021 15:46 Para: openstack-discuss at lists.openstack.org Asunto: RE: [kolla] VM build fails after Train-Ussuri upgrade everis Security Awareness - This is an incoming mail from an EXTERNAL DOMAIN. Please verify sender before you open attachments or access links. Can anyone help with this upgrade issue? 
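For the rados.PermissionDeniedError described in the quoted message below, one quick sanity check is to try connecting with the same cephx identity the services use; a minimal sketch with the python-rados bindings follows. The client name, keyring path and pool names are assumptions - adjust them to the deployment.

import rados

cluster = rados.Rados(
    conffile="/etc/ceph/ceph.conf",
    rados_id="cinder",  # the cephx user nova/cinder are configured with (assumption)
    conf={"keyring": "/etc/ceph/ceph.client.cinder.keyring"},
)

# connect() raises rados.PermissionDeniedError if the key or its caps are wrong.
cluster.connect()
print("connected, fsid:", cluster.get_fsid())
print("pools visible:", cluster.list_pools())  # expect e.g. vms/volumes/images
cluster.shutdown()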
From: Braden, Albert Sent: Monday, April 19, 2021 8:20 AM To: openstack-discuss at lists.openstack.org Subject: [kolla] VM build fails after Train-Ussuri upgrade I upgraded my Train test cluster to Ussuri following these instructions: OpenStack Docs: Operating Kolla The upgrade completed successfully with no failures, and the existing VMs are fine, but new VM build fails with rados.Rados.connect\nrados.PermissionDeniedError: Ubuntu Pastebin I'm running external ceph so I looked at this document: OpenStack Docs: External Ceph It says that I need the following in /etc/kolla/config/glance/ceph.conf: auth_cluster_required = cephx auth_service_required = cephx auth_client_required = cephx I didn't have that, so I added it and then redeployed, but still can't build VMs. I tried adding the same to all copies of ceph.conf and redeployed again, but that didn't help. Does anything else need to change in my ceph config when upgrading from Train to Ussuri? I see some cryptic talk about ceph in the release notes but it's not obvious what I'm being asked to change: OpenStack Docs: Ussuri Series Release Notes I read the bug that it refers to: Bug #1904062 "external ceph cinder volume config breaks volumes ..." : Bugs : kolla-ansible (launchpad.net) But I already have "backend_host=rbd:volumes" so I don't think I'm hitting that. Also I read these sections but I don't see anything obvious here that needs to be changed: * For cinder (cinder-volume and cinder-backup), glance-api and manila keyrings behavior has changed and Kolla Ansible deployment will not copy those keys using wildcards (ceph.*), instead will use newly introduced variables. Your environment may render unusable after an upgrade if your keys in /etc/kolla/config do not match default values for introduced variables. * The default behavior for generating the cinder.conf template has changed. An rbd-1 section will be generated when external Ceph functionality is used, i.e. cinder_backend_ceph is set to true. Previously it was only included when Kolla Ansible internal Ceph deployment mechanism was used. * The rbd section of nova.conf for nova-compute is now generated when nova_backend is set to "rbd". Previously it was only generated when both enable_ceph was "yes" and nova_backend was set to "rbd". My ceph keys have the default name and are in the default locations. I have cinder_backend_ceph: "yes". I don't have a nova_backend setting but I have nova_backend_ceph: "yes" I added nova_backend: "rbd" and redeployed and now I get a different error: rados.Rados.connect\nrados.ObjectNotFound Ubuntu Pastebin I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. 
If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Mon Apr 26 18:57:31 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Mon, 26 Apr 2021 20:57:31 +0200 Subject: [telemetry][ptg] Summary of the discussed topics Message-ID: <3ebebdd3-81b4-1fe5-ba89-ed775de10cbf@matthias-runge.de> Hi there, last week, we had two days of PTG sessions for telemetry. There were a couple of topics discussed, and here's the summary. - the CI situation for telemetry is not very satisfying. There are a couple of repositories interacting with each other. telemetry gates have been blocked for over a month on the external dependency gnocchi. - We talked about the use (or not using) storyboard for the case of tracking bugs etc. for telemetry. It is/was not very user friendly to report bugs. - for the future, we'll go back and use launchpad for tracking bugs etc. - We had a collab session with the folks from venus, talked about what they are doing, and if there is a room for collaboration. - we also decided to drop panko from OpenStack. A separate mail will follow. Best, Matthias From anlin.kong at gmail.com Mon Apr 26 22:30:18 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Tue, 27 Apr 2021 10:30:18 +1200 Subject: Scheduling backups in Trove In-Reply-To: <2466322c572e931fd52e767684ee81e2@citynetwork.eu> References: <2466322c572e931fd52e767684ee81e2@citynetwork.eu> Message-ID: Hi Bekir, You can definitely create Mistral workflow to periodically trigger Trove backup if Mistral supports Trove action and you have already deployed Mistral in your cloud. Otherwise, another option is to implement schedule backups in Trove itself (by leveraging container running inside trove guest instance). --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Sat, Apr 24, 2021 at 3:58 AM Bekir Fajkovic < bekir.fajkovic at citynetwork.eu> wrote: > Hello! > > A question regarding the best practices when it comes to scheduling > backups: > > Is there any built-in mechanism implemented today in the service or do the > customer or cloud service provider have to schedule the > backup themselves? I see some proposals about implementing backup > schedules through Mistral workflows: > > > https://specs.openstack.org/openstack/trove-specs/specs/newton/scheduled-backup.html > > But i am not sure about the status of that. > > Best Regards > > *Bekir Fajkovic* > Senior DBA > Mobile: +46 70 019 48 47 > > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > > ----- Original Message ----- > >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From melwittt at gmail.com Mon Apr 26 23:42:14 2021 From: melwittt at gmail.com (melanie witt) Date: Mon, 26 Apr 2021 16:42:14 -0700 Subject: [sdk]: compute service create_server method, how to create multiple servers In-Reply-To: <20bd2d0dd9ed5013919e036df2576cca@uvic.ca> References: <20bd2d0dd9ed5013919e036df2576cca@uvic.ca> Message-ID: On 4/16/21 15:56, dmeng wrote: > Hello there, > Hope this email finds you well. > We are currently using the openstacksdk for developing our product, and > have a question about the openstacksdk compute service create_server() > method. We are wondering if the "max_count" and "min_count" are > supported by openstackskd for creating multiple servers at once. I tried > both the max_count and the min_count, and they both only create one > server for me, but I'd like to create multiple servers at once. The code > I'm using is like the following: > conn = connection.Connection( > session=sess, > region_name=None, > compute_api_version='2') > > nova = conn.compute > > nova.create_server( > name='sdk-test-create', > image_id=image_id, > flavor_id=flavor_id, > key_name=my_key_name, > networks=[{"uuid": network_id}], > security_groups=[{'name':security_group_name}], > min_count=3, > ) > The above code will create one server "sdk-test-create", but I'm > assuming it should create three. Wondering did I miss anything, or if we > have any other option to archive this? Hi, I was able to reproduce the behavior you describe on a local devstack and have proposed the following patch to fix it: https://review.opendev.org/c/openstack/openstacksdk/+/788098 I was able to verify ^ works by doing the following hackaround for testing: from openstack import connection from openstack import resource class MyServer(_server.Server): min_count = resource.Body('min_count') max_count = resource.Body('max_count') conn = connection.Connection(region_name='RegionOne', auth=dict(auth_url='http://127.0.0.1/identity', username='demo', password='a', project_id='7c60976c662a414cb2661831ff41ee30', user_domain_id='default'), compute_api_version='2', identity_interface='internal') conn.compute._create(MyServer, name='mult', image_id='23d8bca0-5dfa-4dd0-9267-368ce1e1e8a0', flavor_id=42, min_count=2, networks='none') HTH, -melanie From zhangbailin at inspur.com Tue Apr 27 00:58:11 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Tue, 27 Apr 2021 00:58:11 +0000 Subject: =?gb2312?B?tPC4tDogW2N5Ym9yZ11bcHRnXUN5Ym9yZyB2UFRHIFN1bW1hcnkgQXByaWwg?= =?gb2312?Q?2021?= In-Reply-To: References: Message-ID: Add another decision. * We talked about the using Launchpad for tracking bugs instead of storyboard in xena release. If you have some issues with Cybrog, please reported on Launchpad https://bugs.launchpad.net/openstack-cyborg Thanks. brinzhang Inspur Electronic Information Industry Co.,Ltd. 发件人: Wang, Xin-ran [mailto:xin-ran.wang at intel.com] 发送时间: 2021年4月25日 17:56 收件人: OpenStack Discuss 主题: [cyborg][ptg]Cyborg vPTG Summary April 2021 Hi all, Please ignore my previous message, it is sent to wrong person. Sorry about this. Thanks for all your participation! We’ve conducted a successful meeting last week. Here is the aggregated summary from Cyborg vPTG discussion. Please check it out and feel free to feedback any concerns you might have. We did a retrospective of Wallaby release, including: * We supports more operations supported for a VM with accelerator attached. * We introduced new drivers for Intel x710 NIC and Inspur’s NVMe SSD Card. 
* We implemented a new configuration file allowing more flexible device configuration. Topic discussion: Here's some major discussion and conclusion of Cyborg vPTG. For more details, please refer to the etherpad[1]. * More nova operation supporting: - We prioritized the tasks: 1. suspend/resume. 2. cold migration. 3. live migration. * vGPU support: - We reached an internal agreement on whole workflow which can be apply as a generic framework for mdev device. * API enhancement: Some of the following items requires a new micro-version. - Add refresh policy check for all APIs. - Add device profile update API. - Add ARQ query by multiple instances API. - Add disable/enable device API, this one requires a spec first. - Enhance device profile show API with more information. * Cleanup issue: - This issue comes from the case where one compute node shutdown accidently, and the accelerator records in placement and cyborg DB remains as the orphaned resources. We agreed to implement a mechanism to clean up the orphaned resources, this one also need a spec. [1] https://etherpad.opendev.org/p/cyborg-xena-ptg Thanks, Xin-Ran Wang -------------- next part -------------- An HTML attachment was scrubbed... URL: From yingjisun at vmware.com Tue Apr 27 01:19:45 2021 From: yingjisun at vmware.com (Yingji Sun) Date: Tue, 27 Apr 2021 01:19:45 +0000 Subject: An compute service hang issue In-Reply-To: <4e5d3beb2f7921b3a494ca853621da5d59cda1f5.camel@redhat.com> References: <4e5d3beb2f7921b3a494ca853621da5d59cda1f5.camel@redhat.com> Message-ID: Sean, You are right. I am working with vmware driver. Is it possible that you share some code fix samples so that I can have a try in my environment ? Below is my investigation. Would you please give any suggestion ? With my instance, vm_state is building and task_state is NULL. I have some suspect here ###################################################################### try: scheduler_hints = self._get_scheduler_hints(filter_properties, request_spec) with self.rt.instance_claim(context, instance, node, allocs, limits): # See my comments in instance_claim below. ###################################################################### Here is my investigation in def _build_and_run_instance(self, context, instance, image, injected_files, admin_password, requested_networks, security_groups, block_device_mapping, node, limits, filter_properties, request_spec=None): self._notify_about_instance_usage(context, instance, 'create.start', extra_usage_info={'image_name': image_name}) compute_utils.notify_about_instance_create( context, instance, self.host, phase=fields.NotificationPhase.START, bdms=block_device_mapping) I see rabbitmq sent here. <#yingji> # NOTE(mikal): cache the keystone roles associated with the instance # at boot time for later reference instance.system_metadata.update( {'boot_roles': ','.join(context.roles)}) self._check_device_tagging(requested_networks, block_device_mapping) self._check_trusted_certs(instance) request_group_resource_providers_mapping = \ self._get_request_group_mapping(request_spec) if request_group_resource_providers_mapping: self._update_pci_request_spec_with_allocated_interface_name( context, instance, request_group_resource_providers_mapping) # TODO(Luyao) cut over to get_allocs_for_consumer allocs = self.reportclient.get_allocations_for_consumer( context, instance.uuid) I see "GET /allocations/" in placement-api.log, so, it looks to reach here. # My suspect code snippet. 
###################################################################### try: scheduler_hints = self._get_scheduler_hints(filter_properties, request_spec) with self.rt.instance_claim(context, instance, node, allocs, limits): # See my comments in instance_claim below. ###################################################################### ........ with self._build_resources(context, instance, requested_networks, security_groups, image_meta, block_device_mapping, request_group_resource_providers_mapping) as resources: instance.vm_state = vm_states.BUILDING instance.task_state = task_states.SPAWNING # NOTE(JoshNang) This also saves the changes to the # instance from _allocate_network_async, as they aren't # saved in that function to prevent races. instance.save(expected_task_state= task_states.BLOCK_DEVICE_MAPPING) block_device_info = resources['block_device_info'] network_info = resources['network_info'] LOG.debug('Start spawning the instance on the hypervisor.', instance=instance) with timeutils.StopWatch() as timer: The driver code starts here. However in my case, it looks not reach here. self.driver.spawn(context, instance, image_meta, injected_files, admin_password, allocs, network_info=network_info, block_device_info=block_device_info) @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE) def instance_claim(self, context, instance, nodename, allocations, limits=None): ...... if self.disabled(nodename): # instance_claim() was called before update_available_resource() # (which ensures that a compute node exists for nodename). We # shouldn't get here but in case we do, just set the instance's # host and nodename attribute (probably incorrect) and return a # NoopClaim. # TODO(jaypipes): Remove all the disabled junk from the resource # tracker. Servicegroup API-level active-checking belongs in the # nova-compute manager. self._set_instance_host_and_node(instance, nodename) return claims.NopClaim() # sanity checks: if instance.host: LOG.warning("Host field should not be set on the instance " "until resources have been claimed.", instance=instance) if instance.node: LOG.warning("Node field should not be set on the instance " "until resources have been claimed.", instance=instance) cn = self.compute_nodes[nodename] I did not see the rabbitmq messsge that should be sent here. pci_requests = objects.InstancePCIRequests.get_by_instance_uuid( context, instance.uuid) Yingji. > 在 4/26/21 下午10:46,“Sean Mooney” 写入: > > 7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=6LOZaIYlm8ZHXZ%2FZS44fyhkAgrbHV8MVjfKf6pkeTwc%3D&reserved=0 > > > > > > This issue is not always reproducible and restarting the compute service can work around this. > > > > Could you please give any suggestion on how to resolve this issue or how I can investigate ? > i assume this is with the vmware driver? > it kind of sounds like eventlet is not monkey patching properly and its blocking on a call or something like that. > > we have seen this in the pass when talking to libvirt where we were not properly proxying calls into the libvirt lib and as a result we would end up blocking in the compute agent when making some external calls to libvirt. > i wonder if you are seing something similar? > > > > Yingji. 
> > From pangliye at inspur.com Tue Apr 27 01:58:34 2021 From: pangliye at inspur.com (=?gb2312?B?TGl5ZSBQYW5nKOXMwaLStSk=?=) Date: Tue, 27 Apr 2021 01:58:34 +0000 Subject: [venus][ptg]Venus vPTG Summary April 2021 Message-ID: <87a323fff39c43b197d328f69a7f1638@inspur.com> Hi all, Thanks for all your participation! We've conducted the first meeting of venus last week and it is successful. Here's the summary of the topics we have discussed: *Progress in the past six months: - Develop devstack-based deployment for venus - Add log retrieval of modules such as vitrage - Develop the configuration, based on which you can retrieve the chain log of the call *Next step: - Develop alarm task code to set threshold for the number of error logs of different modules at different times, and provides alarm services and notification services - The configuration, analysis and alarm of Venus will be integrated into horizon in the form of plugin. - Develop a deployment method based on kolla-ansible - (discuss)Summarize the log specifications of some typical scenarios and develop them to venus - Evaluate whether to collect event data (considering usage, pressure, etc.We will continue to discuss with the telemetry project team) *Promblem: - Many projects's log records are not standardized, so they can only support full-text retrieval, not multi-dimensional analysis Full discussion (PTG etherpad): [1] https://etherpad.opendev.org/p/venus-xena-ptg Introduction of venus: [2] https://youtu.be/mE2MoEx3awM -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3786 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Tue Apr 27 06:41:08 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 27 Apr 2021 08:41:08 +0200 Subject: [venus][ptg]Venus vPTG Summary April 2021 In-Reply-To: <87a323fff39c43b197d328f69a7f1638@inspur.com> References: <87a323fff39c43b197d328f69a7f1638@inspur.com> Message-ID: On Tue, Apr 27, 2021 at 4:01 AM Liye Pang(逄立业) wrote: > - Develop a deployment method based on kolla-ansible Oh, nice! Feel welcome to join us on #openstack-kolla in case of questions. -yoctozepto From zhangbailin at inspur.com Tue Apr 27 06:55:15 2021 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Tue, 27 Apr 2021 06:55:15 +0000 Subject: =?utf-8?B?562U5aSNOiBbdmVudXNdW3B0Z11WZW51cyB2UFRHIFN1bW1hcnkgQXByaWwg?= =?utf-8?Q?2021?= In-Reply-To: References: <87a323fff39c43b197d328f69a7f1638@inspur.com> Message-ID: <6a7bfc04b4fb49889985c6b7331d7667@inspur.com> Hi, Radosław Piliszek. Cool, of course, we are happy to share and exchange, and we were talked in openstack-kolla channel before, but not about venus, hope we can talked more in future ^^ brinzhang Inspur Electronic Information Industry Co.,Ltd. -----邮件原件----- 发件人: Radosław Piliszek [mailto:radoslaw.piliszek at gmail.com] 发送时间: 2021年4月27日 14:41 收件人: Liye Pang(逄立业) 抄送: openstack-discuss at lists.openstack.org 主题: Re: [venus][ptg]Venus vPTG Summary April 2021 On Tue, Apr 27, 2021 at 4:01 AM Liye Pang(逄立业) wrote: > - Develop a deployment method based on kolla-ansible Oh, nice! Feel welcome to join us on #openstack-kolla in case of questions. 
-yoctozepto From bkslash at poczta.onet.pl Tue Apr 27 06:57:26 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Tue, 27 Apr 2021 08:57:26 +0200 Subject: [Vitrage] Multiregion In-Reply-To: References: <4D95094A-DA7C-4FE7-A207-EAE1593F342A@poczta.onet.pl> <82838909-EE0D-49FC-99CA-F143F2ED9944@poczta.onet.pl> Message-ID: <0EEA92BF-2C77-40B4-A015-EE40A1D60D76@poczta.onet.pl> Hi Eyal, thank you for your answer. So it’s pointless to install Vitrage in a region other than „parent” region :( Is there any way to prevent users (other than admin, let’s say with member role) from seeing „Vitrage” menu in Horizon? If I put role:admin in horizon’s vitrage_policy.json file (for all options) would it make the Vitrage menu disappear for users other than admin? Best regards, Adam > Wiadomość napisana przez Eyal B w dniu 26.04.2021, o godz. 17:35: > > Hi, > > Currently Vitrage supports only one region to get the data from. The region is configured in vitrage.conf under section [service_credentials] > and is used by its clients to get the data > > Eyal > > On Mon, Apr 26, 2021 at 5:50 PM Adam Tomas > wrote: > > Hi, > after deploying Vitrage in multi region environment Horizon always uses first public Vitrage endpoint (I have one public endpoint in each region) regardless if I’m logged in first or second region. So in both regions I see exactly the same entity map, etc. (from the first region). When I disable this endpoint, Horizon uses next one - and again I see the same things in both regions but this time from the second region. Horizon should check in which region I’m logged in and display Vitrage data for that region - right? So what’s wrong? > > Best regards > Adam Tomas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eyalb1 at gmail.com Tue Apr 27 07:26:58 2021 From: eyalb1 at gmail.com (Eyal B) Date: Tue, 27 Apr 2021 10:26:58 +0300 Subject: [Vitrage] Multiregion In-Reply-To: <0EEA92BF-2C77-40B4-A015-EE40A1D60D76@poczta.onet.pl> References: <4D95094A-DA7C-4FE7-A207-EAE1593F342A@poczta.onet.pl> <82838909-EE0D-49FC-99CA-F143F2ED9944@poczta.onet.pl> <0EEA92BF-2C77-40B4-A015-EE40A1D60D76@poczta.onet.pl> Message-ID: HI, Vitrage is supposed to support multi-tenancy so the tenant in horizon should see only its own resources. Regarding horizon vitrage menu, I don't know, maybe you can write some code to horizon that can disable the vitrage menu for certain users. You need to consult with the horizon guys for that if there is an option for that. Eyal On Tue, Apr 27, 2021 at 9:57 AM Adam Tomas wrote: > Hi Eyal, thank you for your answer. > So it’s pointless to install Vitrage in a region other than „parent” > region :( Is there any way to prevent users (other than admin, let’s say > with member role) from seeing „Vitrage” menu in Horizon? If I put > role:admin in horizon’s vitrage_policy.json file (for all options) would it > make the Vitrage menu disappear for users other than admin? > > Best regards, > Adam > > Wiadomość napisana przez Eyal B w dniu 26.04.2021, o > godz. 17:35: > > Hi, > > Currently Vitrage supports only one region to get the data from. The > region is configured in vitrage.conf under section [service_credentials] > and is used by its clients to get the data > > Eyal > > On Mon, Apr 26, 2021 at 5:50 PM Adam Tomas wrote: > >> >> Hi, >> after deploying Vitrage in multi region environment Horizon always uses >> first public Vitrage endpoint (I have one public endpoint in each region) >> regardless if I’m logged in first or second region. 
So in both regions I >> see exactly the same entity map, etc. (from the first region). When I >> disable this endpoint, Horizon uses next one - and again I see the same >> things in both regions but this time from the second region. Horizon should >> check in which region I’m logged in and display Vitrage data for that >> region - right? So what’s wrong? >> >> Best regards >> Adam Tomas >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Apr 27 07:33:30 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 27 Apr 2021 09:33:30 +0200 Subject: [venus][ptg]Venus vPTG Summary April 2021 In-Reply-To: <6a7bfc04b4fb49889985c6b7331d7667@inspur.com> References: <87a323fff39c43b197d328f69a7f1638@inspur.com> <6a7bfc04b4fb49889985c6b7331d7667@inspur.com> Message-ID: On Tue, Apr 27, 2021 at 8:56 AM Brin Zhang(张百林) wrote: > > Hi, Radosław Piliszek. > Cool, of course, we are happy to share and exchange, and we were talked in openstack-kolla channel before, but not about venus, hope we can talked more in future ^^ Sure, I remember! Looking forward to it. :-) -yoctozepto From radoslaw.piliszek at gmail.com Tue Apr 27 07:34:52 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 27 Apr 2021 09:34:52 +0200 Subject: Masakari meeting on 2021-05-04 *cancelled* Message-ID: Dears, We are cancelling the Masakari meeting on 2021-05-04 as there are holidays in China. We will meet again on 2021-05-11. -yoctozepto From smooney at redhat.com Tue Apr 27 08:10:56 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 27 Apr 2021 09:10:56 +0100 Subject: An compute service hang issue In-Reply-To: References: <4e5d3beb2f7921b3a494ca853621da5d59cda1f5.camel@redhat.com> Message-ID: <872c4008732daa005ec95856db078e9f973110bf.camel@redhat.com> On Tue, 2021-04-27 at 01:19 +0000, Yingji Sun wrote: > Sean, > > You are right. I am working with vmware driver. Is it possible that you share some code fix samples so that I can have a try in my environment ? in the libvirt case we had service wide hangs https://bugs.launchpad.net/nova/+bug/1840912 that were resovled by https://github.com/openstack/nova/commit/36ee9c1913a449defd3b35f5ee5fb4afcd44169e > > Below is my investigation. Would you please give any suggestion ? > > With my instance, vm_state is building and task_state is NULL. > > I have some suspect here > ###################################################################### >         try: >             scheduler_hints = self._get_scheduler_hints(filter_properties, >                                                         request_spec) >             with self.rt.instance_claim(context, instance, node, allocs, >                                         limits): >         # See my comments in instance_claim below. 
>         ###################################################################### > > Here is my investigation in > >     def _build_and_run_instance(self, context, instance, image, injected_files, >             admin_password, requested_networks, security_groups, >             block_device_mapping, node, limits, filter_properties, >             request_spec=None): > >        self._notify_about_instance_usage(context, instance, 'create.start', >                 extra_usage_info={'image_name': image_name}) >         compute_utils.notify_about_instance_create( >             context, instance, self.host, >             phase=fields.NotificationPhase.START, >             bdms=block_device_mapping) >          >           I see rabbitmq sent here. >         <#yingji> > >         # NOTE(mikal): cache the keystone roles associated with the instance >         # at boot time for later reference >         instance.system_metadata.update( >             {'boot_roles': ','.join(context.roles)}) > >         self._check_device_tagging(requested_networks, block_device_mapping) >         self._check_trusted_certs(instance) > >         request_group_resource_providers_mapping = \ >             self._get_request_group_mapping(request_spec) > >         if request_group_resource_providers_mapping: >             self._update_pci_request_spec_with_allocated_interface_name( >                 context, instance, request_group_resource_providers_mapping) > >         # TODO(Luyao) cut over to get_allocs_for_consumer >         allocs = self.reportclient.get_allocations_for_consumer( >                 context, instance.uuid) >          >             I see "GET /allocations/" in placement-api.log, >             so, it looks to reach here. >          > >         # My suspect code snippet. >          >         ###################################################################### >         try: >             scheduler_hints = self._get_scheduler_hints(filter_properties, >                                                         request_spec) >             with self.rt.instance_claim(context, instance, node, allocs, >                                         limits): >         # See my comments in instance_claim below. >         ###################################################################### >          > >                 ........ > >                 with self._build_resources(context, instance, >                         requested_networks, security_groups, image_meta, >                         block_device_mapping, >                         request_group_resource_providers_mapping) as resources: >                     instance.vm_state = vm_states.BUILDING >                     instance.task_state = task_states.SPAWNING >                     # NOTE(JoshNang) This also saves the changes to the >                     # instance from _allocate_network_async, as they aren't >                     # saved in that function to prevent races. >                     instance.save(expected_task_state= >                             task_states.BLOCK_DEVICE_MAPPING) >                     block_device_info = resources['block_device_info'] >                     network_info = resources['network_info'] >                     LOG.debug('Start spawning the instance on the hypervisor.', >                               instance=instance) >                     with timeutils.StopWatch() as timer: > >          >           The driver code starts here. >           However in my case, it looks not reach here. 
>          >                         self.driver.spawn(context, instance, image_meta, >                                           injected_files, admin_password, >                                           allocs, network_info=network_info, >                                           block_device_info=block_device_info) > > > so this synchronized decorator prints a log message which you should see when it is aquired and released https://github.com/openstack/oslo.concurrency/blob/4da91987d6ce7de2bb61c6ed760a019961a0a344/oslo_concurrency/lockutils.py#L355-L371 you should see that in the logs. i notice also that in your code you do not have the fair=true argmument on master and for a few releases now we have enable the use of fair locking with https://github.com/openstack/nova/commit/1ed9f9dac59c36cdda54a9852a1f93939b3ebbc3 to resolve long delays in the ironic diriver https://bugs.launchpad.net/nova/+bug/1864122 but the same issues would also affect vmware or any other clustered hypervior where the resouce tracker is manageing multiple nodes. its very possible that that is what is causing your current issues. >     @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE) >     def instance_claim(self, context, instance, nodename, allocations, >                        limits=None): >         ...... >         if self.disabled(nodename): >             # instance_claim() was called before update_available_resource() >             # (which ensures that a compute node exists for nodename). We >             # shouldn't get here but in case we do, just set the instance's >             # host and nodename attribute (probably incorrect) and return a >             # NoopClaim. >             # TODO(jaypipes): Remove all the disabled junk from the resource >             # tracker. Servicegroup API-level active-checking belongs in the >             # nova-compute manager. >             self._set_instance_host_and_node(instance, nodename) >             return claims.NopClaim() > >         # sanity checks: >         if instance.host: >             LOG.warning("Host field should not be set on the instance " >                         "until resources have been claimed.", >                         instance=instance) > >         if instance.node: >             LOG.warning("Node field should not be set on the instance " >                         "until resources have been claimed.", >                         instance=instance) > >         cn = self.compute_nodes[nodename] >          >           I did not see the rabbitmq messsge that should be sent here. >          >         pci_requests = objects.InstancePCIRequests.get_by_instance_uuid( >             context, instance.uuid) > > > Yingji. > > > 在 4/26/21 下午10:46,“Sean Mooney” 写入: > > > > 7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=6LOZaIYlm8ZHXZ%2FZS44fyhkAgrbHV8MVjfKf6pkeTwc%3D&reserved=0 > > > > > > > > > This issue is not always reproducible and restarting the compute service can work around this. > > > > > > Could you please give any suggestion on how to resolve this issue or how I can investigate ? > > i assume this is with the vmware driver? > > it kind of sounds like eventlet is not monkey patching properly and its blocking on a call or something like that. > > > > we have seen this in the pass when talking to libvirt where we were not properly proxying calls into the libvirt lib > and as a result we would end up blocking in the compute agent when making some external calls to libvirt. 
> > > i wonder if you are seing something similar? > > > > > > Yingji. > > >   > > > > > > > > > > > > > > > > > > > > > > > > > From ricolin at ricolky.com Tue Apr 27 09:07:43 2021 From: ricolin at ricolky.com (Rico Lin) Date: Tue, 27 Apr 2021 17:07:43 +0800 Subject: [Multi-arch SIG][sig][PTG] PTG Summary. Next meeting: May, 11. Message-ID: Dear all Before we starts, just like to notify that we're not running meeting this week, but will run meeting at May, 11. So please join and provide topics if you got any: https://etherpad.opendev.org/p/Multi-Arch-agenda As we just have our PTG meeting last week, and I thank everyone who joins our PTG session. Here are some summaries from PTG: - Success to run full tempest tests on Arm64 env. What's next? - ML: http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021600.html - Job patch: https://review.opendev.org/c/openstack/devstack/+/708317 - Job status: https://zuul.openstack.org/builds?job_name=devstack-platform-arm64+ - actions: - propose to add swift arm64 UT job: Swift is one of services directly touch file system or basic storage services. - pushing forward for integrating Ceph with current CI job to test more similar to user's environment - tunning volume backup/restore on the current job: - we might try to use POSIX as volume backup backend to testing performance issue. - ricolin and kevinz will take action to run tests locally for debugging - Also will try to switch to OSUOSL environment to check what kind of result we will face. - propose a periodic task for current CI job (once landed): as current job performance is not suitable for gating every patch set, I believe it will make sense to at least have a periodic job running. After performance issue is fixed, we can consider adding gating job or post-merge job. - We should also consider having multi-node job to test cross node. - SIG report (https://www.openstack.org/multi-arch-sig-report) - As general agreement that we definitely will generate report out to keep update the community with works related to multi-arch supports. Also hope to get use cases for share around. - We found some error in SIG report, but they are fixed now. - SIG video meeting - We discuss about the chances to run a video meeting. Once we have more material collected or topics ready. - Check libvirt 7 and cpu stuff - as libvirt 7 released this year, we should check if behavior changes: - There was code added for CPU features on migration - Need to check does 'cpu_mode' different than host-passthrough work now Please reference our PTG etherpad for more detailed discussion: https://etherpad.opendev.org/p/xena-ptg-multi-arch-sig *Rico Lin* OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Tue Apr 27 09:08:57 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 27 Apr 2021 18:08:57 +0900 Subject: [tacker] IRC meeting cancelled on 4th May Message-ID: <0d24c478-a821-4776-82a2-2778ea80a81d@gmail.com> Hi tacker team, It's a holiday season next week in Japan, and most of us are not going to join the next IRC meeting. We will cancel the meeting on 4th May. 
Cheers, Yasufumi From Istvan.Szabo at agoda.com Tue Apr 27 09:17:34 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Tue, 27 Apr 2021 09:17:34 +0000 Subject: Live migration fails In-Reply-To: References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> <20210422060127.Horde.IA0j7WyO6k1W5b6eXaUmVrf@webmail.nde.ag> <3f619959d66535fa1745dd320c40a10addb20608.camel@redhat.com> Message-ID: Hi, We are trying to live migrate instances out from compute nodes and tries to automate but seems like can't really do, when the migration stuck. Let me explain the issue a bit: 1. We initiate live migration 2. live migration finished, the machine disappeared from the /var/lib/nova/instances/ directory on the source server. 3. but when I query or see in horizon it stucked in migrating phase. We collected information like migration id and we try to force it but it is already finished, and can't force to complete. 4. I've restarted the nova service on the source node, it just make the machine to error phase, and the force not working also. 5. I changed the state from error to active but that one also can't force complete. What can I do to change the name of the compute node in the DB? How can I force it without touching the db? The goal is to automate the compute node draining as less as possible user intervention. Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Szabo, Istvan (Agoda) Sent: Friday, April 23, 2021 9:13 AM To: Sean Mooney ; openstack-discuss at lists.openstack.org Subject: RE: Live migration fails My /etc/hostname has only short name. The nova.conf host value is also short name. The host has been selected by the scheduler: nova live-migration --block-migrate 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0 What has been changed is in the instances table in the nova DB the node field of the vm. So actually I don't change the compute host value just edited the VM value actually. Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Sean Mooney Sent: Thursday, April 22, 2021 4:13 PM To: openstack-discuss at lists.openstack.org Subject: Re: Live migration fails On Thu, 2021-04-22 at 06:01 +0000, Eugen Block wrote: > Yeah, the column "node" has the FQDN in my DB, too, only "host" is the > short name. The question is how did the short name get into the "node" > column, but it will probably be difficult to get to the bottom of that. well by default we do not expect to have FQDNs in either filed. novas default for both is the hostname of the host which will be the short name not the fqdn unless you set an fqdn in /etc/hostname which is not generally the recommended pratice. nova in general does nto support changing the hostname(/etc/hostname) of a host and you should avoid changeing the "host" value in the nova.conf too. changing these values can result in the creation fo addtional placment RP, compute service records and compute nodes and that can result in hard to fix situation wehre old vms are using one set of resouce and new vms are using the updated ones. so you should not modify either value in the db. 
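A non-destructive way to check the host/node naming question raised here is to compare what the compute API itself reports, rather than editing the database; a minimal openstacksdk sketch, where the clouds.yaml entry name is an assumption:

import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical clouds.yaml entry

# nova-compute service records (the "host" value)
for svc in conn.compute.services():
    print("service:", svc.binary, "on host:", svc.host)

# hypervisor records (the "node" / hypervisor_hostname value)
for hv in conn.compute.hypervisors():
    print("hypervisor:", hv.name)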
did you perhaps specify a host when live migrating and just pass the wrong value or was the host selected by the scheduler. > > > Zitat von "Szabo, Istvan (Agoda)" : > > > I think I found the issue, in the instances nova db in the node > > column the compute node name somehow changed to short hostname. It > > works fith FQDN but it doesn't work with short ☹ I hope I didn't > > mess-up anything if I change to FQDN to make it work. > > > > Istvan Szabo > > Senior Infrastructure Engineer > > --------------------------------------------------- > > Agoda Services Co., Ltd. > > e: istvan.szabo at agoda.com > > --------------------------------------------------- > > > > -----Original Message----- > > From: Szabo, Istvan (Agoda) > > Sent: Thursday, April 22, 2021 11:19 AM > > To: Eugen Block > > Cc: openstack-discuss at lists.openstack.org > > Subject: RE: Live migration fails > > > > Sorry, in the log I haven't commented out the servername ☹ it is > > xy-osfecn-40250 > > > > Istvan Szabo > > Senior Infrastructure Engineer > > --------------------------------------------------- > > Agoda Services Co., Ltd. > > e: istvan.szabo at agoda.com > > --------------------------------------------------- > > > > -----Original Message----- > > From: Eugen Block > > Sent: Wednesday, April 21, 2021 5:37 PM > > To: Szabo, Istvan (Agoda) > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: Live migration fails > > > > The error message seems correct, I can't find am-osfecn-4025 either > > in the list of compute nodes. Can you check in the database if > > there's an active instance (or several) allocated to that compute > > node? In that case you would need to correct the allocation in order > > for the migration to work. > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > Sure: > > > > > > https://jpst.it/2u3uh > > > > > > These are the one where can't live migrate: > > > xy-osfecn-40250 > > > xy-osfecn-40281 > > > xy-osfecn-40290 > > > xy-osbecn-40073 > > > xy-osfecn-40238 > > > > > > The compute service are disabled on these because we don't want > > > anybody spawn a vm on these anymore so want to evacuate all vms. > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Eugen Block > > > Sent: Wednesday, April 21, 2021 3:26 PM > > > To: openstack-discuss at lists.openstack.org > > > Subject: Re: Live migration fails > > > > > > Hi, > > > > > > can you share the output of these commands? > > > > > > nova-manage cell_v2 list_hosts > > > openstack compute service list > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > Hi, > > > > > > > > I have couple of compute nodes where the live migration fails > > > > with existing vms. > > > > When I quickly spawn a vm and try live migration it works so I > > > > assume shouldn't be a big problem with the compute node. > > > > However I have many existing vms where it fails with a > > > > servername not found. 
> > > > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > ERROR nova.conductor.tasks.migrate > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > Unable to find record for source node servername on servername: > > > > ComputeHostNotFound: Compute host servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > WARNING nova.scheduler.utils > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] Failed to > > > > compute_task_migrate_server: Compute host servername could not > > > > be found.: ComputeHostNotFound: Compute host servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > WARNING nova.scheduler.utils > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > Setting instance to ACTIVE state.: ComputeHostNotFound: Compute > > > > host servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 > > > > ERROR oslo_messaging.rpc.server > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] Exception during message handling: > > > > ComputeHostNotFound: Compute host am-osfecn-4025 > > > > > > > > Tried with this command: > > > > > > > > nova live-migration --block-migrate id. > > > > > > > > Any idea? > > > > > > > > Thank you. > > > > > > > > ________________________________ This message is confidential > > > > and is for the sole use of the intended recipient(s). It may > > > > also be privileged or otherwise protected by copyright or other > > > > legal rules. If you have received it by mistake please let us > > > > know by reply email and delete it from your system. It is > > > > prohibited to copy this message or disclose its content to anyone. > > > > Any confidentiality or privilege is not waived or lost by any > > > > mistaken delivery or unauthorized disclosure of the message. All > > > > messages sent to and from Agoda may be monitored to ensure > > > > compliance with company policies, to protect the company's > > > > interests and to remove potential malware. Electronic messages > > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > ________________________________ > > > This message is confidential and is for the sole use of the > > > intended recipient(s). It may also be privileged or otherwise > > > protected by copyright or other legal rules. If you have received > > > it by mistake please let us know by reply email and delete it from > > > your system. It is prohibited to copy this message or disclose its content to anyone. > > > Any confidentiality or privilege is not waived or lost by any > > > mistaken delivery or unauthorized disclosure of the message. All > > > messages sent to and from Agoda may be monitored to ensure > > > compliance with company policies, to protect the company's > > > interests and to remove potential malware. Electronic messages may > > > be intercepted, amended, lost or deleted, or contain viruses. 
> > > > > > > > > > > > ________________________________ > > This message is confidential and is for the sole use of the intended > > recipient(s). It may also be privileged or otherwise protected by > > copyright or other legal rules. If you have received it by mistake > > please let us know by reply email and delete it from your system. It > > is prohibited to copy this message or disclose its content to > > anyone. Any confidentiality or privilege is not waived or lost by > > any mistaken delivery or unauthorized disclosure of the message. All > > messages sent to and from Agoda may be monitored to ensure > > compliance with company policies, to protect the company's interests > > and to remove potential malware. Electronic messages may be > > intercepted, amended, lost or deleted, or contain viruses. > > > > ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. From Istvan.Szabo at agoda.com Tue Apr 27 09:23:31 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Tue, 27 Apr 2021 09:23:31 +0000 Subject: Live migration fails In-Reply-To: References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> <20210422060127.Horde.IA0j7WyO6k1W5b6eXaUmVrf@webmail.nde.ag> <3f619959d66535fa1745dd320c40a10addb20608.camel@redhat.com> Message-ID: Sorry, I missed the query where the vm can be found after the migration: It can be found in the information_schema db process_list but I guess this is just the query. And it can be found in the nova db instances table and here I need to change the node name to be correct. Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Szabo, Istvan (Agoda) Sent: Tuesday, April 27, 2021 4:18 PM To: Szabo, Istvan (Agoda) ; Sean Mooney ; openstack-discuss at lists.openstack.org Subject: RE: Live migration fails Hi, We are trying to live migrate instances out from compute nodes and tries to automate but seems like can't really do, when the migration stuck. Let me explain the issue a bit: 1. We initiate live migration 2. live migration finished, the machine disappeared from the /var/lib/nova/instances/ directory on the source server. 3. but when I query or see in horizon it stucked in migrating phase. We collected information like migration id and we try to force it but it is already finished, and can't force to complete. 4. I've restarted the nova service on the source node, it just make the machine to error phase, and the force not working also. 5. I changed the state from error to active but that one also can't force complete. What can I do to change the name of the compute node in the DB? 
How can I force it without touching the db? The goal is to automate the compute node draining as less as possible user intervention. Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Szabo, Istvan (Agoda) Sent: Friday, April 23, 2021 9:13 AM To: Sean Mooney ; openstack-discuss at lists.openstack.org Subject: RE: Live migration fails My /etc/hostname has only short name. The nova.conf host value is also short name. The host has been selected by the scheduler: nova live-migration --block-migrate 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0 What has been changed is in the instances table in the nova DB the node field of the vm. So actually I don't change the compute host value just edited the VM value actually. Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Sean Mooney Sent: Thursday, April 22, 2021 4:13 PM To: openstack-discuss at lists.openstack.org Subject: Re: Live migration fails On Thu, 2021-04-22 at 06:01 +0000, Eugen Block wrote: > Yeah, the column "node" has the FQDN in my DB, too, only "host" is the > short name. The question is how did the short name get into the "node" > column, but it will probably be difficult to get to the bottom of that. well by default we do not expect to have FQDNs in either filed. novas default for both is the hostname of the host which will be the short name not the fqdn unless you set an fqdn in /etc/hostname which is not generally the recommended pratice. nova in general does nto support changing the hostname(/etc/hostname) of a host and you should avoid changeing the "host" value in the nova.conf too. changing these values can result in the creation fo addtional placment RP, compute service records and compute nodes and that can result in hard to fix situation wehre old vms are using one set of resouce and new vms are using the updated ones. so you should not modify either value in the db. did you perhaps specify a host when live migrating and just pass the wrong value or was the host selected by the scheduler. > > > Zitat von "Szabo, Istvan (Agoda)" : > > > I think I found the issue, in the instances nova db in the node > > column the compute node name somehow changed to short hostname. It > > works fith FQDN but it doesn't work with short ☹ I hope I didn't > > mess-up anything if I change to FQDN to make it work. > > > > Istvan Szabo > > Senior Infrastructure Engineer > > --------------------------------------------------- > > Agoda Services Co., Ltd. > > e: istvan.szabo at agoda.com > > --------------------------------------------------- > > > > -----Original Message----- > > From: Szabo, Istvan (Agoda) > > Sent: Thursday, April 22, 2021 11:19 AM > > To: Eugen Block > > Cc: openstack-discuss at lists.openstack.org > > Subject: RE: Live migration fails > > > > Sorry, in the log I haven't commented out the servername ☹ it is > > xy-osfecn-40250 > > > > Istvan Szabo > > Senior Infrastructure Engineer > > --------------------------------------------------- > > Agoda Services Co., Ltd. 
> > e: istvan.szabo at agoda.com > > --------------------------------------------------- > > > > -----Original Message----- > > From: Eugen Block > > Sent: Wednesday, April 21, 2021 5:37 PM > > To: Szabo, Istvan (Agoda) > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: Live migration fails > > > > The error message seems correct, I can't find am-osfecn-4025 either > > in the list of compute nodes. Can you check in the database if > > there's an active instance (or several) allocated to that compute > > node? In that case you would need to correct the allocation in order > > for the migration to work. > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > Sure: > > > > > > https://jpst.it/2u3uh > > > > > > These are the one where can't live migrate: > > > xy-osfecn-40250 > > > xy-osfecn-40281 > > > xy-osfecn-40290 > > > xy-osbecn-40073 > > > xy-osfecn-40238 > > > > > > The compute service are disabled on these because we don't want > > > anybody spawn a vm on these anymore so want to evacuate all vms. > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Eugen Block > > > Sent: Wednesday, April 21, 2021 3:26 PM > > > To: openstack-discuss at lists.openstack.org > > > Subject: Re: Live migration fails > > > > > > Hi, > > > > > > can you share the output of these commands? > > > > > > nova-manage cell_v2 list_hosts > > > openstack compute service list > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > Hi, > > > > > > > > I have couple of compute nodes where the live migration fails > > > > with existing vms. > > > > When I quickly spawn a vm and try live migration it works so I > > > > assume shouldn't be a big problem with the compute node. > > > > However I have many existing vms where it fails with a > > > > servername not found. > > > > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > ERROR nova.conductor.tasks.migrate > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > Unable to find record for source node servername on servername: > > > > ComputeHostNotFound: Compute host servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > WARNING nova.scheduler.utils > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] Failed to > > > > compute_task_migrate_server: Compute host servername could not > > > > be found.: ComputeHostNotFound: Compute host servername could not be found. > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > WARNING nova.scheduler.utils > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > Setting instance to ACTIVE state.: ComputeHostNotFound: Compute > > > > host servername could not be found. 
> > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 > > > > ERROR oslo_messaging.rpc.server > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > dce35e6eceea4312bb0baa0510cef363 > > > > ca7e35079f4440c78bd9870724b9638b - default default] Exception during message handling: > > > > ComputeHostNotFound: Compute host am-osfecn-4025 > > > > > > > > Tried with this command: > > > > > > > > nova live-migration --block-migrate id. > > > > > > > > Any idea? > > > > > > > > Thank you. > > > > > > > > ________________________________ This message is confidential > > > > and is for the sole use of the intended recipient(s). It may > > > > also be privileged or otherwise protected by copyright or other > > > > legal rules. If you have received it by mistake please let us > > > > know by reply email and delete it from your system. It is > > > > prohibited to copy this message or disclose its content to anyone. > > > > Any confidentiality or privilege is not waived or lost by any > > > > mistaken delivery or unauthorized disclosure of the message. All > > > > messages sent to and from Agoda may be monitored to ensure > > > > compliance with company policies, to protect the company's > > > > interests and to remove potential malware. Electronic messages > > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > ________________________________ > > > This message is confidential and is for the sole use of the > > > intended recipient(s). It may also be privileged or otherwise > > > protected by copyright or other legal rules. If you have received > > > it by mistake please let us know by reply email and delete it from > > > your system. It is prohibited to copy this message or disclose its content to anyone. > > > Any confidentiality or privilege is not waived or lost by any > > > mistaken delivery or unauthorized disclosure of the message. All > > > messages sent to and from Agoda may be monitored to ensure > > > compliance with company policies, to protect the company's > > > interests and to remove potential malware. Electronic messages may > > > be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > ________________________________ > > This message is confidential and is for the sole use of the intended > > recipient(s). It may also be privileged or otherwise protected by > > copyright or other legal rules. If you have received it by mistake > > please let us know by reply email and delete it from your system. It > > is prohibited to copy this message or disclose its content to > > anyone. Any confidentiality or privilege is not waived or lost by > > any mistaken delivery or unauthorized disclosure of the message. All > > messages sent to and from Agoda may be monitored to ensure > > compliance with company policies, to protect the company's interests > > and to remove potential malware. Electronic messages may be > > intercepted, amended, lost or deleted, or contain viruses. > > > > ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. 
All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. From mrunge at matthias-runge.de Tue Apr 27 09:29:37 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Tue, 27 Apr 2021 11:29:37 +0200 Subject: [telemetry] Retire panko Message-ID: <035722a2-7860-6469-be83-240aa4a72ff3@matthias-runge.de> Hi there, over the past couple of cycles, we have seen decreasing interest on panko. Also it has some debts, which were just carried over from the early days. We discussed over at the PTG and didn't really found a reason to keep it alive or included under OpenStack. With that, it also makes sense to retire puppet-panko. I'll wait for 1-2 weeks and propose appropriate patches to get it retired then, if I don't hear anything against it or if there are any takers. Matthias From songwenping at inspur.com Tue Apr 27 09:52:58 2021 From: songwenping at inspur.com (=?gb2312?B?QWxleCBTb25nICjLzs7Exr0p?=) Date: Tue, 27 Apr 2021 09:52:58 +0000 Subject: cyborg launchpad suppport Message-ID: Hi, We change to use launchpad to track cyborg bugs and features, with this patch merged: https://review.opendev.org/c/openstack/project-config/+/787306. But we cannot count on the website: https://www.stackalytics.io/ Please help us with this problem. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3774 bytes Desc: not available URL: From bkslash at poczta.onet.pl Tue Apr 27 09:57:19 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Tue, 27 Apr 2021 11:57:19 +0200 Subject: [Vitrage] Multiregion In-Reply-To: References: <4D95094A-DA7C-4FE7-A207-EAE1593F342A@poczta.onet.pl> <82838909-EE0D-49FC-99CA-F143F2ED9944@poczta.onet.pl> <0EEA92BF-2C77-40B4-A015-EE40A1D60D76@poczta.onet.pl> Message-ID: <973C7E83-D00B-4842-91C3-99F7887E889B@poczta.onet.pl> Thanks again. In kolla there is an option to completely disable Horizon’s Vitrage menu (in globals.yaml: enable_horizon_vitrage, which sets env variable disabling or enabling menu globally for all users), maybe policy will do the trick, but it requires a lot of testing, that’s why I was searching for quicker solution ;) Best regards Adam > Wiadomość napisana przez Eyal B w dniu 27.04.2021, o godz. 09:26: > > HI, > > Vitrage is supposed to support multi-tenancy so the tenant in horizon should see only its own resources. > Regarding horizon vitrage menu, I don't know, maybe you can write some code to horizon that can disable the vitrage menu for certain users. > You need to consult with the horizon guys for that if there is an option for that. > > Eyal > > On Tue, Apr 27, 2021 at 9:57 AM Adam Tomas > wrote: > Hi Eyal, thank you for your answer. > So it’s pointless to install Vitrage in a region other than „parent” region :( Is there any way to prevent users (other than admin, let’s say with member role) from seeing „Vitrage” menu in Horizon? If I put role:admin in horizon’s vitrage_policy.json file (for all options) would it make the Vitrage menu disappear for users other than admin? > > Best regards, > Adam > >> Wiadomość napisana przez Eyal B > w dniu 26.04.2021, o godz. 17:35: >> >> Hi, >> >> Currently Vitrage supports only one region to get the data from. 
The region is configured in vitrage.conf under section [service_credentials] >> and is used by its clients to get the data >> >> Eyal >> >> On Mon, Apr 26, 2021 at 5:50 PM Adam Tomas > wrote: >> >> Hi, >> after deploying Vitrage in multi region environment Horizon always uses first public Vitrage endpoint (I have one public endpoint in each region) regardless if I’m logged in first or second region. So in both regions I see exactly the same entity map, etc. (from the first region). When I disable this endpoint, Horizon uses next one - and again I see the same things in both regions but this time from the second region. Horizon should check in which region I’m logged in and display Vitrage data for that region - right? So what’s wrong? >> >> Best regards >> Adam Tomas >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xin-ran.wang at intel.com Tue Apr 27 10:17:46 2021 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Tue, 27 Apr 2021 10:17:46 +0000 Subject: [venus][ptg]Venus vPTG Summary April 2021 In-Reply-To: <87a323fff39c43b197d328f69a7f1638@inspur.com> References: <87a323fff39c43b197d328f69a7f1638@inspur.com> Message-ID: Thanks for your sharing during PTG, it is impressive. Hope we can have more communication in the future. Thanks, Xin-Ran From: Liye Pang(逄立业) Sent: Tuesday, April 27, 2021 9:59 AM To: openstack-discuss at lists.openstack.org Subject: [venus][ptg]Venus vPTG Summary April 2021 Hi all, Thanks for all your participation! We've conducted the first meeting of venus last week and it is successful. Here's the summary of the topics we have discussed: *Progress in the past six months: - Develop devstack-based deployment for venus - Add log retrieval of modules such as vitrage - Develop the configuration, based on which you can retrieve the chain log of the call *Next step: - Develop alarm task code to set threshold for the number of error logs of different modules at different times, and provides alarm services and notification services - The configuration, analysis and alarm of Venus will be integrated into horizon in the form of plugin. - Develop a deployment method based on kolla-ansible - (discuss)Summarize the log specifications of some typical scenarios and develop them to venus - Evaluate whether to collect event data (considering usage, pressure, etc.We will continue to discuss with the telemetry project team) *Promblem: - Many projects's log records are not standardized, so they can only support full-text retrieval, not multi-dimensional analysis Full discussion (PTG etherpad): [1] https://etherpad.opendev.org/p/venus-xena-ptg Introduction of venus: [2] https://youtu.be/mE2MoEx3awM -------------- next part -------------- An HTML attachment was scrubbed... URL: From tecno at charne.net Tue Apr 27 10:51:28 2021 From: tecno at charne.net (Tecnologia Charne.Net) Date: Tue, 27 Apr 2021 07:51:28 -0300 Subject: [glance][ceph] Openstack Wallaby and Ceph Pacific- ERROR cinder.scheduler.flows.create_volume In-Reply-To: References: Message-ID: Hello! I'm working with Openstack Wallaby (1 controller, 2 compute nodes) connected to Ceph Pacific cluster in a devel environment. With Openstack Victoria and Ceph Pacific (before last friday update) everything was running like a charm. Then, I upgraded Openstack to Wallaby and Ceph  to version 16.2.1. (Because of auth_allow_insecure_global_id_reclaim I had to upgrade many clients... but that's another story...) 
After upgrade, when I try to create a volume from image,      openstack volume create --image f1df058d-be99-4401-82d9-4af9410744bc debian10_volume1 --size 5 with "show_image_direct_url = True", I get "No valid backend" in /var/log/cinder/cinder-scheduler.log 2021-04-26 20:35:24.957 41348 ERROR cinder.scheduler.flows.create_volume [req-651937e5-148f-409c-8296-33f200892e48 c048e887df994f9cb978554008556546 f02ae99c34cf44fd8ab3b1fd1b3be964 - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. Exceeded max scheduling attempts 3 for resource 56fbb645-2c34-477d-9a59-beec78f4fd3f: cinder.exception.NoValidBackend: No valid backend was found. Exceeded max scheduling attempts 3 for resource 56fbb645-2c34-477d-9a59-beec78f4fd3f and 2021-04-26 20:35:24.968 41347 ERROR oslo_messaging.rpc.server [req-651937e5-148f-409c-8296-33f200892e48 c048e887df994f9cb978554008556546 f02ae99c34cf44fd8ab3b1fd 1b3be964 - - -] Exception during message handling: rbd.InvalidArgument: [errno 22] RBD invalid argument (error creating clone) in /var/log/cinder/cinder-volume.log If I disable "show_image_direct_url = False", volume creation from image works fine. I have spent the last four days googling and reading lots of docs, old and new ones, unlucly... Does anybody have a clue, (please)? Thanks in advance! Javier.- From smooney at redhat.com Tue Apr 27 10:53:41 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 27 Apr 2021 11:53:41 +0100 Subject: Live migration fails In-Reply-To: References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> <20210422060127.Horde.IA0j7WyO6k1W5b6eXaUmVrf@webmail.nde.ag> <3f619959d66535fa1745dd320c40a10addb20608.camel@redhat.com> Message-ID: <79d897d2d7106df9ca5dcb32c152b7bdaeb01661.camel@redhat.com> On Tue, 2021-04-27 at 09:17 +0000, Szabo, Istvan (Agoda) wrote: > Hi, > > We are trying to live migrate instances out from compute nodes and tries to automate but seems like can't really do, when the migration stuck. > Let me explain the issue a bit: > > 1. We initiate live migration > 2. live migration finished, the machine disappeared from the /var/lib/nova/instances/ directory on the source server. > 3. but when I query or see in horizon it stucked in migrating phase. We collected information like migration id and we try to force it but it is already finished, and can't force to complete. > 4. I've restarted the nova service on the source node, it just make the machine to error phase, and the force not working also. > 5. I changed the state from error to active but that one also can't force complete. > > What can I do to change the name of the compute node in the DB? > you should not change the name of the compute node in the db. we do not support changing the compute node name if it has instances on it. if you ment in the migration record you also should not change it as the resouces woudl not be claimed correctly. > How can I force it without touching the db? > i dont think you can fix it without touching the db. so if the vm is removed form the source node there are 2 things you chould check 1 is the instance.host set to the dest host where it is now running 2 if you look in the logs was there an error in post live migrate. baiscaly what i think was the most likely issue is that an operation in post live migrate failed before the migations recored was set to complete. 
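For illustration, a quick way to check (1) is to look at which host/node the instance record currently points to, either through the admin API or read-only against the cell database. <instance-uuid> is a placeholder, and the commands assume admin credentials and the default "nova" schema:

  openstack server show <instance-uuid> -c OS-EXT-SRV-ATTR:host -c OS-EXT-SRV-ATTR:hypervisor_hostname -c OS-EXT-STS:vm_state -c OS-EXT-STS:task_state

  mysql nova -e "SELECT host, node, vm_state, task_state FROM instances WHERE uuid='<instance-uuid>';"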
the precondiotns for force complete are The server OS-EXT-STS:vm_state value must be active and the server OS-EXT-STS:task_state value must be migrating. https://docs.openstack.org/api-ref/compute/?expanded=force-migration-complete-action-force-complete-action-detail#force-migration-complete-action-force-complete-action if the instance.host matches the host on which it is now rungin then you should be able to set the status and taskstate back to active/migrating respectivly. at which point you can force complete the migration. if the vm is running correctly on the destiatnion host and its host and the instance.host is set correctly it might just be simpler to updte the migration record to complete and ensure the task state is set to none on the instance. if the instace.host still has the source host but its running on the dest host then you should update it to refelct the correct host then mark the migration as complete. all of the above will require at least some db modifcations. > > > The goal is to automate the compute node draining as less as possible user intervention. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Szabo, Istvan (Agoda) > Sent: Friday, April 23, 2021 9:13 AM > To: Sean Mooney ; openstack-discuss at lists.openstack.org > Subject: RE: Live migration fails > > My /etc/hostname has only short name. > The nova.conf host value is also short name. > The host has been selected by the scheduler: nova live-migration --block-migrate 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0 > > What has been changed is in the instances table in the nova DB the node field of the vm. So actually I don't change the compute host value just edited the VM value actually. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Sean Mooney > Sent: Thursday, April 22, 2021 4:13 PM > To: openstack-discuss at lists.openstack.org > Subject: Re: Live migration fails > > On Thu, 2021-04-22 at 06:01 +0000, Eugen Block wrote: > > Yeah, the column "node" has the FQDN in my DB, too, only "host" is the > > short name. The question is how did the short name get into the "node" > > column, but it will probably be difficult to get to the bottom of that. > well by default we do not expect to have FQDNs in either filed. > novas default for both is the hostname of the host which will be the short name not the fqdn unless you set an fqdn in /etc/hostname which is not generally the recommended pratice. > > nova in general does nto support changing the hostname(/etc/hostname) of a host and you should avoid changeing the "host" value in the nova.conf too. > > changing these values can result in the creation fo addtional placment RP, compute service records and compute nodes and that can result in hard to fix situation wehre old vms are using one set of resouce and new vms are using the updated ones. > > so you should not modify either value in the db. > > did you perhaps specify a host when live migrating and just pass the wrong value or was the host selected by the scheduler. 
> > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > I think I found the issue, in the instances nova db in the node > > > column the compute node name somehow changed to short hostname. It > > > works fith FQDN but it doesn't work with short ☹ I hope I didn't > > > mess-up anything if I change to FQDN to make it work. > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Szabo, Istvan (Agoda) > > > Sent: Thursday, April 22, 2021 11:19 AM > > > To: Eugen Block > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: RE: Live migration fails > > > > > > Sorry, in the log I haven't commented out the servername ☹ it is > > > xy-osfecn-40250 > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Eugen Block > > > Sent: Wednesday, April 21, 2021 5:37 PM > > > To: Szabo, Istvan (Agoda) > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: Re: Live migration fails > > > > > > The error message seems correct, I can't find am-osfecn-4025 either > > > in the list of compute nodes. Can you check in the database if > > > there's an active instance (or several) allocated to that compute > > > node? In that case you would need to correct the allocation in order > > > for the migration to work. > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > Sure: > > > > > > > > https://jpst.it/2u3uh > > > > > > > > These are the one where can't live migrate: > > > > xy-osfecn-40250 > > > > xy-osfecn-40281 > > > > xy-osfecn-40290 > > > > xy-osbecn-40073 > > > > xy-osfecn-40238 > > > > > > > > The compute service are disabled on these because we don't want > > > > anybody spawn a vm on these anymore so want to evacuate all vms. > > > > > > > > Istvan Szabo > > > > Senior Infrastructure Engineer > > > > --------------------------------------------------- > > > > Agoda Services Co., Ltd. > > > > e: istvan.szabo at agoda.com > > > > --------------------------------------------------- > > > > > > > > -----Original Message----- > > > > From: Eugen Block > > > > Sent: Wednesday, April 21, 2021 3:26 PM > > > > To: openstack-discuss at lists.openstack.org > > > > Subject: Re: Live migration fails > > > > > > > > Hi, > > > > > > > > can you share the output of these commands? > > > > > > > > nova-manage cell_v2 list_hosts > > > > openstack compute service list > > > > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > > > Hi, > > > > > > > > > > I have couple of compute nodes where the live migration fails > > > > > with existing vms. > > > > > When I quickly spawn a vm and try live migration it works so I > > > > > assume shouldn't be a big problem with the compute node. > > > > > However I have many existing vms where it fails with a > > > > > servername not found. 
> > > > > > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > > ERROR nova.conductor.tasks.migrate > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > > Unable to find record for source node servername on servername: > > > > > ComputeHostNotFound: Compute host servername could not be found. > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > > WARNING nova.scheduler.utils > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] Failed to > > > > > compute_task_migrate_server: Compute host servername could not > > > > > be found.: ComputeHostNotFound: Compute host servername could not be found. > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 227612 > > > > > WARNING nova.scheduler.utils > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > > Setting instance to ACTIVE state.: ComputeHostNotFound: Compute > > > > > host servername could not be found. > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 227612 > > > > > ERROR oslo_messaging.rpc.server > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] Exception during message handling: > > > > > ComputeHostNotFound: Compute host am-osfecn-4025 > > > > > > > > > > Tried with this command: > > > > > > > > > > nova live-migration --block-migrate id. > > > > > > > > > > Any idea? > > > > > > > > > > Thank you. > > > > > > > > > > ________________________________ This message is confidential > > > > > and is for the sole use of the intended recipient(s). It may > > > > > also be privileged or otherwise protected by copyright or other > > > > > legal rules. If you have received it by mistake please let us > > > > > know by reply email and delete it from your system. It is > > > > > prohibited to copy this message or disclose its content to anyone. > > > > > Any confidentiality or privilege is not waived or lost by any > > > > > mistaken delivery or unauthorized disclosure of the message. All > > > > > messages sent to and from Agoda may be monitored to ensure > > > > > compliance with company policies, to protect the company's > > > > > interests and to remove potential malware. Electronic messages > > > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > > > > > > > ________________________________ > > > > This message is confidential and is for the sole use of the > > > > intended recipient(s). It may also be privileged or otherwise > > > > protected by copyright or other legal rules. If you have received > > > > it by mistake please let us know by reply email and delete it from > > > > your system. It is prohibited to copy this message or disclose its content to anyone. > > > > Any confidentiality or privilege is not waived or lost by any > > > > mistaken delivery or unauthorized disclosure of the message. 
All > > > > messages sent to and from Agoda may be monitored to ensure > > > > compliance with company policies, to protect the company's > > > > interests and to remove potential malware. Electronic messages may > > > > be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > ________________________________ > > > This message is confidential and is for the sole use of the intended > > > recipient(s). It may also be privileged or otherwise protected by > > > copyright or other legal rules. If you have received it by mistake > > > please let us know by reply email and delete it from your system. It > > > is prohibited to copy this message or disclose its content to > > > anyone. Any confidentiality or privilege is not waived or lost by > > > any mistaken delivery or unauthorized disclosure of the message. All > > > messages sent to and from Agoda may be monitored to ensure > > > compliance with company policies, to protect the company's interests > > > and to remove potential malware. Electronic messages may be > > > intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > ________________________________ > This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. From mtint.stfc at gmail.com Tue Apr 27 11:45:37 2021 From: mtint.stfc at gmail.com (Michael STFC) Date: Tue, 27 Apr 2021 12:45:37 +0100 Subject: OS X install openstackclient 10.15.7 In-Reply-To: References: Message-ID: <2F38812C-F85D-4904-B740-6D0DDB76909C@gmail.com> Hi I know this is a forum for openstack but I have some issues with OS X and upgrading or reinstalling openstackclient. Any advise welcome. Regards, Michael Tint System Administrator -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: openstack-errors.txt URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Apr 27 12:20:39 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 27 Apr 2021 13:20:39 +0100 Subject: cyborg launchpad suppport - https://www.stackalytics.io In-Reply-To: References: Message-ID: On Tue, 2021-04-27 at 09:52 +0000, Alex Song (宋文平) wrote: >   > > > > > > > > > Hi, > >   We change to use launchpad to track cyborg bugs and features, with this > patch merged: > https://review.opendev.org/c/openstack/project-config/+/787306. But we > cannot count on the website: https://www.stackalytics.io/ > > Please help us with this problem. the main repo for developemt fo stackalytics is https://opendev.org/x/stackalytics https://www.stackalytics.io is a seperate instance that was hostsed when https://www.stackalytics.com stoped updating and is not maintained by the openstack foundation infra team. 
the person that runs it is on this list and contribues upstream so hopefully they will see this and can help > >   > > > > > Thanks. > From jgeng25 at bloomberg.net Mon Apr 26 22:03:33 2021 From: jgeng25 at bloomberg.net (Jing Geng (BLOOMBERG/ 120 PARK)) Date: Mon, 26 Apr 2021 22:03:33 -0000 Subject: =?UTF-8?B?W1dhdGNoZXJdIERhdGFzb3VyY2UgcGx1Z2lucz8=?= Message-ID: <608738B500F501AA00390001_0_70781@msllnjpmsgsv06> Hi There, Currently Watcher supports many kinds of plugins, including actions, cluster data model collector, goals... In fact, it seems everything has plugins except for datasources (metrics). I wonder if there is any reason that so far Watcher doesn't provide plugins for datasources? Another question is, if I'd like to build plugin infrastructure for datasources, which means changing this DataSourceBase class so it extends loadable.Loadable and abc.ABCMeta, like other base classes for plugins, will this be supported by the community? I hope this idea will be supported, especially when there have already been several projects that tried to make datasources more flexible (Formal Datasource Interface, File based Metric Map, Grafana proxy datasource...). It seems to me that we should just provide plugins for datasources so Watcher can work with any metric system that has been deployed. If I missed something and it isn't a good idea, please also let me know why. Looking forward to contributing to the community! Regards, Jing Geng -------------- next part -------------- An HTML attachment was scrubbed... URL: From reyren at 163.com Tue Apr 27 02:02:24 2021 From: reyren at 163.com (Rey) Date: Tue, 27 Apr 2021 10:02:24 +0800 (CST) Subject: [neutron]Really need some help in network aspect Message-ID: <7747bf4b.1757.17911103182.Coremail.reyren@163.com> Hello, I'm not sure this is the right place, but I want to try whether if can I get any useful help. I stuck in some network problem for couple of days, and the problem I listed in https://github.com/projectcalico/calico/issues/4563 In short, the environment is kubernetes over openstack vm, and calico is kubernetes network componenet. However, the container to container network is very very slowly than host to host. So, I confused in questions: 1. Does kernel support double virturalization network? Like container network in vm 2. Is there any way to dig more? Sincerely, Yuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtint.stfc at gmail.com Tue Apr 27 11:43:40 2021 From: mtint.stfc at gmail.com (Michael STFC) Date: Tue, 27 Apr 2021 12:43:40 +0100 Subject: OS X install openstackclient 10.15.7 Message-ID: <30357184-C7DE-42AC-A9E5-F731AD94B6EE@gmail.com> Hi I know this is a forum for openstack but I have some issues with OS X and upgrading or reinstalling openstackclient. Any advise welcome. -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-1.tiff Type: image/tiff Size: 5883870 bytes Desc: not available URL: -------------- next part -------------- —— -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: openstack-errors.txt URL: -------------- next part -------------- Regards, Michael Tint System Administrator From artem.goncharov at gmail.com Tue Apr 27 14:55:56 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 27 Apr 2021 16:55:56 +0200 Subject: OS X install openstackclient 10.15.7 In-Reply-To: <30357184-C7DE-42AC-A9E5-F731AD94B6EE@gmail.com> References: <30357184-C7DE-42AC-A9E5-F731AD94B6EE@gmail.com> Message-ID: <3BECFBA0-7471-4AAC-AD39-681836FA8B3B@gmail.com> Hi, Not sure this will ever work properly. The whole world of open source projects on Mac is relying on brew and virtual environments. Here you seem to really try installing OSC as a global thing. What you can do instead is install local python and use it: - brew install python at 3.9 - /usr/local/bin/python3.9 -m pip install python-openstackclient This works for me. Alternatively just make some virtual environment (python one) and install OSC into it. In this case you would need to take care yourself on first sourcing into this venv. Artem > On 27. Apr 2021, at 13:43, Michael STFC wrote: > > > Hi > > I know this is a forum for openstack but I have some issues with OS X and upgrading or reinstalling openstackclient. > > Any advise welcome. > > > > —— > > > Regards, > > Michael Tint > System Administrator > > > > > > > > From mtint.stfc at gmail.com Tue Apr 27 16:10:23 2021 From: mtint.stfc at gmail.com (Michael STFC) Date: Tue, 27 Apr 2021 17:10:23 +0100 Subject: OS X install openstackclient 10.15.7 In-Reply-To: <3BECFBA0-7471-4AAC-AD39-681836FA8B3B@gmail.com> References: <30357184-C7DE-42AC-A9E5-F731AD94B6EE@gmail.com> <3BECFBA0-7471-4AAC-AD39-681836FA8B3B@gmail.com> Message-ID: <6393F38F-B846-4C00-ACA0-4C3C37B85E07@gmail.com> Hi Artem, Thanks for replying, I had some lack after posting this to the group. Solved it by doing this brew install pyenv Add eval "$(pyenv init -)” to ~/.bashrc pyenv install 3.9.4 pyenv shell 3.9.4 pip3 install netifaces pip3 install --upgrade python-openstackclient pip3 install --upgrade python-heatclient pip3 install --upgrade python-magnumclient Only warning I now get is WARNING: Value for scheme.headers does not match. Please report this to distutils: /Users/bbm17567/.pyenv/versions/3.9.4/include/python3.9/UNKNOWN sysconfig: /Users/bbm17567/.pyenv/versions/3.9.4/include/python3.9 WARNING: Additional context: user = False home = None root = None prefix = None All seemed to be good. I will consider your advise on setting up linux vm and that way I keep Mac clean. Regards, Michael > On 27 Apr 2021, at 15:55, Artem Goncharov wrote: > > Hi, > > Not sure this will ever work properly. The whole world of open source projects on Mac is relying on brew and virtual environments. Here you seem to really try installing OSC as a global thing. > > What you can do instead is install local python and use it: > - brew install python at 3.9 > - /usr/local/bin/python3.9 -m pip install python-openstackclient > > This works for me. > > Alternatively just make some virtual environment (python one) and install OSC into it. In this case you would need to take care yourself on first sourcing into this venv. > > Artem > >> On 27. Apr 2021, at 13:43, Michael STFC wrote: >> >> >> Hi >> >> I know this is a forum for openstack but I have some issues with OS X and upgrading or reinstalling openstackclient. >> >> Any advise welcome. >> >> >> >> —— >> >> >> Regards, >> >> Michael Tint >> System Administrator >> >> >> >> >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
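For reference, the virtual-environment route Artem suggests can be a minimal sketch like the following (the directory name is arbitrary, and it assumes a working python3 from brew or pyenv):

  python3 -m venv ~/.venvs/openstack
  source ~/.venvs/openstack/bin/activate
  pip install --upgrade pip
  pip install python-openstackclient python-heatclient python-magnumclient
  openstack --version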
URL: From Arkady.Kanevsky at dell.com Tue Apr 27 16:20:57 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 27 Apr 2021 16:20:57 +0000 Subject: [neutron][interop][refstack] New tests and capabilities to track in interop In-Reply-To: References: Message-ID: Comments inline. From: Martin Kopec Sent: Monday, April 26, 2021 10:48 AM To: openstack-discuss; Kanevsky, Arkady; Slawek Kaplonski Subject: [neutron][interop][refstack] New tests and capabilities to track in interop [EXTERNAL EMAIL] Hi everyone, I would like to further discuss the topics we covered with the neutron team during the PTG [1]. * adding address_group API capability It's tested by tests in neutron-tempest-plugin. First question is if tests which are not directly in tempest can be a part of a non-add-on marketing program? AK – yes as long as it is commonly supported by the clouds and ditto for users. As long as it meet criteria as defined by Interop. It's possible to move them to tempest though, by the time we do so, could they be marked as advisory? AK – I like that approach. * Shall we include QoS tempest tests since we don't know what share of vendors enable QoS? Could it be an add-on? These tests are also in neutron-tempest-plugin, I assume we're talking about neutron_tempest_plugin.api.test_qos tests. If we want to include these tests, which program should they belong to? Do we wanna create a new one? AK – by definition they belong to OpenStack powered compute and platform. AT some old time ago we talked about program for specific verticals. NFV was considered first but did not go anywhere. Lack of bandwidth to make it happen as well as lack of interest in OpenSTack since OPNFV (currently Anuket) was handling it. [1] https://etherpad.opendev.org/p/neutron-xena-ptg [etherpad.opendev.org] Thanks, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Tue Apr 27 18:14:26 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 27 Apr 2021 21:14:26 +0300 Subject: [openstack-ansible][osa][PTG] OpenStack-Ansible Xena PTG summary Message-ID: <378841619547228@mail.yandex.ru> Hi everyone! First of all I would like to thank everyone for attendance. We have discussed a lot of topics both regarding things that we need to merge before releasing Wallaby (we are trailing behind and may release in up to 3 month after official release) and regarding plans for the Xena. First comes list of things that we agreed to land for W in descending priority: 1. ansible-role-pki. Gerrit topic [1] 2. Centos8-Stream. Gerrit topic [2] 3. Move neutron-server to standalone group in env.d 4. Debian Bullseye. We're currently blocked with infra [3] 5. Senlin tempest test [4] 6. Fix broken roles 7. Implement send_service_user_token and service_token_roles_required Also we agreed to keep ansible-base 2.11.* release for Wallaby, and not switch to the ansible-core yet. List of descisions that were taken for the Xena release: 1. Deprecate Ubuntu Bionic at the beggining of X cycle 2. Deprecate Centos-8 (classic) at the beginning of X cycle 3. Switch MariaDB balancing frontend from HAproxy to MaxScale 4. Drop Nginx for the Keystone and use Apache is all scenarios as most unified solution. 5. Add option to roles to configure systemd timers for cleaning up soft-deleted records in DB 6. Create operating systems support table to clear out possible upgrade paths between releases 7. 
Replace dashes with underscores in group names and dynamic inventory and stop ignoring TRANSFORM_INVALID_GROUP_CHARS. 8. Adding ability to enable prometheus support for MariaDB and HAproxy 9. Revise possibility to support deployments on top of arm64 10. Look into creation common roles for Ansible OpenStack collection. At least write a blueprint/spec regarding scope of these roles. 11. Deploy ARA on localhost for OSA logging, configured by variables [1] https://review.opendev.org/q/topic:%22osa%252Fpki%22+(status:open) [2] https://review.opendev.org/q/topic:%22osa-stream%22+(status:open) [3] https://review.opendev.org/q/topic:%22debian_bullseye%22+(status:open) [4] https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/754045 -- Kind Regards, Dmitriy Rabotyagov From gmann at ghanshyammann.com Tue Apr 27 18:14:34 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 27 Apr 2021 13:14:34 -0500 Subject: [neutron][interop][refstack] New tests and capabilities to track in interop In-Reply-To: References: Message-ID: <179148a3dda.118eddfd8654506.4355687528145230570@ghanshyammann.com> ---- On Mon, 26 Apr 2021 10:48:08 -0500 Martin Kopec wrote ---- > Hi everyone, > I would like to further discuss the topics we covered with the neutron team during > the PTG [1]. > * adding address_group API capability > It's tested by tests in neutron-tempest-plugin. First question is if tests which arenot directly in tempest can be a part of a non-add-on marketing program?It's possible to move them to tempest though, by the time we do so, could they be > marked as advisory? > > * Shall we include QoS tempest tests since we don't know what share of vendorsenable QoS? Could it be an add-on?These tests are also in neutron-tempest-plugin, I assume we're talking aboutneutron_tempest_plugin.api.test_qos tests.If we want to include these tests, which program should they belong to? Do we wannacreate a new one? I remember the discussion on the location of tests required by the interop group 2-3 years back (when heat and dns adds-on were added). We all agreed that having tests in tempest plugins is all fine, they do not need to be in Tempest as such. That is why heat and dns or now manila tests stays in their respective plugins. For neutron also, as there are existing neutron tempest plugin tests we do not need to move them to Tempest and refstack can run it from neutron-tempest-plugin, if we do move then we might break existing upstream or downstream testing scripts running those existing tests from the current location. -gmann > [1] https://etherpad.opendev.org/p/neutron-xena-ptg > > Thanks, > -- > Martin Kopec > Senior Software Quality Engineer > Red Hat EMEA > > > > > > From Arkady.Kanevsky at dell.com Tue Apr 27 19:33:12 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 27 Apr 2021 19:33:12 +0000 Subject: [neutron][interop][refstack] New tests and capabilities to track in interop In-Reply-To: <179148a3dda.118eddfd8654506.4355687528145230570@ghanshyammann.com> References: <179148a3dda.118eddfd8654506.4355687528145230570@ghanshyammann.com> Message-ID: Agree. But can we run both tempest and project plug-ins together for the same project? 
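For illustration, tests from tempest proper and from a plugin such as neutron-tempest-plugin can be selected in a single run with a regex, assuming both are installed in the same environment (the regex below is only an example, not a guideline definition):

  tempest run --regex '(^tempest\.api\.network|^neutron_tempest_plugin\.api)'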
-----Original Message----- From: Ghanshyam Mann Sent: Tuesday, April 27, 2021 1:15 PM To: Martin Kopec Cc: openstack-discuss; Kanevsky, Arkady; Slawek Kaplonski Subject: Re: [neutron][interop][refstack] New tests and capabilities to track in interop [EXTERNAL EMAIL] ---- On Mon, 26 Apr 2021 10:48:08 -0500 Martin Kopec wrote ---- > Hi everyone, > I would like to further discuss the topics we covered with the neutron team during > the PTG [1]. > * adding address_group API capability > It's tested by tests in neutron-tempest-plugin. First question is if tests which arenot directly in tempest can be a part of a non-add-on marketing program?It's possible to move them to tempest though, by the time we do so, could they be > marked as advisory? > > * Shall we include QoS tempest tests since we don't know what share of vendorsenable QoS? Could it be an add-on?These tests are also in neutron-tempest-plugin, I assume we're talking aboutneutron_tempest_plugin.api.test_qos tests.If we want to include these tests, which program should they belong to? Do we wannacreate a new one? I remember the discussion on the location of tests required by the interop group 2-3 years back (when heat and dns adds-on were added). We all agreed that having tests in tempest plugins is all fine, they do not need to be in Tempest as such. That is why heat and dns or now manila tests stays in their respective plugins. For neutron also, as there are existing neutron tempest plugin tests we do not need to move them to Tempest and refstack can run it from neutron-tempest-plugin, if we do move then we might break existing upstream or downstream testing scripts running those existing tests from the current location. -gmann > [1] https://urldefense.com/v3/__https://etherpad.opendev.org/p/neutron-xena-ptg__;!!LpKI!0kxKX2qrtBJop9NFWHvzlJvzFwY4C6P03O-TIqKkamqKSeW9X3rXyDwB2sttrJmqnh5M$ [etherpad[.]opendev[.]org] > > Thanks, > -- > Martin Kopec > Senior Software Quality Engineer > Red Hat EMEA > > > > > > From gmann at ghanshyammann.com Tue Apr 27 19:38:03 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 27 Apr 2021 14:38:03 -0500 Subject: [neutron][interop][refstack] New tests and capabilities to track in interop In-Reply-To: References: <179148a3dda.118eddfd8654506.4355687528145230570@ghanshyammann.com> Message-ID: <17914d6a9bf.1124e22b6656990.6997228222974023298@ghanshyammann.com> ---- On Tue, 27 Apr 2021 14:33:12 -0500 Kanevsky, Arkady wrote ---- > Agree. But can we run both tempest and project plug-ins together for the same project? Yes, we can run. There is no restriction on that, we can provide regex matching those tests in Tempest run CLI or tox commands. -gmann > > -----Original Message----- > From: Ghanshyam Mann > Sent: Tuesday, April 27, 2021 1:15 PM > To: Martin Kopec > Cc: openstack-discuss; Kanevsky, Arkady; Slawek Kaplonski > Subject: Re: [neutron][interop][refstack] New tests and capabilities to track in interop > > > [EXTERNAL EMAIL] > > ---- On Mon, 26 Apr 2021 10:48:08 -0500 Martin Kopec wrote ---- > Hi everyone, > I would like to further discuss the topics we covered with the neutron team during > the PTG [1]. > > * adding address_group API capability > It's tested by tests in neutron-tempest-plugin. First question is if tests which arenot directly in tempest can be a part of a non-add-on marketing program?It's possible to move them to tempest though, by the time we do so, could they be > marked as advisory? 
> > > > * Shall we include QoS tempest tests since we don't know what share of vendorsenable QoS? Could it be an add-on?These tests are also in neutron-tempest-plugin, I assume we're talking aboutneutron_tempest_plugin.api.test_qos tests.If we want to include these tests, which program should they belong to? Do we wannacreate a new one? > > I remember the discussion on the location of tests required by the interop group 2-3 years back (when heat and dns adds-on were added). > We all agreed that having tests in tempest plugins is all fine, they do not need to be in Tempest as such. That is why heat and dns or now manila tests stays in their respective plugins. > > For neutron also, as there are existing neutron tempest plugin tests we do not need to move them to Tempest and refstack can run it from neutron-tempest-plugin, if we do move then we might break existing upstream or downstream testing scripts running those existing tests from the current location. > > -gmann > > > [1] https://urldefense.com/v3/__https://etherpad.opendev.org/p/neutron-xena-ptg__;!!LpKI!0kxKX2qrtBJop9NFWHvzlJvzFwY4C6P03O-TIqKkamqKSeW9X3rXyDwB2sttrJmqnh5M$ [etherpad[.]opendev[.]org] > > Thanks, > -- > Martin Kopec > Senior Software Quality Engineer > Red Hat EMEA > > > > > > > From skaplons at redhat.com Tue Apr 27 20:02:52 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 27 Apr 2021 22:02:52 +0200 Subject: [neutron][interop][refstack] New tests and capabilities to track in interop In-Reply-To: <17914d6a9bf.1124e22b6656990.6997228222974023298@ghanshyammann.com> References: <17914d6a9bf.1124e22b6656990.6997228222974023298@ghanshyammann.com> Message-ID: <7791602.rI3xBskekn@p1> Hi, Dnia wtorek, 27 kwietnia 2021 21:38:03 CEST Ghanshyam Mann pisze: > ---- On Tue, 27 Apr 2021 14:33:12 -0500 Kanevsky, Arkady wrote ---- > > > Agree. But can we run both tempest and project plug-ins together for the same project? > > Yes, we can run. There is no restriction on that, we can provide regex matching those tests in Tempest run CLI or tox commands. Great news for us as we don't need to move the tests to tempest :) Thx. > > -gmann > > > -----Original Message----- > > From: Ghanshyam Mann > > Sent: Tuesday, April 27, 2021 1:15 PM > > To: Martin Kopec > > Cc: openstack-discuss; Kanevsky, Arkady; Slawek Kaplonski > > Subject: Re: [neutron][interop][refstack] New tests and capabilities to track in interop > > > > > > [EXTERNAL EMAIL] > > > > ---- On Mon, 26 Apr 2021 10:48:08 -0500 Martin Kopec wrote ---- > Hi everyone, > I would like to further discuss the topics we > > covered with the neutron team during > the PTG [1]. > > > > * adding address_group API capability > It's tested by tests in neutron-tempest-plugin. First question is if tests which arenot directly in tempest can > > > be a part of a non-add-on marketing program?It's possible to move them to tempest though, by the time we do so, could they be > marked as advisory? > > > > > > * Shall we include QoS tempest tests since we don't know what share of vendorsenable QoS? Could it be an add-on?These tests are also in > > > neutron-tempest-plugin, I assume we're talking aboutneutron_tempest_plugin.api.test_qos tests.If we want to include these tests, which program should > > > they belong to? Do we wannacreate a new one? > > > I remember the discussion on the location of tests required by the interop group 2-3 years back (when heat and dns adds-on were added). 
> > We all agreed that having tests in tempest plugins is all fine, they do not need to be in Tempest as such. That is why heat and dns or now manila tests > > stays in their respective plugins. > > > > For neutron also, as there are existing neutron tempest plugin tests we do not need to move them to Tempest and refstack can run it from > > neutron-tempest-plugin, if we do move then we might break existing upstream or downstream testing scripts running those existing tests from the current > > location. > > > > -gmann > > > > > [1] > > > https://urldefense.com/v3/__https://etherpad.opendev.org/p/neutron-xena-ptg__;!!LpKI!0kxKX2qrtBJop9NFWHvzlJvzFwY4C6P03O-TIqKkamqKSeW9X3rXyDwB2sttrJmqnh > > > 5M$ [etherpad[.]opendev[.]org] > > Thanks, > -- > Martin Kopec > Senior Software Quality Engineer > Red Hat EMEA > > > > > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From Arkady.Kanevsky at dell.com Tue Apr 27 20:18:46 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 27 Apr 2021 20:18:46 +0000 Subject: [Interop][PTG] highlights from Interop PTG Message-ID: Fellow stackers, Thank you for successful Xena PTG. The PTG agenda and notes are at https://etherpad.opendev.org/p/april2021-ptg-interop-refstack Highlights: 1. Ghanshayam and Goutham are now co-chair of Interop along with Arkady 2. Superuser article for Interop is in progress highlighting major changes to Interop program including Rfestack transitioning to Python3, support of refstack of add-on projects, and adding share file system as add-on program. 3. Working on updated to Interop processes, and specifically no need for Board to approve new guidelines, only process changes. * See https://review.opendev.org/c/osf/interop/+/787646 * Discuss updates to all wiki pages starting from https://wiki.openstack.org/wiki/Governance/InteropWG - expect changes in Xena cycle 4. Agreed on how we will list add-on Logo to marketplace listing 5. Working on how to handle in automated matter in refstack admin privileges for setting up for tempest user testing, Manila in particular. 6. Starting discussion on how to handle microiservice versions of APIs. 7. Interlocks with all projects under interop guidelines, except Swift and Keystone. 8. Discussed several possible add-on projects Happy to provide more details. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Tue Apr 27 20:47:46 2021 From: mthode at mthode.org (Matthew Thode) Date: Tue, 27 Apr 2021 15:47:46 -0500 Subject: [all][requirements] sqlalchemy 1.4.x release/upgrade Message-ID: <20210427204746.c4jigic2bru4o3wp@mthode.org> Looks like a new release of sqlalchemy is upon us and is breaking tests in openstack (1.4.11 for now). Please test against https://review.opendev.org/788339 to get your project working against the newest version. currently failing are cinder ironic keystone masakari neutron and nova cross gates Thanks, -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dmsimard at redhat.com Tue Apr 27 23:05:38 2021 From: dmsimard at redhat.com (David Moreau Simard) Date: Tue, 27 Apr 2021 19:05:38 -0400 Subject: [openstack-ansible][osa][PTG] OpenStack-Ansible Xena PTG summary In-Reply-To: <378841619547228@mail.yandex.ru> References: <378841619547228@mail.yandex.ru> Message-ID: On Tue, Apr 27, 2021 at 2:20 PM Dmitriy Rabotyagov wrote: > Also we agreed to keep ansible-base 2.11.* release for Wallaby, and not switch to the ansible-core yet. Just to make sure, do you mean ansible-base 2.10.* ? In 2.11 (released this week) it is renamed to ansible-core so there is no such thing as ansible-base 2.11. > 11. Deploy ARA on localhost for OSA logging, configured by variables If you would like some inspiration, there is ara collection for deploying the server: https://github.com/ansible-community/ara-collection You know where to find me if you have any questions :) David Moreau Simard dmsimard = [irc, github, twitter] From songwenping at inspur.com Wed Apr 28 01:41:56 2021 From: songwenping at inspur.com (=?utf-8?B?QWxleCBTb25nICjlrovmloflubMp?=) Date: Wed, 28 Apr 2021 01:41:56 +0000 Subject: =?utf-8?B?562U5aSNOiBjeWJvcmcgbGF1bmNocGFkIHN1cHBwb3J0IC0gaHR0cHM6Ly93?= =?utf-8?Q?ww.stackalytics.io?= In-Reply-To: References: Message-ID: <04a7461adf0e4e6fa1b3f5aa0a4e54c8@inspur.com> Thanks Sean for your tips. Andrii Ostapenko: Do you know what we need modify in the `stackalytics` project. Thanks. -----邮件原件----- 发件人: Sean Mooney [mailto:smooney at redhat.com] 发送时间: 2021年4月27日 20:21 收件人: openstack-discuss at lists.openstack.org 主题: Re: cyborg launchpad suppport - https://www.stackalytics.io On Tue, 2021-04-27 at 09:52 +0000, Alex Song (宋文平) wrote: > > > > > > > > > > Hi, > > We change to use launchpad to track cyborg bugs and features, with > this patch merged: > https://review.opendev.org/c/openstack/project-config/+/787306. But we > cannot count on the website: https://www.stackalytics.io/ > > Please help us with this problem. the main repo for developemt fo stackalytics is https://opendev.org/x/stackalytics https://www.stackalytics.io is a seperate instance that was hostsed when https://www.stackalytics.com stoped updating and is not maintained by the openstack foundation infra team. the person that runs it is on this list and contribues upstream so hopefully they will see this and can help > > > > > > > Thanks. > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3774 bytes Desc: not available URL: From anost1986 at gmail.com Wed Apr 28 03:22:05 2021 From: anost1986 at gmail.com (Andrii Ostapenko) Date: Tue, 27 Apr 2021 22:22:05 -0500 Subject: cyborg launchpad suppport - https://www.stackalytics.io In-Reply-To: <04a7461adf0e4e6fa1b3f5aa0a4e54c8@inspur.com> References: <04a7461adf0e4e6fa1b3f5aa0a4e54c8@inspur.com> Message-ID: Hi, Sure. Should be there tomorrow with https://review.opendev.org/c/x/stackalytics/+/788397 On Tue, Apr 27, 2021 at 8:43 PM Alex Song (宋文平) wrote: > > Thanks Sean for your tips. > > Andrii Ostapenko: > Do you know what we need modify in the `stackalytics` project. > > Thanks. 
> > -----邮件原件----- > 发件人: Sean Mooney [mailto:smooney at redhat.com] > 发送时间: 2021年4月27日 20:21 > 收件人: openstack-discuss at lists.openstack.org > 主题: Re: cyborg launchpad suppport - https://www.stackalytics.io > > On Tue, 2021-04-27 at 09:52 +0000, Alex Song (宋文平) wrote: > > > > > > > > > > > > > > > > > > > > Hi, > > > > We change to use launchpad to track cyborg bugs and features, with > > this patch merged: > > https://review.opendev.org/c/openstack/project-config/+/787306. But we > > cannot count on the website: https://www.stackalytics.io/ > > > > Please help us with this problem. > > > the main repo for developemt fo stackalytics is > https://opendev.org/x/stackalytics > https://www.stackalytics.io is a seperate instance that was hostsed when > https://www.stackalytics.com stoped updating and is not maintained by the > openstack foundation infra team. > the person that runs it is on this list and contribues upstream so > hopefully they will see this and can help > > > > > > > > > > > > > > > > Thanks. > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Istvan.Szabo at agoda.com Wed Apr 28 03:36:09 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Wed, 28 Apr 2021 03:36:09 +0000 Subject: Live migration fails In-Reply-To: <79d897d2d7106df9ca5dcb32c152b7bdaeb01661.camel@redhat.com> References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> <20210422060127.Horde.IA0j7WyO6k1W5b6eXaUmVrf@webmail.nde.ag> <3f619959d66535fa1745dd320c40a10addb20608.camel@redhat.com> <79d897d2d7106df9ca5dcb32c152b7bdaeb01661.camel@redhat.com> Message-ID: Hi, Answering your question: 1. no, still the old compute node name is in the list 2. This is the log about that machine ID on that day: https://justpaste.it/7usq5 Maybe this network client issue is the cause on the destination host? You can also see that the memory copy took too long and started many time from the beginning. You said you don't support changes in the db. Actually I change this value of a compute node that we are planning to drain and of course I try to avoid to touch. Also it is I think not the compute node entry in the db, it is in the instance table in the nova db. Have a look the screenshot of the entry please: https://i.ibb.co/WKB3sGM/Capture.png We are using the openstack-commands not the API calls but I guess the result is the same. When it was in active migrating state and we tried it we got this: 1. nova live-migration-force-complete 6b3c5ef1-293a-426d-89e5-230f59c2d06f 870 ERROR (BadRequest): Migration 870 state of instance 6b3c5ef1-293a-426d-89e5-230f59c2d06f is completed. Cannot force complete while the migration is in this state. (HTTP 400) (Request-ID: req-e409114b-c4ec-4f25-8ff0-d9dc34460bc9) 2. After I restarted on the source compute node the nova-compute service and it puts the machine to error state, however the vm is still running. nova live-migration-force-complete 6b3c5ef1-293a-426d-89e5-230f59c2d06f 870 ERROR (Conflict): Cannot 'force_complete' instance 6b3c5ef1-293a-426d-89e5-230f59c2d06f while it is in vm_state error (HTTP 409) (Request-ID: req-2f0a2fcd-62be-44ae-bafc-cac952a63c82) 3. 
After changed back to active and tried the migration, and I got this: nova live-migration-force-complete 6b3c5ef1-293a-426d-89e5-230f59c2d06f 870 ERROR (Conflict): Instance 6b3c5ef1-293a-426d-89e5-230f59c2d06f is in an invalid state for 'force_complete' (HTTP 409) (Request-ID: req-3d1631e9-973a-4549-a998-fb1d3d95572a) Just to summarize your options: 1. if the instance.host matches the host on which it is now rungin then you should be able to set the status and taskstate back to active/migrating respectivly. at which point you can force complete the migration. This is not our case unfortunately ☹ 2. if the vm is running correctly on the destiatnion host and its host and the instance.host is set correctly it might just be simpler to updte the migration record to complete and ensure the task state is set to none on the instance. Does it has to be done in the DB? Haven't really find option for this to update the migration record. Or you mean the force complete? 3. if the instace.host still has the source host but its running on the dest host then you should update it to refelct the correct host then mark the migration as complete. I guess it is the force complete also right? Change in the instance table the host and node and force complete? Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Sean Mooney Sent: Tuesday, April 27, 2021 5:54 PM To: openstack-discuss at lists.openstack.org Subject: Re: Live migration fails On Tue, 2021-04-27 at 09:17 +0000, Szabo, Istvan (Agoda) wrote: > Hi, > > We are trying to live migrate instances out from compute nodes and tries to automate but seems like can't really do, when the migration stuck. > Let me explain the issue a bit: > > 1. We initiate live migration > 2. live migration finished, the machine disappeared from the /var/lib/nova/instances/ directory on the source server. > 3. but when I query or see in horizon it stucked in migrating phase. We collected information like migration id and we try to force it but it is already finished, and can't force to complete. > 4. I've restarted the nova service on the source node, it just make the machine to error phase, and the force not working also. > 5. I changed the state from error to active but that one also can't force complete. > > What can I do to change the name of the compute node in the DB? > you should not change the name of the compute node in the db. we do not support changing the compute node name if it has instances on it. if you ment in the migration record you also should not change it as the resouces woudl not be claimed correctly. > How can I force it without touching the db? > i dont think you can fix it without touching the db. so if the vm is removed form the source node there are 2 things you chould check 1 is the instance.host set to the dest host where it is now running 2 if you look in the logs was there an error in post live migrate. baiscaly what i think was the most likely issue is that an operation in post live migrate failed before the migations recored was set to complete. the precondiotns for force complete are The server OS-EXT-STS:vm_state value must be active and the server OS-EXT-STS:task_state value must be migrating. 
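As a hedged illustration (not from the original mail), those two fields can be checked, and the migration then forced, with something along these lines -- the UUID and migration id are the ones quoted earlier in this thread, so substitute your own:

  # show the state fields that force-complete checks
  openstack server show 6b3c5ef1-293a-426d-89e5-230f59c2d06f \
      -c OS-EXT-STS:vm_state -c OS-EXT-STS:task_state -c OS-EXT-SRV-ATTR:host
  # only when vm_state is "active" and task_state is "migrating":
  nova live-migration-force-complete 6b3c5ef1-293a-426d-89e5-230f59c2d06f 870

If either field is not in that state, the API keeps returning the 400/409 conflicts shown above.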
https://docs.openstack.org/api-ref/compute/?expanded=force-migration-complete-action-force-complete-action-detail#force-migration-complete-action-force-complete-action if the instance.host matches the host on which it is now rungin then you should be able to set the status and taskstate back to active/migrating respectivly. at which point you can force complete the migration. if the vm is running correctly on the destiatnion host and its host and the instance.host is set correctly it might just be simpler to updte the migration record to complete and ensure the task state is set to none on the instance. if the instace.host still has the source host but its running on the dest host then you should update it to refelct the correct host then mark the migration as complete. all of the above will require at least some db modifcations. > > > The goal is to automate the compute node draining as less as possible user intervention. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Szabo, Istvan (Agoda) > Sent: Friday, April 23, 2021 9:13 AM > To: Sean Mooney ; > openstack-discuss at lists.openstack.org > Subject: RE: Live migration fails > > My /etc/hostname has only short name. > The nova.conf host value is also short name. > The host has been selected by the scheduler: nova live-migration > --block-migrate 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0 > > What has been changed is in the instances table in the nova DB the node field of the vm. So actually I don't change the compute host value just edited the VM value actually. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Sean Mooney > Sent: Thursday, April 22, 2021 4:13 PM > To: openstack-discuss at lists.openstack.org > Subject: Re: Live migration fails > > On Thu, 2021-04-22 at 06:01 +0000, Eugen Block wrote: > > Yeah, the column "node" has the FQDN in my DB, too, only "host" is > > the short name. The question is how did the short name get into the "node" > > column, but it will probably be difficult to get to the bottom of that. > well by default we do not expect to have FQDNs in either filed. > novas default for both is the hostname of the host which will be the short name not the fqdn unless you set an fqdn in /etc/hostname which is not generally the recommended pratice. > > nova in general does nto support changing the hostname(/etc/hostname) of a host and you should avoid changeing the "host" value in the nova.conf too. > > changing these values can result in the creation fo addtional placment RP, compute service records and compute nodes and that can result in hard to fix situation wehre old vms are using one set of resouce and new vms are using the updated ones. > > so you should not modify either value in the db. > > did you perhaps specify a host when live migrating and just pass the wrong value or was the host selected by the scheduler. > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > I think I found the issue, in the instances nova db in the node > > > column the compute node name somehow changed to short hostname. 
It > > > works fith FQDN but it doesn't work with short ☹ I hope I didn't > > > mess-up anything if I change to FQDN to make it work. > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Szabo, Istvan (Agoda) > > > Sent: Thursday, April 22, 2021 11:19 AM > > > To: Eugen Block > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: RE: Live migration fails > > > > > > Sorry, in the log I haven't commented out the servername ☹ it is > > > xy-osfecn-40250 > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Eugen Block > > > Sent: Wednesday, April 21, 2021 5:37 PM > > > To: Szabo, Istvan (Agoda) > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: Re: Live migration fails > > > > > > The error message seems correct, I can't find am-osfecn-4025 > > > either in the list of compute nodes. Can you check in the database > > > if there's an active instance (or several) allocated to that > > > compute node? In that case you would need to correct the > > > allocation in order for the migration to work. > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > Sure: > > > > > > > > https://jpst.it/2u3uh > > > > > > > > These are the one where can't live migrate: > > > > xy-osfecn-40250 > > > > xy-osfecn-40281 > > > > xy-osfecn-40290 > > > > xy-osbecn-40073 > > > > xy-osfecn-40238 > > > > > > > > The compute service are disabled on these because we don't want > > > > anybody spawn a vm on these anymore so want to evacuate all vms. > > > > > > > > Istvan Szabo > > > > Senior Infrastructure Engineer > > > > --------------------------------------------------- > > > > Agoda Services Co., Ltd. > > > > e: istvan.szabo at agoda.com > > > > --------------------------------------------------- > > > > > > > > -----Original Message----- > > > > From: Eugen Block > > > > Sent: Wednesday, April 21, 2021 3:26 PM > > > > To: openstack-discuss at lists.openstack.org > > > > Subject: Re: Live migration fails > > > > > > > > Hi, > > > > > > > > can you share the output of these commands? > > > > > > > > nova-manage cell_v2 list_hosts > > > > openstack compute service list > > > > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > > > Hi, > > > > > > > > > > I have couple of compute nodes where the live migration fails > > > > > with existing vms. > > > > > When I quickly spawn a vm and try live migration it works so I > > > > > assume shouldn't be a big problem with the compute node. > > > > > However I have many existing vms where it fails with a > > > > > servername not found. > > > > > > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 > > > > > 227612 ERROR nova.conductor.tasks.migrate > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > > Unable to find record for source node servername on servername: > > > > > ComputeHostNotFound: Compute host servername could not be found. 
> > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 > > > > > 227612 WARNING nova.scheduler.utils > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] Failed to > > > > > compute_task_migrate_server: Compute host servername could not > > > > > be found.: ComputeHostNotFound: Compute host servername could not be found. > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 > > > > > 227612 WARNING nova.scheduler.utils > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > > Setting instance to ACTIVE state.: ComputeHostNotFound: > > > > > Compute host servername could not be found. > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 > > > > > 227612 ERROR oslo_messaging.rpc.server > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] Exception during message handling: > > > > > ComputeHostNotFound: Compute host am-osfecn-4025 > > > > > > > > > > Tried with this command: > > > > > > > > > > nova live-migration --block-migrate id. > > > > > > > > > > Any idea? > > > > > > > > > > Thank you. > > > > > > > > > > ________________________________ This message is confidential > > > > > and is for the sole use of the intended recipient(s). It may > > > > > also be privileged or otherwise protected by copyright or > > > > > other legal rules. If you have received it by mistake please > > > > > let us know by reply email and delete it from your system. It > > > > > is prohibited to copy this message or disclose its content to anyone. > > > > > Any confidentiality or privilege is not waived or lost by any > > > > > mistaken delivery or unauthorized disclosure of the message. > > > > > All messages sent to and from Agoda may be monitored to ensure > > > > > compliance with company policies, to protect the company's > > > > > interests and to remove potential malware. Electronic messages > > > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > > > > > > > ________________________________ This message is confidential > > > > and is for the sole use of the intended recipient(s). It may > > > > also be privileged or otherwise protected by copyright or other > > > > legal rules. If you have received it by mistake please let us > > > > know by reply email and delete it from your system. It is > > > > prohibited to copy this message or disclose its content to anyone. > > > > Any confidentiality or privilege is not waived or lost by any > > > > mistaken delivery or unauthorized disclosure of the message. All > > > > messages sent to and from Agoda may be monitored to ensure > > > > compliance with company policies, to protect the company's > > > > interests and to remove potential malware. Electronic messages > > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > ________________________________ > > > This message is confidential and is for the sole use of the > > > intended recipient(s). It may also be privileged or otherwise > > > protected by copyright or other legal rules. If you have received > > > it by mistake please let us know by reply email and delete it from > > > your system. 
It is prohibited to copy this message or disclose its > > > content to anyone. Any confidentiality or privilege is not waived > > > or lost by any mistaken delivery or unauthorized disclosure of the > > > message. All messages sent to and from Agoda may be monitored to > > > ensure compliance with company policies, to protect the company's > > > interests and to remove potential malware. Electronic messages may > > > be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > ________________________________ > This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. From Istvan.Szabo at agoda.com Wed Apr 28 04:28:59 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Wed, 28 Apr 2021 04:28:59 +0000 Subject: Live migration fails In-Reply-To: References: <20210421082552.Horde.jW8NZ_TsVjSodSi2J_ppxNe@webmail.nde.ag> <20210421103639.Horde.X3XOTa79EibIuWYyD7LPMib@webmail.nde.ag> <20210422060127.Horde.IA0j7WyO6k1W5b6eXaUmVrf@webmail.nde.ag> <3f619959d66535fa1745dd320c40a10addb20608.camel@redhat.com> <79d897d2d7106df9ca5dcb32c152b7bdaeb01661.camel@redhat.com> Message-ID: Plazed around a bit more, so additional comment to my previous mail: When you say: The server OS-EXT-STS:vm_state value must be active and the server OS-EXT-STS:task_state value must be migrating. This I can't achieve. So when I reset the state to active it changed the power state to running from migrating but the source host remains the old. And I think the power state can't be changed like the state: nova reset-state --active 34905d35-1100-47de-a7db-3e1b5e900b9d Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Szabo, Istvan (Agoda) Sent: Wednesday, April 28, 2021 10:36 AM To: Sean Mooney ; openstack-discuss at lists.openstack.org Subject: RE: Live migration fails Hi, Answering your question: 1. no, still the old compute node name is in the list 2. This is the log about that machine ID on that day: https://justpaste.it/7usq5 Maybe this network client issue is the cause on the destination host? You can also see that the memory copy took too long and started many time from the beginning. You said you don't support changes in the db. Actually I change this value of a compute node that we are planning to drain and of course I try to avoid to touch. Also it is I think not the compute node entry in the db, it is in the instance table in the nova db. Have a look the screenshot of the entry please: https://i.ibb.co/WKB3sGM/Capture.png We are using the openstack-commands not the API calls but I guess the result is the same. When it was in active migrating state and we tried it we got this: 1. 
nova live-migration-force-complete 6b3c5ef1-293a-426d-89e5-230f59c2d06f 870 ERROR (BadRequest): Migration 870 state of instance 6b3c5ef1-293a-426d-89e5-230f59c2d06f is completed. Cannot force complete while the migration is in this state. (HTTP 400) (Request-ID: req-e409114b-c4ec-4f25-8ff0-d9dc34460bc9) 2. After I restarted on the source compute node the nova-compute service and it puts the machine to error state, however the vm is still running. nova live-migration-force-complete 6b3c5ef1-293a-426d-89e5-230f59c2d06f 870 ERROR (Conflict): Cannot 'force_complete' instance 6b3c5ef1-293a-426d-89e5-230f59c2d06f while it is in vm_state error (HTTP 409) (Request-ID: req-2f0a2fcd-62be-44ae-bafc-cac952a63c82) 3. After changed back to active and tried the migration, and I got this: nova live-migration-force-complete 6b3c5ef1-293a-426d-89e5-230f59c2d06f 870 ERROR (Conflict): Instance 6b3c5ef1-293a-426d-89e5-230f59c2d06f is in an invalid state for 'force_complete' (HTTP 409) (Request-ID: req-3d1631e9-973a-4549-a998-fb1d3d95572a) Just to summarize your options: 1. if the instance.host matches the host on which it is now rungin then you should be able to set the status and taskstate back to active/migrating respectivly. at which point you can force complete the migration. This is not our case unfortunately ☹ 2. if the vm is running correctly on the destiatnion host and its host and the instance.host is set correctly it might just be simpler to updte the migration record to complete and ensure the task state is set to none on the instance. Does it has to be done in the DB? Haven't really find option for this to update the migration record. Or you mean the force complete? 3. if the instace.host still has the source host but its running on the dest host then you should update it to refelct the correct host then mark the migration as complete. I guess it is the force complete also right? Change in the instance table the host and node and force complete? Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- -----Original Message----- From: Sean Mooney Sent: Tuesday, April 27, 2021 5:54 PM To: openstack-discuss at lists.openstack.org Subject: Re: Live migration fails On Tue, 2021-04-27 at 09:17 +0000, Szabo, Istvan (Agoda) wrote: > Hi, > > We are trying to live migrate instances out from compute nodes and tries to automate but seems like can't really do, when the migration stuck. > Let me explain the issue a bit: > > 1. We initiate live migration > 2. live migration finished, the machine disappeared from the /var/lib/nova/instances/ directory on the source server. > 3. but when I query or see in horizon it stucked in migrating phase. We collected information like migration id and we try to force it but it is already finished, and can't force to complete. > 4. I've restarted the nova service on the source node, it just make the machine to error phase, and the force not working also. > 5. I changed the state from error to active but that one also can't force complete. > > What can I do to change the name of the compute node in the DB? > you should not change the name of the compute node in the db. we do not support changing the compute node name if it has instances on it. if you ment in the migration record you also should not change it as the resouces woudl not be claimed correctly. > How can I force it without touching the db? 
> i dont think you can fix it without touching the db. so if the vm is removed form the source node there are 2 things you chould check 1 is the instance.host set to the dest host where it is now running 2 if you look in the logs was there an error in post live migrate. baiscaly what i think was the most likely issue is that an operation in post live migrate failed before the migations recored was set to complete. the precondiotns for force complete are The server OS-EXT-STS:vm_state value must be active and the server OS-EXT-STS:task_state value must be migrating. https://docs.openstack.org/api-ref/compute/?expanded=force-migration-complete-action-force-complete-action-detail#force-migration-complete-action-force-complete-action if the instance.host matches the host on which it is now rungin then you should be able to set the status and taskstate back to active/migrating respectivly. at which point you can force complete the migration. if the vm is running correctly on the destiatnion host and its host and the instance.host is set correctly it might just be simpler to updte the migration record to complete and ensure the task state is set to none on the instance. if the instace.host still has the source host but its running on the dest host then you should update it to refelct the correct host then mark the migration as complete. all of the above will require at least some db modifcations. > > > The goal is to automate the compute node draining as less as possible user intervention. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Szabo, Istvan (Agoda) > Sent: Friday, April 23, 2021 9:13 AM > To: Sean Mooney ; > openstack-discuss at lists.openstack.org > Subject: RE: Live migration fails > > My /etc/hostname has only short name. > The nova.conf host value is also short name. > The host has been selected by the scheduler: nova live-migration > --block-migrate 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0 > > What has been changed is in the instances table in the nova DB the node field of the vm. So actually I don't change the compute host value just edited the VM value actually. > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > -----Original Message----- > From: Sean Mooney > Sent: Thursday, April 22, 2021 4:13 PM > To: openstack-discuss at lists.openstack.org > Subject: Re: Live migration fails > > On Thu, 2021-04-22 at 06:01 +0000, Eugen Block wrote: > > Yeah, the column "node" has the FQDN in my DB, too, only "host" is > > the short name. The question is how did the short name get into the "node" > > column, but it will probably be difficult to get to the bottom of that. > well by default we do not expect to have FQDNs in either filed. > novas default for both is the hostname of the host which will be the short name not the fqdn unless you set an fqdn in /etc/hostname which is not generally the recommended pratice. > > nova in general does nto support changing the hostname(/etc/hostname) of a host and you should avoid changeing the "host" value in the nova.conf too. 
> > changing these values can result in the creation fo addtional placment RP, compute service records and compute nodes and that can result in hard to fix situation wehre old vms are using one set of resouce and new vms are using the updated ones. > > so you should not modify either value in the db. > > did you perhaps specify a host when live migrating and just pass the wrong value or was the host selected by the scheduler. > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > I think I found the issue, in the instances nova db in the node > > > column the compute node name somehow changed to short hostname. It > > > works fith FQDN but it doesn't work with short ☹ I hope I didn't > > > mess-up anything if I change to FQDN to make it work. > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Szabo, Istvan (Agoda) > > > Sent: Thursday, April 22, 2021 11:19 AM > > > To: Eugen Block > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: RE: Live migration fails > > > > > > Sorry, in the log I haven't commented out the servername ☹ it is > > > xy-osfecn-40250 > > > > > > Istvan Szabo > > > Senior Infrastructure Engineer > > > --------------------------------------------------- > > > Agoda Services Co., Ltd. > > > e: istvan.szabo at agoda.com > > > --------------------------------------------------- > > > > > > -----Original Message----- > > > From: Eugen Block > > > Sent: Wednesday, April 21, 2021 5:37 PM > > > To: Szabo, Istvan (Agoda) > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: Re: Live migration fails > > > > > > The error message seems correct, I can't find am-osfecn-4025 > > > either in the list of compute nodes. Can you check in the database > > > if there's an active instance (or several) allocated to that > > > compute node? In that case you would need to correct the > > > allocation in order for the migration to work. > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > Sure: > > > > > > > > https://jpst.it/2u3uh > > > > > > > > These are the one where can't live migrate: > > > > xy-osfecn-40250 > > > > xy-osfecn-40281 > > > > xy-osfecn-40290 > > > > xy-osbecn-40073 > > > > xy-osfecn-40238 > > > > > > > > The compute service are disabled on these because we don't want > > > > anybody spawn a vm on these anymore so want to evacuate all vms. > > > > > > > > Istvan Szabo > > > > Senior Infrastructure Engineer > > > > --------------------------------------------------- > > > > Agoda Services Co., Ltd. > > > > e: istvan.szabo at agoda.com > > > > --------------------------------------------------- > > > > > > > > -----Original Message----- > > > > From: Eugen Block > > > > Sent: Wednesday, April 21, 2021 3:26 PM > > > > To: openstack-discuss at lists.openstack.org > > > > Subject: Re: Live migration fails > > > > > > > > Hi, > > > > > > > > can you share the output of these commands? > > > > > > > > nova-manage cell_v2 list_hosts > > > > openstack compute service list > > > > > > > > > > > > Zitat von "Szabo, Istvan (Agoda)" : > > > > > > > > > Hi, > > > > > > > > > > I have couple of compute nodes where the live migration fails > > > > > with existing vms. > > > > > When I quickly spawn a vm and try live migration it works so I > > > > > assume shouldn't be a big problem with the compute node. 
> > > > > However I have many existing vms where it fails with a > > > > > servername not found. > > > > > > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 > > > > > 227612 ERROR nova.conductor.tasks.migrate > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > > Unable to find record for source node servername on servername: > > > > > ComputeHostNotFound: Compute host servername could not be found. > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 > > > > > 227612 WARNING nova.scheduler.utils > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] Failed to > > > > > compute_task_migrate_server: Compute host servername could not > > > > > be found.: ComputeHostNotFound: Compute host servername could not be found. > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.605 > > > > > 227612 WARNING nova.scheduler.utils > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] [instance: > > > > > 1517a2ac-3b51-4d8d-80b3-89a5614d1ae0] > > > > > Setting instance to ACTIVE state.: ComputeHostNotFound: > > > > > Compute host servername could not be found. > > > > > /var/log/nova/nova-conductor.log:2021-04-21 14:47:12.672 > > > > > 227612 ERROR oslo_messaging.rpc.server > > > > > [req-f4067a26-a233-4673-8c07-9a8a290980b0 > > > > > dce35e6eceea4312bb0baa0510cef363 > > > > > ca7e35079f4440c78bd9870724b9638b - default default] Exception during message handling: > > > > > ComputeHostNotFound: Compute host am-osfecn-4025 > > > > > > > > > > Tried with this command: > > > > > > > > > > nova live-migration --block-migrate id. > > > > > > > > > > Any idea? > > > > > > > > > > Thank you. > > > > > > > > > > ________________________________ This message is confidential > > > > > and is for the sole use of the intended recipient(s). It may > > > > > also be privileged or otherwise protected by copyright or > > > > > other legal rules. If you have received it by mistake please > > > > > let us know by reply email and delete it from your system. It > > > > > is prohibited to copy this message or disclose its content to anyone. > > > > > Any confidentiality or privilege is not waived or lost by any > > > > > mistaken delivery or unauthorized disclosure of the message. > > > > > All messages sent to and from Agoda may be monitored to ensure > > > > > compliance with company policies, to protect the company's > > > > > interests and to remove potential malware. Electronic messages > > > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > > > > > > > ________________________________ This message is confidential > > > > and is for the sole use of the intended recipient(s). It may > > > > also be privileged or otherwise protected by copyright or other > > > > legal rules. If you have received it by mistake please let us > > > > know by reply email and delete it from your system. It is > > > > prohibited to copy this message or disclose its content to anyone. > > > > Any confidentiality or privilege is not waived or lost by any > > > > mistaken delivery or unauthorized disclosure of the message. 
All > > > > messages sent to and from Agoda may be monitored to ensure > > > > compliance with company policies, to protect the company's > > > > interests and to remove potential malware. Electronic messages > > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > > > ________________________________ > > > This message is confidential and is for the sole use of the > > > intended recipient(s). It may also be privileged or otherwise > > > protected by copyright or other legal rules. If you have received > > > it by mistake please let us know by reply email and delete it from > > > your system. It is prohibited to copy this message or disclose its > > > content to anyone. Any confidentiality or privilege is not waived > > > or lost by any mistaken delivery or unauthorized disclosure of the > > > message. All messages sent to and from Agoda may be monitored to > > > ensure compliance with company policies, to protect the company's > > > interests and to remove potential malware. Electronic messages may > > > be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > ________________________________ > This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. From shalabhgoel13 at gmail.com Wed Apr 28 04:33:32 2021 From: shalabhgoel13 at gmail.com (Shalabh Goel) Date: Wed, 28 Apr 2021 10:03:32 +0530 Subject: [victoria][ha]Rabbitmq HA configuration in victoria Message-ID: Hi, I have been trying to configure HA for rabbitmq service for my Openstack installation. I found varying options on the Internet. I am facing some issues with rabbitmq with HAproxy. Option 1. I want to know if victoria supports the following configuration directive for openstack services. rabbit_hosts=rabbit1:5672,rabbit2:5672,rabbit3:5672 as told in this guide https://docs.openstack.org/ha-guide/control-plane-stateful.html Option 2. Can we configure as follows: transport_url = rabbit://RABBIT_USER:RABBIT_PASS at rabbit1:5672,RABBIT_USER:RABBIT_PASS at rabbit2:5672,RABBIT_USER:RABBIT_PASS at rabbit3:5672 Option 3. I have been using this configuration in haproxy listen rabbitmq_cluster_openstack bind controllers:5672 mode tcp balance roundrobin server controller1 controller1:5672 check inter 2000 rise 2 fall 5 server controller2 controller2:5672 backup check inter 2000 rise 2 fall 5 I am getting the following error in my log files for all the services. 2021-04-28 09:59:05.857 129188 INFO oslo.messaging._drivers.impl_rabbit [-] [bbf40d1a-e095-47ad-8629-931af485d4cf] Reconnected to AMQP server on controllers:5672 via [amqp] client with port 44818. I have set transport_url as follows: > transport_url = rabbit://openstack:rabbit123 at controllers:5672 > The error goes away if I change host to controller1 or controller. Please suggest which is the recommended one of the 3. 
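For reference, a hedged sketch (an illustration, not part of the original question) of what Option 2 looks like when written out against the two controllers and the openstack/rabbit123 credentials used above -- one comma-separated URL in each service config, with no HAProxy frontend in between:

  [DEFAULT]
  transport_url = rabbit://openstack:rabbit123@controller1:5672,openstack:rabbit123@controller2:5672/

If that route is taken, the replies below also suggest deciding between durable queues and queue mirroring; a classic mirrored-queue policy would be set on the RabbitMQ side with something like:

  rabbitmqctl set_policy --apply-to queues ha-all "^" '{"ha-mode":"all"}'

(the policy name "ha-all" and the catch-all pattern here are only examples).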
Thanks in advance -- Shalabh Goel -------------- next part -------------- An HTML attachment was scrubbed... URL: From songwenping at inspur.com Wed Apr 28 05:54:32 2021 From: songwenping at inspur.com (=?utf-8?B?QWxleCBTb25nICjlrovmloflubMp?=) Date: Wed, 28 Apr 2021 05:54:32 +0000 Subject: =?utf-8?B?562U5aSNOiBjeWJvcmcgbGF1bmNocGFkIHN1cHBwb3J0IC0gaHR0cHM6Ly93?= =?utf-8?Q?ww.stackalytics.io?= In-Reply-To: References: Message-ID: <261d54f1caed452796b38f48f0d4212f@inspur.com> Great. Thanks a lot! 发件人: Andrii Ostapenko [mailto:anost1986 at gmail.com] 发送时间: 2021年4月28日 11:22 收件人: Alex Song (宋文平) 抄送: smooney at redhat.com; openstack-discuss at lists.openstack.org 主题: Re: cyborg launchpad suppport - https://www.stackalytics.io Hi, Sure. Should be there tomorrow with https://review.opendev.org/c/x/stackalytics/+/788397 On Tue, Apr 27, 2021 at 8:43 PM Alex Song (宋文平) > wrote: Thanks Sean for your tips. Andrii Ostapenko: Do you know what we need modify in the `stackalytics` project. Thanks. -----邮件原件----- 发件人: Sean Mooney [mailto:smooney at redhat.com ] 发送时间: 2021年4月27日 20:21 收件人: openstack-discuss at lists.openstack.org 主题: Re: cyborg launchpad suppport - https://www.stackalytics.io On Tue, 2021-04-27 at 09:52 +0000, Alex Song (宋文平) wrote: > > > > > > > > > > Hi, > > We change to use launchpad to track cyborg bugs and features, with > this patch merged: > https://review.opendev.org/c/openstack/project-config/+/787306. But we > cannot count on the website: https://www.stackalytics.io/ > > Please help us with this problem. the main repo for developemt fo stackalytics is https://opendev.org/x/stackalytics https://www.stackalytics.io is a seperate instance that was hostsed when https://www.stackalytics.com stoped updating and is not maintained by the openstack foundation infra team. the person that runs it is on this list and contribues upstream so hopefully they will see this and can help > > > > > > > Thanks. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3774 bytes Desc: not available URL: From dev.faz at gmail.com Wed Apr 28 06:00:53 2021 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Wed, 28 Apr 2021 08:00:53 +0200 Subject: [victoria][ha]Rabbitmq HA configuration in victoria In-Reply-To: References: Message-ID: Hi, Option 2 is afaik the most stable option. Please also check your "durable"-setting in openstack and "replication"-policies in rabbitmq. I would suggest to use * durable-queue without replication *OR* * non-durable-queue with replication *OR* * do it the big-scale-way (f.e. vexxhost) and run one rabbitmq server per service and HA (failover) is done by some external service (like k8s, pacemaker, ...) Fabian Am Mi., 28. Apr. 2021 um 06:45 Uhr schrieb Shalabh Goel : > > Hi, > > I have been trying to configure HA for rabbitmq service for my Openstack installation. I found varying options on the Internet. I am facing some issues with rabbitmq with HAproxy. > > Option 1. I want to know if victoria supports the following configuration directive for openstack services. > > rabbit_hosts=rabbit1:5672,rabbit2:5672,rabbit3:5672 > > as told in this guide > > https://docs.openstack.org/ha-guide/control-plane-stateful.html > > Option 2. Can we configure as follows: > > transport_url = rabbit://RABBIT_USER:RABBIT_PASS at rabbit1:5672, > RABBIT_USER:RABBIT_PASS at rabbit2:5672,RABBIT_USER:RABBIT_PASS at rabbit3:5672 > > Option 3. 
I have been using this configuration in haproxy > > listen rabbitmq_cluster_openstack > bind controllers:5672 > mode tcp > balance roundrobin > server controller1 controller1:5672 check inter 2000 rise 2 fall 5 > server controller2 controller2:5672 backup check inter 2000 rise 2 fall 5 > > I am getting the following error in my log files for all the services. > > 2021-04-28 09:59:05.857 129188 INFO oslo.messaging._drivers.impl_rabbit [-] [bbf40d1a-e095-47ad-8629-931af485d4cf] Reconnected to AMQP server on controllers:5672 via [amqp] client with port 44818. > > I have set transport_url as follows: >> >> transport_url = rabbit://openstack:rabbit123 at controllers:5672 > > > The error goes away if I change host to controller1 or controller. > > Please suggest which is the recommended one of the 3. > > Thanks in advance > -- > Shalabh Goel From skaplons at redhat.com Wed Apr 28 06:01:41 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 28 Apr 2021 08:01:41 +0200 Subject: [all][requirements] sqlalchemy 1.4.x release/upgrade In-Reply-To: <20210427204746.c4jigic2bru4o3wp@mthode.org> References: <20210427204746.c4jigic2bru4o3wp@mthode.org> Message-ID: <12689290.aD5p8Cbv8m@p1> Hi, Dnia wtorek, 27 kwietnia 2021 22:47:46 CEST Matthew Thode pisze: > Looks like a new release of sqlalchemy is upon us and is breaking tests > in openstack (1.4.11 for now). Please test against > https://review.opendev.org/788339 to get your project working against > the newest version. > > currently failing are cinder ironic keystone masakari neutron and nova > cross gates > > Thanks, > > -- > Matthew Thode Thx for the heads-up. I just opened LP bug for it in Neutron: https://bugs.launchpad.net/ neutron/+bug/1926399[1] I will try to get to it this week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://bugs.launchpad.net/neutron/+bug/1926399 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From syedammad83 at gmail.com Wed Apr 28 06:36:50 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Wed, 28 Apr 2021 11:36:50 +0500 Subject: [wallaby][neutron] Neuton Server Upgrade Message-ID: Hi, I have upgraded neutron server from victoria to wallaby. Everytime I restart neutron server, I see below errors in neutron-server.log. Can you guys please advise on it. 2021-04-28 06:34:12.144 1767 INFO ovsdbapp.backend.ovs_idl.vlog [req-cc4f309f-1851-47cd-bab1-de0445964d56 - - - - -] tcp:172.16.30.51:6641: connecting... 2021-04-28 06:34:12.145 1767 INFO ovsdbapp.backend.ovs_idl.vlog [req-cc4f309f-1851-47cd-bab1-de0445964d56 - - - - -] tcp:172.16.30.51:6641: connected 2021-04-28 06:34:12.214 1770 INFO neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for MaintenanceWorker with retry 2021-04-28 06:34:12.238 1770 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp: 172.16.30.51:6641: connecting... 
2021-04-28 06:34:12.238 1770 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp: 172.16.30.51:6641: connected 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event [-] Unexpected exception in notify_loop: AttributeError: 'NoneType' object has no attribute 'db_find_rows' 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event Traceback (most recent call last): 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/ovsdbapp/event.py", line 159, in notify_loop 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event match.run(event, row, updates) 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py", line 443, in run 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event self.driver.delete_mac_binding_entries(row.external_ip) 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 1058, in delete_mac_binding_entries 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event mac_binds = self._sb_ovn.db_find_rows( 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event AttributeError: 'NoneType' object has no attribute 'db_find_rows' 2021-04-28 06:34:12.248 1767 ERROR ovsdbapp.event 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event [-] Unexpected exception in notify_loop: AttributeError: 'NoneType' object has no attribute 'db_find_rows' 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event Traceback (most recent call last): 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/ovsdbapp/event.py", line 159, in notify_loop 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event match.run(event, row, updates) 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py", line 443, in run 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event self.driver.delete_mac_binding_entries(row.external_ip) 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 1058, in delete_mac_binding_entries 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event mac_binds = self._sb_ovn.db_find_rows( 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event AttributeError: 'NoneType' object has no attribute 'db_find_rows' 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event [-] Unexpected exception in notify_loop: AttributeError: 'NoneType' object has no attribute 'db_find_rows' 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event Traceback (most recent call last): 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/ovsdbapp/event.py", line 159, in notify_loop 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event match.run(event, row, updates) 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py", line 443, in run 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event self.driver.delete_mac_binding_entries(row.external_ip) 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 1058, in delete_mac_binding_entries 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event mac_binds = self._sb_ovn.db_find_rows( 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event AttributeError: 'NoneType' object 
has no attribute 'db_find_rows' 2021-04-28 06:34:12.250 1767 ERROR ovsdbapp.event 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event [-] Unexpected exception in notify_loop: AttributeError: 'NoneType' object has no attribute 'db_find_rows' 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event Traceback (most recent call last): 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/ovsdbapp/event.py", line 159, in notify_loop 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event match.run(event, row, updates) 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py", line 443, in run 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event self.driver.delete_mac_binding_entries(row.external_ip) 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 1058, in delete_mac_binding_entries 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event mac_binds = self._sb_ovn.db_find_rows( 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event AttributeError: 'NoneType' object has no attribute 'db_find_rows' 2021-04-28 06:34:12.251 1767 ERROR ovsdbapp.event 2021-04-28 06:34:12.251 1767 INFO neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver [-] OVN reports status down for port: 3d040d1d-c3d5-49da-9103-8ab4f9a8ab01 2021-04-28 06:34:12.252 1767 INFO neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn [req-cc4f309f-1851-47cd-bab1-de0445964d56 - - - - -] Getting OvsdbSbOvnIdl for WorkerService with retry Secondly after upgrade, I can only see upgraded network agents in openstack network agent list. The other agents that are on victoria release are not showing up. Need your help. -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Wed Apr 28 06:51:15 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 28 Apr 2021 09:51:15 +0300 Subject: [openstack-ansible][osa][PTG] OpenStack-Ansible Xena PTG summary In-Reply-To: References: <378841619547228@mail.yandex.ru> Message-ID: <397241619592495@mail.yandex.ru> An HTML attachment was scrubbed... URL: From xxxcloudlearner at gmail.com Wed Apr 28 07:10:36 2021 From: xxxcloudlearner at gmail.com (cloud learner) Date: Wed, 28 Apr 2021 12:40:36 +0530 Subject: internet not working in instance Message-ID: Hi All, Have installed rocky on centos 7 with 2 node controller and compute, used linuxbridge as mentioned in the document used self service network. My question is, as we have to create the br-ex and attach it to eth when we use OVS, so as I am using linuxbridge there is need to create the bridge. If there are any doc kindly suggest. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Apr 28 07:12:49 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 28 Apr 2021 09:12:49 +0200 Subject: [all][requirements] sqlalchemy 1.4.x release/upgrade In-Reply-To: <20210427204746.c4jigic2bru4o3wp@mthode.org> References: <20210427204746.c4jigic2bru4o3wp@mthode.org> Message-ID: On Tue, Apr 27, 2021 at 10:50 PM Matthew Thode wrote: > Looks like a new release of sqlalchemy is upon us and is breaking tests > in openstack (1.4.11 for now). Please test against > https://review.opendev.org/788339 to get your project working against > the newest version. 
> > currently failing are cinder ironic keystone masakari neutron and nova > cross gates > The ironic failure seems to be because of the oslo.db fixture: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f04/788339/1/check/cross-ironic-py38/f04c826/testr_results.html. Could someone from oslo take a look? Dmitry > > Thanks, > > -- > Matthew Thode > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Apr 28 08:00:44 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 28 Apr 2021 09:00:44 +0100 Subject: [openstack-ansible][osa][PTG] OpenStack-Ansible Xena PTG summary In-Reply-To: <378841619547228@mail.yandex.ru> References: <378841619547228@mail.yandex.ru> Message-ID: On Tue, 27 Apr 2021 at 19:15, Dmitriy Rabotyagov wrote: > > Hi everyone! > > First of all I would like to thank everyone for attendance. > > We have discussed a lot of topics both regarding things that we need to merge before releasing Wallaby (we are trailing behind and may release in up to 3 month after official release) and regarding plans for the Xena. > > First comes list of things that we agreed to land for W in descending priority: > 1. ansible-role-pki. Gerrit topic [1] > 2. Centos8-Stream. Gerrit topic [2] > 3. Move neutron-server to standalone group in env.d > 4. Debian Bullseye. We're currently blocked with infra [3] > 5. Senlin tempest test [4] > 6. Fix broken roles > 7. Implement send_service_user_token and service_token_roles_required > > Also we agreed to keep ansible-base 2.11.* release for Wallaby, and not switch to the ansible-core yet. > > List of descisions that were taken for the Xena release: > 1. Deprecate Ubuntu Bionic at the beggining of X cycle > 2. Deprecate Centos-8 (classic) at the beginning of X cycle > 3. Switch MariaDB balancing frontend from HAproxy to MaxScale FWIW, Michal Arbet (kevko) started adding MaxScale to Kolla but switched to ProxySQL due to licensing issues. > 4. Drop Nginx for the Keystone and use Apache is all scenarios as most unified solution. > 5. Add option to roles to configure systemd timers for cleaning up soft-deleted records in DB > 6. Create operating systems support table to clear out possible upgrade paths between releases > 7. Replace dashes with underscores in group names and dynamic inventory and stop ignoring TRANSFORM_INVALID_GROUP_CHARS. > 8. Adding ability to enable prometheus support for MariaDB and HAproxy > 9. Revise possibility to support deployments on top of arm64 > 10. Look into creation common roles for Ansible OpenStack collection. At least write a blueprint/spec regarding scope of these roles. > 11. 
Deploy ARA on localhost for OSA logging, configured by variables > > > [1] https://review.opendev.org/q/topic:%22osa%252Fpki%22+(status:open) > [2] https://review.opendev.org/q/topic:%22osa-stream%22+(status:open) > [3] https://review.opendev.org/q/topic:%22debian_bullseye%22+(status:open) > [4] https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/754045 > > > -- > Kind Regards, > Dmitriy Rabotyagov > > From radoslaw.piliszek at gmail.com Wed Apr 28 08:05:26 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 28 Apr 2021 10:05:26 +0200 Subject: [all][requirements] sqlalchemy 1.4.x release/upgrade In-Reply-To: <20210427204746.c4jigic2bru4o3wp@mthode.org> References: <20210427204746.c4jigic2bru4o3wp@mthode.org> Message-ID: On Tue, Apr 27, 2021 at 10:50 PM Matthew Thode wrote: > > Looks like a new release of sqlalchemy is upon us and is breaking tests > in openstack (1.4.11 for now). Please test against > https://review.opendev.org/788339 to get your project working against > the newest version. > > currently failing are cinder ironic keystone masakari neutron and nova > cross gates Thanks for the early notice. It seems we need to fix oslo.db as well since I am getting this: File "/home/zuul/src/opendev.org/openstack/masakari/.tox/py36/lib/python3.6/site-packages/oslo_db/sqlalchemy/test_base.py", line 180, in setUp self, skip_on_unavailable_db=self.SKIP_ON_UNAVAILABLE_DB)) ... File "/home/zuul/src/opendev.org/openstack/masakari/.tox/py36/lib/python3.6/site-packages/oslo_db/sqlalchemy/test_base.py", line 66, in setUp self.test, self.test.resources, testresources._get_result()) ... File "/home/zuul/src/opendev.org/openstack/masakari/.tox/py36/lib/python3.6/site-packages/oslo_db/sqlalchemy/provision.py", line 133, in make url = backend.provisioned_database_url(db_token) File "/home/zuul/src/opendev.org/openstack/masakari/.tox/py36/lib/python3.6/site-packages/oslo_db/sqlalchemy/provision.py", line 337, in provisioned_database_url return self.impl.provisioned_database_url(self.url, ident) File "/home/zuul/src/opendev.org/openstack/masakari/.tox/py36/lib/python3.6/site-packages/oslo_db/sqlalchemy/provision.py", line 498, in provisioned_database_url url.database = ident AttributeError: can't set attribute I've just run unit tests on oslo.db with new SQLAlchemy and it fails 5 tests there. -yoctozepto From noonedeadpunk at ya.ru Wed Apr 28 08:36:19 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 28 Apr 2021 11:36:19 +0300 Subject: [openstack-ansible][osa][PTG] OpenStack-Ansible Xena PTG summary In-Reply-To: References: <378841619547228@mail.yandex.ru> Message-ID: <483211619598612@mail.yandex.ru> Thank for pointing to that! MaxScale is indeed licensed under BSL, which means you can't use it for production environments [1]. Also I haven't followed ProxySQL a lot so didn't know that they have native Galera support from 2.0.0, which is really a great alternative! [1] https://mariadb.com/bsl-faq-mariadb/ 28.04.2021, 11:00, "Mark Goddard" : > On Tue, 27 Apr 2021 at 19:15, Dmitriy Rabotyagov wrote: >>  Hi everyone! >> >>  First of all I would like to thank everyone for attendance. >> >>  We have discussed a lot of topics both regarding things that we need to merge before releasing Wallaby (we are trailing behind and may release in up to 3 month after official release) and regarding plans for the Xena. >> >>  First comes list of things that we agreed to land for W in descending priority: >>  1. ansible-role-pki. Gerrit topic [1] >>  2. 
Centos8-Stream. Gerrit topic [2] >>  3. Move neutron-server to standalone group in env.d >>  4. Debian Bullseye. We're currently blocked with infra [3] >>  5. Senlin tempest test [4] >>  6. Fix broken roles >>  7. Implement send_service_user_token and service_token_roles_required >> >>  Also we agreed to keep ansible-base 2.11.* release for Wallaby, and not switch to the ansible-core yet. >> >>  List of descisions that were taken for the Xena release: >>  1. Deprecate Ubuntu Bionic at the beggining of X cycle >>  2. Deprecate Centos-8 (classic) at the beginning of X cycle >>  3. Switch MariaDB balancing frontend from HAproxy to MaxScale > > FWIW, Michal Arbet (kevko) started adding MaxScale to Kolla but > switched to ProxySQL due to licensing issues. > >>  4. Drop Nginx for the Keystone and use Apache is all scenarios as most unified solution. >>  5. Add option to roles to configure systemd timers for cleaning up soft-deleted records in DB >>  6. Create operating systems support table to clear out possible upgrade paths between releases >>  7. Replace dashes with underscores in group names and dynamic inventory and stop ignoring TRANSFORM_INVALID_GROUP_CHARS. >>  8. Adding ability to enable prometheus support for MariaDB and HAproxy >>  9. Revise possibility to support deployments on top of arm64 >>  10. Look into creation common roles for Ansible OpenStack collection. At least write a blueprint/spec regarding scope of these roles. >>  11. Deploy ARA on localhost for OSA logging, configured by variables >> >>  [1] https://review.opendev.org/q/topic:%22osa%252Fpki%22+(status:open) >>  [2] https://review.opendev.org/q/topic:%22osa-stream%22+(status:open) >>  [3] https://review.opendev.org/q/topic:%22debian_bullseye%22+(status:open) >>  [4] https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/754045 >> >>  -- >>  Kind Regards, >>  Dmitriy Rabotyagov --  Kind Regards, Dmitriy Rabotyagov From balazs.gibizer at est.tech Wed Apr 28 11:18:56 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Wed, 28 Apr 2021 13:18:56 +0200 Subject: [all][requirements] sqlalchemy 1.4.x release/upgrade In-Reply-To: <20210427204746.c4jigic2bru4o3wp@mthode.org> References: <20210427204746.c4jigic2bru4o3wp@mthode.org> Message-ID: On Tue, Apr 27, 2021 at 15:47, Matthew Thode wrote: > Looks like a new release of sqlalchemy is upon us and is breaking > tests > in openstack (1.4.11 for now). Please test against > https://review.opendev.org/788339 to get your project working against > the newest version. > > currently failing are cinder ironic keystone masakari neutron and nova > cross gates I've opened https://bugs.launchpad.net/nova/+bug/1926426 from nova side as besides the generic oslo.db related error I see now specific issue in the test run as well. gibi > > Thanks, > > -- > Matthew Thode From sshnaidm at redhat.com Wed Apr 28 11:43:49 2021 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Wed, 28 Apr 2021 14:43:49 +0300 Subject: [openstack-ansible-modules] Openstack Ansible collections PTG session summary Message-ID: Hi, all Highlights from Openstack Ansible collection/modules PTG session[1] In the last cycle we had 68 commits, 3 released versions, 5 new contributors, 14 solved issues. The roadmap for next cycle: All modules in Xena should convert SDK output to a dictionary to match last SDK changes. CI tasks: 1. Automatic release jobs 2. Docs generation and publishing jobs - docs.openstack.org and readthedocs will be evaluated. 3. 
Gating jobs for projects starting to use the collections, like TripleO. Bugtracker - to evaluate Launchpad for bugs and issues submission. We continue to convert modules to use a standard OpenstackModule class. Features coverage for Openstack Ansible modules would help us to see how good we cover the current functionality. It would be useful to have roles in collection which will run specific and common tasks, which can be used by OPs easily. (Like Terraform "modules" concept) For fetching Openstack SDK logs during the tasks execution we may have a specific role or/and OpenstackModule class modifications. Ansible-test is used now for sanity testing only, we'll evaluate it for integration tests next cycle. Thanks [1] https://etherpad.opendev.org/p/xena-ptg-os-ansible-collections -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Apr 28 11:48:24 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 28 Apr 2021 13:48:24 +0200 Subject: [all][requirements] sqlalchemy 1.4.x release/upgrade In-Reply-To: References: <20210427204746.c4jigic2bru4o3wp@mthode.org> Message-ID: Hello, Without too digging this topic either, I just made a round of prioritization on oslo.db patches and I started merging some important fixes related to sqlalchemy 1.4. I think that some of them will help to fix the issue. https://review.opendev.org/c/openstack/oslo.db/+/747762 https://review.opendev.org/c/openstack/oslo.db/+/758142 Some have been merged a few minutes ago and some are in our gates. Le mer. 28 avr. 2021 à 13:23, Balazs Gibizer a écrit : > > > On Tue, Apr 27, 2021 at 15:47, Matthew Thode wrote: > > Looks like a new release of sqlalchemy is upon us and is breaking > > tests > > in openstack (1.4.11 for now). Please test against > > https://review.opendev.org/788339 to get your project working against > > the newest version. > > > > currently failing are cinder ironic keystone masakari neutron and nova > > cross gates > > I've opened https://bugs.launchpad.net/nova/+bug/1926426 from nova side > as besides the generic oslo.db related error I see now specific issue > in the test run as well. > > gibi > > > > > Thanks, > > > > -- > > Matthew Thode > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
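For context on the AttributeError that yoctozepto pasted earlier in this thread (url.database = ident failing in oslo.db's provisioning code): SQLAlchemy 1.4 made the engine URL object immutable, so attribute assignment on a URL no longer works and a new URL object has to be derived instead. A minimal, illustrative sketch of the behaviour change follows; it is not the actual oslo.db patch, and the connection string is made up:

    from sqlalchemy.engine import make_url

    url = make_url("mysql+pymysql://user:secret@127.0.0.1/fromdb")

    # SQLAlchemy < 1.4: URL was mutable, so this used to work:
    #     url.database = "todb"
    # On 1.4 the same assignment raises "AttributeError: can't set attribute".

    # SQLAlchemy >= 1.4: derive a new, immutable URL instead:
    url = url.set(database="todb")

The oslo.db provisioning code needs to move to something like the second form to work on both series, which is what the patches listed above are about.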
URL: From hberaud at redhat.com Wed Apr 28 11:49:25 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 28 Apr 2021 13:49:25 +0200 Subject: [all][requirements] sqlalchemy 1.4.x release/upgrade In-Reply-To: References: <20210427204746.c4jigic2bru4o3wp@mthode.org> Message-ID: When all these patches will be merged I'll release a new version of oslo.db Le mer. 28 avr. 2021 à 13:48, Herve Beraud a écrit : > Hello, > > Without too digging this topic either, I just made a round of > prioritization on oslo.db patches and I started merging some important > fixes related to sqlalchemy 1.4. I think that some of them will help to > fix the issue. > > https://review.opendev.org/c/openstack/oslo.db/+/747762 > https://review.opendev.org/c/openstack/oslo.db/+/758142 > > Some have been merged a few minutes ago and some are in our gates. > > Le mer. 28 avr. 2021 à 13:23, Balazs Gibizer a > écrit : > >> >> >> On Tue, Apr 27, 2021 at 15:47, Matthew Thode wrote: >> > Looks like a new release of sqlalchemy is upon us and is breaking >> > tests >> > in openstack (1.4.11 for now). Please test against >> > https://review.opendev.org/788339 to get your project working against >> > the newest version. >> > >> > currently failing are cinder ironic keystone masakari neutron and nova >> > cross gates >> >> I've opened https://bugs.launchpad.net/nova/+bug/1926426 from nova side >> as besides the generic oslo.db related error I see now specific issue >> in the test run as well. >> >> gibi >> >> > >> > Thanks, >> > >> > -- >> > Matthew Thode >> >> >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Wed Apr 28 12:05:47 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 28 Apr 2021 14:05:47 +0200 Subject: [all][requirements] sqlalchemy 1.4.x release/upgrade In-Reply-To: References: <20210427204746.c4jigic2bru4o3wp@mthode.org> Message-ID: I managed to reproduce the error with oslo.db by disabling upper-constraints on our tests and I confirm that both patches fix successfully the compatibility problem with SQLAlchemy 1.4. Le mer. 28 avr. 2021 à 13:49, Herve Beraud a écrit : > When all these patches will be merged I'll release a new version of oslo.db > > Le mer. 28 avr. 2021 à 13:48, Herve Beraud a écrit : > >> Hello, >> >> Without too digging this topic either, I just made a round of >> prioritization on oslo.db patches and I started merging some important >> fixes related to sqlalchemy 1.4. I think that some of them will help to >> fix the issue. >> >> https://review.opendev.org/c/openstack/oslo.db/+/747762 >> https://review.opendev.org/c/openstack/oslo.db/+/758142 >> >> Some have been merged a few minutes ago and some are in our gates. >> >> Le mer. 28 avr. 2021 à 13:23, Balazs Gibizer a >> écrit : >> >>> >>> >>> On Tue, Apr 27, 2021 at 15:47, Matthew Thode wrote: >>> > Looks like a new release of sqlalchemy is upon us and is breaking >>> > tests >>> > in openstack (1.4.11 for now). Please test against >>> > https://review.opendev.org/788339 to get your project working against >>> > the newest version. >>> > >>> > currently failing are cinder ironic keystone masakari neutron and nova >>> > cross gates >>> >>> I've opened https://bugs.launchpad.net/nova/+bug/1926426 from nova side >>> as besides the generic oslo.db related error I see now specific issue >>> in the test run as well. >>> >>> gibi >>> >>> > >>> > Thanks, >>> > >>> > -- >>> > Matthew Thode >>> >>> >>> >>> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > 
qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Apr 28 13:12:20 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 28 Apr 2021 15:12:20 +0200 Subject: [all][requirements] sqlalchemy 1.4.x release/upgrade In-Reply-To: References: <20210427204746.c4jigic2bru4o3wp@mthode.org> Message-ID: All fixes are now merged, I just proposed a new release of oslo.db for Xena (8.6.0) [1] and I updated the sqlalchemy requirement bump [2] with a "depends-on" on my release patch [1]. Let me know if that solves the issue on your end. [1] https://review.opendev.org/c/openstack/releases/+/788488 [2] https://review.opendev.org/c/openstack/requirements/+/788339 Le mer. 28 avr. 2021 à 14:05, Herve Beraud a écrit : > I managed to reproduce the error with oslo.db by disabling > upper-constraints on our tests and I confirm that both patches fix > successfully the compatibility problem with SQLAlchemy 1.4. > > Le mer. 28 avr. 2021 à 13:49, Herve Beraud a écrit : > >> When all these patches will be merged I'll release a new version of >> oslo.db >> >> Le mer. 28 avr. 2021 à 13:48, Herve Beraud a écrit : >> >>> Hello, >>> >>> Without too digging this topic either, I just made a round of >>> prioritization on oslo.db patches and I started merging some important >>> fixes related to sqlalchemy 1.4. I think that some of them will help to >>> fix the issue. >>> >>> https://review.opendev.org/c/openstack/oslo.db/+/747762 >>> https://review.opendev.org/c/openstack/oslo.db/+/758142 >>> >>> Some have been merged a few minutes ago and some are in our gates. >>> >>> Le mer. 28 avr. 2021 à 13:23, Balazs Gibizer >>> a écrit : >>> >>>> >>>> >>>> On Tue, Apr 27, 2021 at 15:47, Matthew Thode wrote: >>>> > Looks like a new release of sqlalchemy is upon us and is breaking >>>> > tests >>>> > in openstack (1.4.11 for now). Please test against >>>> > https://review.opendev.org/788339 to get your project working against >>>> > the newest version. >>>> > >>>> > currently failing are cinder ironic keystone masakari neutron and nova >>>> > cross gates >>>> >>>> I've opened https://bugs.launchpad.net/nova/+bug/1926426 from nova >>>> side >>>> as besides the generic oslo.db related error I see now specific issue >>>> in the test run as well. 
>>>> >>>> gibi >>>> >>>> > >>>> > Thanks, >>>> > >>>> > -- >>>> > Matthew Thode >>>> >>>> >>>> >>>> >>> >>> -- >>> Hervé Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> https://twitter.com/4383hberaud >>> -----BEGIN PGP SIGNATURE----- >>> >>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>> v6rDpkeNksZ9fFSyoY2o >>> =ECSj >>> -----END PGP SIGNATURE----- >>> >>> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 
5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Apr 28 19:49:59 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 28 Apr 2021 14:49:59 -0500 Subject: Xena PTG Policy Summary Message-ID: Hey all, Last week we spent a lot of time discussing RBAC, where everything stands, and where we need to go to offer a consistent experience for operators and users. We kept track of all the sessions in a single etherpad, which also served as a place for daily summaries [0]. There is a lot of information in there, but we started working through the action items and correlating them to bugs or opening new bugs. Hopefully this helps us track progress through Xena. One of the biggest outcomes from last week was the discussion about how system users should interact with project-owned resources. For context, administrators have always been able to do things for project users because they both have project-scoped tokens. That's no longer going to be the case as services adopt system-scope. We came up with an interesting way to solve the problem and we compared it to other approaches. This all starts at about line 136 in the etherpad [0]. Ultimately, we think it will be the least invasive approach, we have a specification up for review [1], and a PoC in flight [2]. Please look over the summary and links to any actionables for your project. We can use this thread to discuss any questions if you have them. Thanks again for all the dedication and focus on policy last week. I know the discussions aren't easy and it's a tough problem to work through, but landing something this big across OpenStack services will be a huge win for operators and users. Lance [0] https://etherpad.opendev.org/p/policy-popup-xena-ptg [1] https://review.opendev.org/c/openstack/keystone-specs/+/787640 [2] https://review.opendev.org/c/openstack/keystonemiddleware/+/787822 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Apr 29 00:53:46 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 28 Apr 2021 19:53:46 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 29th at 1500 UTC Message-ID: <1791b1e1234.12635d152724627.2561837496381557774@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on April 29th at 1500 UTC in #openstack-tc IRC channel. 
== Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * PTG ** https://etherpad.opendev.org/p/tc-xena-ptg * TC tracker for Xena cycle (gmann) ** https://etherpad.opendev.org/p/tc-xena-tracker * Gate performance and heavy job configs (dansmith) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * TC's context, name, and documenting formal responsibilities (TheJulia) * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann From yingjisun at vmware.com Thu Apr 29 00:54:22 2021 From: yingjisun at vmware.com (Yingji Sun) Date: Thu, 29 Apr 2021 00:54:22 +0000 Subject: An compute service hang issue In-Reply-To: <872c4008732daa005ec95856db078e9f973110bf.camel@redhat.com> References: <4e5d3beb2f7921b3a494ca853621da5d59cda1f5.camel@redhat.com> <872c4008732daa005ec95856db078e9f973110bf.camel@redhat.com> Message-ID: <57E3115F-31BE-45A1-A47B-278A7C459C57@vmware.com> Sean, I think your comments on synchronized lock should be the root cause of my issue. In my logs, I see after a log of " Lock "compute_resources" acquired by ", my compute node get "stucked". Mar 12 01:13:58 controller-mpltc45f7n nova-compute[756]: 2021-03-12 01:13:58.044 1 DEBUG oslo_concurrency.lockutils [req-7f57447c-7aae-48fe-addd-46f80e80246a - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._safe_update_available_resource" :: waited 0.000s inner /usr/lib/python3.7/site-packages/oslo_concurrency/lockutils.py:327 I think it is because the lock "compute_resources" is not released any more that caused this issue. At that time there is some mysql issue when calling _safe_update_available_resource . So I think the exception is not handled the the lock is not released. Yingji 在 4/27/21 下午4:11,“Sean Mooney” 写入: On Tue, 2021-04-27 at 01:19 +0000, Yingji Sun wrote: > > Sean, > > > > You are right. I am working with vmware driver. Is it possible that you share some code fix samples so that I can have a try in my environment ? 
> in the libvirt case we had service wide hangs
> https://bugs.launchpad.net/nova/+bug/1840912 that were resolved by
> https://github.com/openstack/nova/commit/36ee9c1913a449defd3b35f5ee5fb4afcd44169e
>
> so this synchronized decorator prints a log message which you should see
> when it is acquired and released
> https://github.com/openstack/oslo.concurrency/blob/4da91987d6ce7de2bb61c6ed760a019961a0a344/oslo_concurrency/lockutils.py#L355-L371
> you should see that in the logs.
>
> i notice also that in your code you do not have the fair=true argument on master and for a few releases now
> we have enabled the use of fair locking with
> https://github.com/openstack/nova/commit/1ed9f9dac59c36cdda54a9852a1f93939b3ebbc3
> to resolve long delays in the ironic driver https://bugs.launchpad.net/nova/+bug/1864122 but the same issues
> would also affect vmware or any other clustered hypervisor where the resource tracker is managing multiple nodes.
> its very possible that that is what is causing your current issues.
>
> >     @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE)
> >
> >         if instance.node:
> >             LOG.warning("Node field should not be set on the instance "
> >                         "until resources have been claimed.",
> >                         instance=instance)
> >
> >         cn = self.compute_nodes[nodename]
> >
> >           I did not see the rabbitmq message that should be sent here.
> > > > pci_requests = objects.InstancePCIRequests.get_by_instance_uuid( > > context, instance.uuid) > > From gmann at ghanshyammann.com Thu Apr 29 03:13:56 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 28 Apr 2021 22:13:56 -0500 Subject: [tc][all][ Xena Virtual PTG Summary Message-ID: <1791b9e6779.b33ba09e725080.7541511949911134775@ghanshyammann.com> Hello Everyone, I am summarizing the Technical Committee discussion that happened in the Xena cycle PTG. Pop Up Team Check-In --------------------------- Currently, we have two pop-up teams 1. policy-popup 2. Encryption, as per checks, both are active, and we will continue on both teams in the Xena cycle. policy pop-up team almost served its purpose and ready to be converted as community-wide goal as next step which can happen in Y cycle. Community-wide goals Check-In --------------------------------------- *In the Wallaby cycle, we selected two community-wide goals. In PTG we checked the progress and any blocker. 1. Migrate RBAC Policy Format from JSON to YAML[1] This is almost completed, only heat and few patches in openstack-ansible are pending. Tracking etherpad[2]. Action Items: Finish the pending patches to merge. Assignee: gmann 2. Migrate from oslo.rootwrap to oslo.privsep[3] ralonsoh could not join this session due to conflict with neutron sessions, but this needs more work and will continue in Xena cycle too. Action Items: Continue this work in Xena cycle Assignee: ralonsoh * Along with the above goals, we checked a few previous cycle goals pending work also, and the below two are pretty simple and important to complete. We will spend some time finishing these in Xena cycle. 1. offline pdf doc: https://storyboard.openstack.org/#!/story/list?tags=pdf-doc-enabled Assignee: gmann 2. Contributor guide: https://storyboard.openstack.org/#!/story/2007236 Assignee: gmann, diablo_rojo *Y cycle goal selection. As per the community-wide goal schedule[4], we need to start the goal selection for the Y cycle. We have two TC members volunteer to drive this work. Assignee: ricolin, diablo_rojo TC tag advertising and encourage project adoption. -------------------------------------------------------------- We discussed the TC tags[5] adoption and if it is worth continuing it or who actually gets benefit from these. It is not so clear if they are being checked by users or anyone. Projects are adopted in production by their stability and feature, not just by tag. We decided to review each tag for its usefulness and cleanup. Based on what left, we can make a decision on whether to continue the tag concept or not. Action Items: Review each tag for its usefulness and cleanup Assignee: yoctozepto, jungleboyj Discussion on moving OpenStack to a yearly release cycle -------------------------------------------------------------------- This topic has been discussed before also and it was live-streamed on youtube too, which you can watch[6]. Discussion is about if the current 6-month release cycle is fine/fast for consumers to adopt the releases/upgrade etc. There were both side arguments about moving OpenStack release to 1-year cycle and it was not so clear if that actually solve the upgrade/consuming feature or not. Also, there are few points on developer perspective and being a long 1-year release can slow down the community activeness and developer time for upstream work. But there were more important points also which I did not summarize here but you to watch the youtube video for details. 
In summary, there is no consensus on this and we gathered some of the action items to continue this get feedback or educating about upgrade operation guides& best practices. Action Items: * Reach out to the Project team + operators about feedback via ML. * Encourage "Operation Docs and Tooling" SIG to add upgrades and operational education guideline Assignee: I forgot to ask the volunteer for this, please let me know if anyone would like to help with this. Wallaby Retrospective --------------------------- Below is what we discussed in the previous cycle retrospective * TC is more productive with the weekly meetings which helped to speed up the review and discussions. * Happy to see TC engagement on the gate and infra stuff. We will be continuing this as our weekly meeting agenda too. Thanks to dansmith to initiate this * Did not promote election well. We discussed this in detail as a separate topic, I am writing the summary for this in the below section. * We need more interaction with project/SIG(and also popups?:) ) team. Ditto, I am writing the summary for this in the below section. * Doing great with settling projects to DPL and also we have zero projects left behind as leaderless. * Landed the TC stance on SDK/OSC Patch * Started SIG audit too in terms of health checks which is good to keep SIG list active and maintained Thanks to ricolin and diablo_rojo. * Thanks to mnaser and diablo_rojo for serving as Chair/Vice-chair and running the show in a great way. Getting projects to broadcast out/mentor ++ ---------------------------------------------------- This topic is about encouraging projects to broadcast out successes, wins, and possibly get people in projects to engage in some public speaking of their work, why it is important to them, the change they see it making in the world, team values, what does the team find important. We discussed the various way to achieve that and some of part is about marketing the things. There is openinfra live tv which broadcast the live sessions every Thursday, take more help from the foundation or reach out to local user groups. Apart from marketing stuff, TC will definitely encourage projects to start talking about the more and more technical stuff in public speaking. Action Items: * TC to reachout to board/foundation to provide an exact platform for broadcast these, existing or any new platform * TC to document this as best practice to do by projects in project team guide. Assignee: spotz, belmoreira How do we not break cross-vendor drivers? --------------------------------------------------- This discussion was about cross-vendor drivers. In Ironic, w recent change to redfish drivers was found to break in one case on another vendor. Even current stable policy state in appropriate-fixes section[7]on taking the best decision on backport fixes. Projects can take decisions as per the project and user's best interest which will help everyone. We talked about the stable policy process in the next topic which is going to help projects in such cases. Stable core team process ----------------------------- We have talked about this in shanghai PTG also but did not proceed with any resolution. We are facing a few issues with the stable backport merges. The stable maintenance team is not so active as it used to be. There is a delay in adding more core in stable project groups or merging the backports. Elod mentioned that right now that generally when people have questions, they refer them to the policy documents and make a decision. 
We definitely want to fix the current issues and the good news is that we agreed to do few changes on stable policy like let the project core team manages the project side stable team and also policy they would like to define as per the project's use case. Action Items: * Add a resolution and modify stable policy doc for the below changes: ** Let the project team manage the project's stable core team, Global stable maintenance core team is continuing on general advice and help. ** Projects are allowed to define any addition or own policy on top of general stable policy guidelines. Assignee: jungleboyj, mnaser/gmann (gmann to check if mnaser is ok with this item) Stable EM to EOL transition --------------------------------- There are 9 stable branches currently (will be 8 soon after ocata moves to EOL) including EM or maintained stable. Discussion is about the maintenance effort for these many stable branches. how we can handle the supporting team load for EM branches like QA, infra etc. Many projects gate on EM stable branches is broken. QA team or elod end up spending lot of time during py2 drop time on EM branch even we still fix those. But in future, we might not have that much volume on EM branch maintenance. With that, we did not conclude on reducing the number of EM branch and continue as it is. But as 'unmaintained phase is not so clear and not real phase branch transition to, we agreed to remove this phase and let EM branch directly move to EOL state based on their current lifecycle or maintenance. Action Items: Remove the "Unmaintained" phase Assignee: gmann Next, we talked about the TC process & workflow things. Meeting time check ------------------------ As we have two new TC members, we checked on the current meeting time. The current time Thursday 15 UTC is ok for all TC members and we will continue that. Office Hour continuation ------------------------------ TC office hours are still inactive and we discussed if we want to continue on these or we can remove especially when we have weekly meetings. The consensus on this is to continue with the one office hour which is in Asia TZ. Action Items: Keep one office hour Assignee: gmann Charter revision for election vacant seat ----------------------------------------------- This was the first time in the TC election that we had one vacant seat. There was discussion on reducing the TC seats (this requires the charter change but we did not have full TC members to do the charter change) or doing a special election which we went with a special election. This is something we tool as an action item to make our charter handle this situation in future if occur. We also talked about how we can engage AUC in election or voting and decided to add a way for them to be extra ATC. Action Items: * Charter change to write the process to handle these vacant seat situations and how to make a charter change in absence of complete TC or so. * Document the process for adding SIG+project contributors, AUC as extra ATC. Assignee: dansmith How we can improve our election promotion ----------------------------------------------------- In past couple of elections, we have noticed that many members miss the nomination deadline also fewer members step up for the leadership role. We discussed few ways to improve the election promotion but there is no concrete things we agreed on Another but important thing we discussed is how to add more members to the election official team. Every time it's fungi or diablo_rojo helping in front. 
We decided that we will have two TC members volunteer for every cycle election official or encourage community members to be part of this wG. Action Items: Make a process to ask two TC volunteers for an election official in addition to existing election officials. Assignee: spotz, diablo_rojo Project Health checks -------------------------- We tried a few ways in past to improve the interaction between TC and the project side. TC liaison is our current way to check project health and interaction but that is not so active or helpful. Rico suggested trying some automatic way to check the contribution stats. But apart from that we did not decide any conclusion on that and will continue this discussion in the TC meeting. Action Items: * automate the process to check the health in term of contribution/review/release/gate * Continue this topic in the weekly meetings. Assignee: belmoreira, ricolin Mechanism to define the TC track-able targets per cycle (or redefine the TC tracker) ------------------------------------------------------------------------------------------------- This is to try a mechanism for TC working items per cycle. We agreed to track these in etherpad and check status in our weekly meeting. Action Items: Create the etherpad for Xena cycle tracker: https://etherpad.opendev.org/p/tc-xena-tracker Assignee: gmann [1] https://governance.openstack.org/tc/goals/selected/wallaby/migrate-policy-format-from-json-to-yaml.html [2] https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml [3] https://governance.openstack.org/tc/goals/selected/wallaby/migrate-to-privsep.html [4] https://governance.openstack.org/tc/goals/#goal-selection-schedule [5] https://governance.openstack.org/tc/reference/tags/index.html [6] https://www.youtube.com/watch?v=s4HOyAdQx8A [7] https://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes -gmann From syedammad83 at gmail.com Thu Apr 29 05:37:58 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Thu, 29 Apr 2021 10:37:58 +0500 Subject: [wallaby][trove] MySQL 8.0 Support Message-ID: Hi, I am using trove wallaby release and trying to create a database instance with MySQL 8.0. I have created a mysql datastore version 8.0. The instance deployment failed with ERROR. Digging it further in guest agent logs, it is found that MySQL 8.0 container is not booting up and getting failed with below errors. 2021-04-28 09:35:27+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.24-1debian10 started. 2021-04-28T09:35:28.496216Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.24) starting as process 1 2021-04-28T09:35:28.574978Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2021-04-28T09:35:29.318823Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. mysqld: Table 'mysql.plugin' doesn't exist 2021-04-28T09:35:29.566920Z 0 [ERROR] [MY-010735] [Server] Could not open the mysql.plugin table. Please perform the MySQL upgrade procedure. 
2021-04-28T09:35:29.568058Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables 2021-04-28T09:35:29.569103Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables 2021-04-28T09:35:29.570029Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables 2021-04-28T09:35:29.570986Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables 2021-04-28T09:35:29.571894Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables 2021-04-28T09:35:29.572835Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables 2021-04-28T09:35:29.573939Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables 2021-04-28T09:35:29.576559Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock 2021-04-28T09:35:29.665764Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened. 2021-04-28T09:35:29.705075Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened. 2021-04-28T09:35:29.718900Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. 2021-04-28T09:35:29.719698Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel. 2021-04-28T09:35:29.724833Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables 2021-04-28T09:35:29.725997Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-001146 - Table 'mysql.component' doesn't exist 2021-04-28T09:35:29.726837Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-003543 - The mysql.component table is missing or has an incorrect definition. 2021-04-28T09:35:29.727818Z 0 [ERROR] [MY-000067] [Server] unknown variable 'ignore-db-dir=lost+found'. 2021-04-28T09:35:29.729297Z 0 [ERROR] [MY-010119] [Server] Aborting 2021-04-28T09:35:31.237003Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.24) MySQL Community Server - GPL. get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:80 2021-04-28 09:35:31.266 930 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. wait_for_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:319 I think this requires some changes in config.template of mysql. -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Thu Apr 29 05:58:50 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Thu, 29 Apr 2021 10:58:50 +0500 Subject: [wallaby][trove] MySQL 8.0 Support In-Reply-To: References: Message-ID: It Looks like the problem is in the guest image. I am using victoria image from below repo. https://tarballs.opendev.org/openstack/trove/images/ On Thu, Apr 29, 2021 at 10:37 AM Ammad Syed wrote: > Hi, > > I am using trove wallaby release and trying to create a database instance > with MySQL 8.0. I have created a mysql datastore version 8.0. 
The instance > deployment failed with ERROR. > > Digging it further in guest agent logs, it is found that MySQL 8.0 > container is not booting up and getting failed with below errors. > > 2021-04-28 09:35:27+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL > Server 8.0.24-1debian10 started. > 2021-04-28T09:35:28.496216Z 0 [System] [MY-010116] [Server] > /usr/sbin/mysqld (mysqld 8.0.24) starting as process 1 > 2021-04-28T09:35:28.574978Z 1 [System] [MY-013576] [InnoDB] InnoDB > initialization has started. > 2021-04-28T09:35:29.318823Z 1 [System] [MY-013577] [InnoDB] InnoDB > initialization has ended. > mysqld: Table 'mysql.plugin' doesn't exist > 2021-04-28T09:35:29.566920Z 0 [ERROR] [MY-010735] [Server] Could not open > the mysql.plugin table. Please perform the MySQL upgrade procedure. > 2021-04-28T09:35:29.568058Z 0 [Warning] [MY-010441] [Server] Failed to > open optimizer cost constant tables > 2021-04-28T09:35:29.569103Z 0 [Warning] [MY-010441] [Server] Failed to > open optimizer cost constant tables > 2021-04-28T09:35:29.570029Z 0 [Warning] [MY-010441] [Server] Failed to > open optimizer cost constant tables > 2021-04-28T09:35:29.570986Z 0 [Warning] [MY-010441] [Server] Failed to > open optimizer cost constant tables > 2021-04-28T09:35:29.571894Z 0 [Warning] [MY-010441] [Server] Failed to > open optimizer cost constant tables > 2021-04-28T09:35:29.572835Z 0 [Warning] [MY-010441] [Server] Failed to > open optimizer cost constant tables > 2021-04-28T09:35:29.573939Z 0 [Warning] [MY-010441] [Server] Failed to > open optimizer cost constant tables > 2021-04-28T09:35:29.576559Z 0 [System] [MY-011323] [Server] X Plugin ready > for connections. Bind-address: '::' port: 33060, socket: > /var/run/mysqld/mysqlx.sock > 2021-04-28T09:35:29.665764Z 0 [Warning] [MY-010015] [Repl] Gtid table is > not ready to be used. Table 'mysql.gtid_executed' cannot be opened. > 2021-04-28T09:35:29.705075Z 0 [Warning] [MY-010015] [Repl] Gtid table is > not ready to be used. Table 'mysql.gtid_executed' cannot be opened. > 2021-04-28T09:35:29.718900Z 0 [Warning] [MY-010068] [Server] CA > certificate ca.pem is self signed. > 2021-04-28T09:35:29.719698Z 0 [System] [MY-013602] [Server] Channel > mysql_main configured to support TLS. Encrypted connections are now > supported for this channel. > 2021-04-28T09:35:29.724833Z 0 [Warning] [MY-010441] [Server] Failed to > open optimizer cost constant tables > 2021-04-28T09:35:29.725997Z 0 [ERROR] [MY-013129] [Server] A message > intended for a client cannot be sent there as no client-session is > attached. Therefore, we're sending the information to the error-log > instead: MY-001146 - Table 'mysql.component' doesn't exist > 2021-04-28T09:35:29.726837Z 0 [Warning] [MY-013129] [Server] A message > intended for a client cannot be sent there as no client-session is > attached. Therefore, we're sending the information to the error-log > instead: MY-003543 - The mysql.component table is missing or has an > incorrect definition. > 2021-04-28T09:35:29.727818Z 0 [ERROR] [MY-000067] [Server] unknown > variable 'ignore-db-dir=lost+found'. > 2021-04-28T09:35:29.729297Z 0 [ERROR] [MY-010119] [Server] Aborting > 2021-04-28T09:35:31.237003Z 0 [System] [MY-010910] [Server] > /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.24) MySQL Community Server > - GPL. 
get_actual_db_status > /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:80 > 2021-04-28 09:35:31.266 930 DEBUG trove.guestagent.datastore.service [-] > Waiting for DB status to change from running to healthy. wait_for_status > /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:319 > > I think this requires some changes in config.template of mysql. > > -- > Regards, > > > Syed Ammad Ali > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Apr 29 06:48:26 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 29 Apr 2021 08:48:26 +0200 Subject: [neutron] Drivers meeting agenda - 30.04.2021 Message-ID: <5348364.Jdb7Gc43JI@p1> Hi, Agenda for our tomorrow's drivers meeting is available at https://wiki.openstack.org/wiki/ Meetings/NeutronDrivers[1] We have 1 RFE to discuss: - https://bugs.launchpad.net/neutron/+bug/1922716[2] It was already discussed briefly during the PTG last week but let's get it official during the drivers meeting :) Have a great day and see You all tomorrow. -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers [2] https://bugs.launchpad.net/neutron/+bug/1922716 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From hberaud at redhat.com Thu Apr 29 09:34:21 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 29 Apr 2021 11:34:21 +0200 Subject: [oslo] Team meetings updates for Xena Message-ID: Hi Osloers, Please notice the following updates concerning the team's meetings during Xena. Meetings are now scheduled the first and the third Monday of each month. No longer weekly meetings. Meeting's courtesy ping list will be updated soon for Xena. If you want to continue to be called at the beginning of each meeting please add you nick here: https://wiki.openstack.org/wiki/Meetings/Oslo#Xena_Courtesy_Ping Thanks for your attention. -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From smooney at redhat.com  Thu Apr 29 09:27:05 2021
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 29 Apr 2021 10:27:05 +0100
Subject: An compute service hang issue
In-Reply-To: <57E3115F-31BE-45A1-A47B-278A7C459C57@vmware.com>
References: <4e5d3beb2f7921b3a494ca853621da5d59cda1f5.camel@redhat.com> <872c4008732daa005ec95856db078e9f973110bf.camel@redhat.com> <57E3115F-31BE-45A1-A47B-278A7C459C57@vmware.com>
Message-ID: <2fda14ec3fb83205a8d53c8213badf0b2cef7439.camel@redhat.com>

On Thu, 2021-04-29 at 00:54 +0000, Yingji Sun wrote:
> Sean,
>
> I think your comments on synchronized lock should be the root cause of my issue.
>
> In my logs, I see after a log of " Lock "compute_resources" acquired by ", my compute node get "stucked".
>
> Mar 12 01:13:58 controller-mpltc45f7n nova-compute[756]: 2021-03-12 01:13:58.044 1 DEBUG oslo_concurrency.lockutils [req-7f57447c-7aae-48fe-addd-46f80e80246a - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._safe_update_available_resource" :: waited 0.000s inner /usr/lib/python3.7/site-packages/oslo_concurrency/lockutils.py:327
>
> I think it is because the lock "compute_resources" is not released any more that caused this issue. At that time there is some mysql issue when calling _safe_update_available_resource . So I think the exception is not handled the the lock is not released.

we acquire the locks with decorators so that they can't be leaked in that way if there is an exception. but without the use of "fair" locks there is no guarantee which greenthread will be resumed, so individual requests can end up waiting a very long time. i think it's more likely that the lock is being acquired and released properly but some operations like the periodics might be starving the other operations and the api requests are just not getting processed.

> Yingji
>
> 在 4/27/21 下午4:11,"Sean Mooney" 写入:
>
> On Tue, 2021-04-27 at 01:19 +0000, Yingji Sun wrote:
> > > Sean,
> > >
> > > You are right. I am working with vmware driver. Is it possible that you share some code fix samples so that I can have a try in my environment ?
> > in the libvirt case we had service wide hangs
> > https://bugs.launchpad.net/nova/+bug/1840912 that were resolved by
> > https://github.com/openstack/nova/commit/36ee9c1913a449defd3b35f5ee5fb4afcd44169e
> >
> > so this synchronized decorator prints a log message which you should see
> > when it is acquired and released
> > https://github.com/openstack/oslo.concurrency/blob/4da91987d6ce7de2bb61c6ed760a019961a0a344/oslo_concurrency/lockutils.py#L355-L371
> > you should see that in the logs.
> >
> > i notice also that in your code you do not have the fair=true argument on master and for a few releases now
> > we have enabled the use of fair locking with
> > https://github.com/openstack/nova/commit/1ed9f9dac59c36cdda54a9852a1f93939b3ebbc3
> > to resolve long delays in the ironic driver https://bugs.launchpad.net/nova/+bug/1864122 but the same issues
> > would also affect vmware or any other clustered hypervisor where the resource tracker is managing multiple nodes.
> > its very possible that that is what is causing your current issues.
> >
> > >     @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE)
> > >
> > >         if instance.node:
> > >             LOG.warning("Node field should not be set on the instance "
> > >                         "until resources have been claimed.",
> > >                         instance=instance)
> > >
> > >         cn = self.compute_nodes[nodename]
> > >
> > >           I did not see the rabbitmq message that should be sent here.
> > >          > > >         pci_requests = objects.InstancePCIRequests.get_by_instance_uuid( > > >             context, instance.uuid) > > > > > > From hberaud at redhat.com Thu Apr 29 09:58:46 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 29 Apr 2021 11:58:46 +0200 Subject: [release] Release countdown for week R-23, April 26-30 Message-ID: Welcome back to the release countdown emails! These will be sent at major points in the Xena development cycle, which should conclude with a final release on 06 October, 2021. Development Focus ----------------- At this stage in the release cycle, focus should be on planning the Xena development cycle, assessing Xena community goals and approving Xena specs. General Information ------------------- Xena is a 25 weeks development cycle. In case you haven't seen it yet, please take a look over the schedule for this release: https://releases.openstack.org/xena/schedule.html By default, the team PTL is responsible for handling the release cycle and approving release requests. This task can (and probably should) be delegated to release liaisons. Now is a good time to review release liaison information for your team and make sure it is up to date: https://opendev.org/openstack/releases/src/branch/master/data/release_liaisons.yaml By default, all your team deliverables from the Xena release are continued in Xena with a similar release model. Upcoming Deadlines & Dates -------------------------- Xena-1 milestone: 27 May, 2021 Hervé Beraud and the Release Management team -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Apr 29 11:31:39 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 29 Apr 2021 13:31:39 +0200 Subject: Fwd: cinder + Unity In-Reply-To: References: Message-ID: <20210429113139.7pko53jc6bykf6h4@localhost> On 20/04, Albert Shih wrote: > Le 20/04/2021 à 00:35:46+0530, Rajat Dhasmana a écrit > Hi Rajat, > > > > > This might be something to look at with the wrong spelling causing mismatch. 
> >   > > > >   unity_io_ports = *_enp1s0 > >   unity_storage_pool_names = onering > > > > When I'm trying to create a storage through a > > > >     openstack volume create volumetest --type thick_volume_type --size 100 > > > > I don't even see (with tcpdump) the cinder server trying to connect to > > > >   onering-remote.FQDN > > > > Inside my /var/log/cinder/cinder-scheduler.log I have > > > >   2021-04-19 18:06:56.805 21315 INFO cinder.scheduler.base_filter > > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > > b1d58ebae6b84f7586ad63b94203d7ae - - -] Filtering removed all hosts for the > > request with volume ID '06e5f07d-766f-4d07-b3bf-6153a2cf6abd'. Filter > > results: AvailabilityZoneFilter: (start: 0, end: 0), CapacityFilter: > > (start: 0, end: 0), CapabilitiesFilter: (start: 0, end: 0) > > > > > > This log mentions that no host is valid to pass the 3 filters in the scheduler. > >   > > > >   2021-04-19 18:06:56.806 21315 WARNING cinder.scheduler.filter_scheduler > > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > > b1d58ebae6b84f7586ad63b94203d7ae - - -] No weighed backend found for volume > > with properties: {'id': '5f16fc1f-76ff-41ee-8927-56925cf7b00f', 'name': > > 'thick_volume_type', 'description': None, 'is_public': True, 'projects': > > [], 'extra_specs': {'provisioning:type': 'thick', > > 'thick_provisioning_support': 'True'}, 'qos_specs_id': None, 'created_at': > > '2021-04-19T15:07:09.000000', 'updated_at': None, 'deleted_at': None, > > 'deleted': False} > >   2021-04-19 18:06:56.806 21315 INFO cinder.message.api > > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > > b1d58ebae6b84f7586ad63b94203d7ae - - -] Creating message record for > > request_id = req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 > >   2021-04-19 18:06:56.811 21315 ERROR cinder.scheduler.flows.create_volume > > [req-4808cc9d-b9c3-44cb-8cae-7503db0b0256 f5e5c9ea20064b17851f07c276d71aee > > b1d58ebae6b84f7586ad63b94203d7ae - - -] Failed to run task > > cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask; > > volume:create: No valid backend was found. No weighed backends available: > > cinder.exception.NoValidBackend: No valid backend was found. No weighed > > backends available > > > > > > It seem (for me) cinder don't try to use unity.... > > > > > > > > The cinder-volume service is responsible for communicating with the backend and > > this create request fails on scheduler only, hence no sign of it. > > Absolutly. I didn't known I need to install the cinder-volume, inside the > docs it's seem the cinder-volume is for LVM backend. My bad. > > After installing the cinder-volume and > > > > Looking at the scheduler logs, there are a few things you can check: > > > > 1) execute ``cinder-manage service list`` command and check the status of > > cinder-volume service if it's active or not. If it shows an X sign then check > > in cinder-volume logs for any startup failure. > > 2) Check the volume type properties and see if ``volume_backend_name`` is set > > to the right value i.e. Unitiy_ISCSI (which looks suspicious because the > > spelling is wrong and there might be a mismatch somewhere) > > changing through openstack volume type I was able to create a volume on my > Unity storage unit. > > I'm not sure it's working perfectly because I still don't have nova and > neutron running. Hi, If you can create a volume in the array then you have confirmed that the control plane connection works. 
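For example, something like this (the volume type and image are placeholders; use whatever type points at
the Unity backend and any small Glance image):

    # control plane only: create an empty volume on the Unity backend
    openstack volume create --type thick_volume_type --size 1 unity-smoke-test

    # control + data plane: writing a Glance image into the volume forces the
    # cinder-volume host to attach it over iSCSI and do real I/O
    openstack volume create --image <glance-image-uuid> --size 5 unity-smoke-test-from-image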
To do a preliminary check of the data plane connection without needing nova, you can create a cinder volume from a glance image. That operation will ensure that the volume can be exported an mapped (control plane operation), that the host can connect via iSCSI to the volume, that cinder connects to glance correctly, and that it can do I/O on the connected volume. Cheers, Gorka. > > But now I'm going to configure nova and neutron. > > > > > Also it's good to mention the openstack version you're using since the code > > changes every cycle and it's hard to track the issues with every release. > > Sorry. I will do that next time. > > Thanks you very much. > -- > Albert SHIH > Observatoire de Paris > xmpp: jas at obspm.fr > Heure local/Local time: > Tue Apr 20 03:32:01 PM CEST 2021 > From geguileo at redhat.com Thu Apr 29 11:48:24 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 29 Apr 2021 13:48:24 +0200 Subject: [Cinder] Problem in iSCSI Portal In-Reply-To: References: Message-ID: <20210429114824.fnhln2enr6ww7klk@localhost> On 20/04, Taha Adel wrote: > Hello, > > The situation is, I have one storage node that has cinder-volume service up > and running on top of it and has a dedicated physical NIC for storage > traffic. I have set the following configuration in /etc/cinder/cinder.conf > file at the storage node: > > volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver > volume_group = cinder-volumes > target_protocol = iscsi > target_helper = lioadm > iscsi_ip_address = 10.0.102.11 (different than the management ip) > > The service is able to create the volume and attach it as a backstore for > iSCSI, but it can't create a Portal for iSCSI service. Should I create the > portal entry by myself? or is there a mistake I have made in the config > file? > > Thanks in advance Hi, What operation are you trying to do? What is the failure you are seeing? If you are creating a volume from image using the LVM backend, then there won't be an iSCSI target/portal for the operation. If you are attaching the volume to a nova instance and it's failing, there should be an error either on the cinder volume logs or the nova compute logs that would help figure out the issue. Cheers, Gorka. From eharney at redhat.com Thu Apr 29 14:20:39 2021 From: eharney at redhat.com (Eric Harney) Date: Thu, 29 Apr 2021 10:20:39 -0400 Subject: [glance][ceph] Openstack Wallaby and Ceph Pacific- ERROR cinder.scheduler.flows.create_volume In-Reply-To: References: Message-ID: <70636132-f3a6-d37c-e12a-9f8d2a5302a8@redhat.com> On 4/27/21 6:51 AM, Tecnologia Charne.Net wrote: > > Hello! > > I'm working with Openstack Wallaby (1 controller, 2 compute nodes) > connected to Ceph Pacific cluster in a devel environment. > > With Openstack Victoria and Ceph Pacific (before last friday update) > everything was running like a charm. > > Then, I upgraded Openstack to Wallaby and Ceph  to version 16.2.1. > (Because of auth_allow_insecure_global_id_reclaim I had to upgrade many > clients... but that's another story...) 
> > After upgrade, when I try to create a volume from image, > >      openstack volume create --image > f1df058d-be99-4401-82d9-4af9410744bc debian10_volume1 --size 5 > > with "show_image_direct_url = True", I get "No valid backend" in > /var/log/cinder/cinder-scheduler.log > > 2021-04-26 20:35:24.957 41348 ERROR cinder.scheduler.flows.create_volume > [req-651937e5-148f-409c-8296-33f200892e48 > c048e887df994f9cb978554008556546 f02ae99c34cf44fd8ab3b1fd1b3be964 - - -] > Failed to run task > cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: > No valid backend was found. Exceeded max scheduling attempts 3 for > resource 56fbb645-2c34-477d-9a59-beec78f4fd3f: > cinder.exception.NoValidBackend: No valid backend was found. Exceeded > max scheduling attempts 3 for resource 56fbb645-2c34-477d-9a59-beec78f4fd3f > > and > > 2021-04-26 20:35:24.968 41347 ERROR oslo_messaging.rpc.server > [req-651937e5-148f-409c-8296-33f200892e48 > c048e887df994f9cb978554008556546 f02ae99c34cf44fd8ab3b1fd > 1b3be964 - - -] Exception during message handling: rbd.InvalidArgument: > [errno 22] RBD invalid argument (error creating clone) > > in /var/log/cinder/cinder-volume.log > > > If I disable "show_image_direct_url = False", volume creation from image > works fine. > > > I have spent the last four days googling and reading lots of docs, old > and new ones, unlucly... > > Does anybody have a clue, (please)? > > Thanks in advance! > > > Javier.- > Hi, Some fixes are still in progress for Ceph Pacific support in Cinder. These WIP patches are targeting this problem: https://review.opendev.org/c/openstack/cinder/+/786260 https://review.opendev.org/c/openstack/cinder/+/786266 Thanks, Eric From james.slagle at gmail.com Thu Apr 29 15:53:57 2021 From: james.slagle at gmail.com (James Slagle) Date: Thu, 29 Apr 2021 11:53:57 -0400 Subject: =?UTF-8?Q?Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2Dcore?= Message-ID: I'm proposing we formally promote Cédric to full tripleo-core duties. He is already in the gerrit group with the understanding that his +2 is for validations. His experience and contributions have grown a lot since then, and I'd like to see that +2 expanded to all of TripleO. If there are no objections, we'll consider the change official at the end of next week. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Thu Apr 29 15:56:45 2021 From: james.slagle at gmail.com (James Slagle) Date: Thu, 29 Apr 2021 11:56:45 -0400 Subject: =?UTF-8?Q?=5BTripleO=5D_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tr?= =?UTF-8?Q?ipleo=2Dcore?= In-Reply-To: References: Message-ID: (resending with TripleO tag) On Thu, Apr 29, 2021 at 11:53 AM James Slagle wrote: > I'm proposing we formally promote Cédric to full tripleo-core duties. He > is already in the gerrit group with the understanding that his +2 is for > validations. His experience and contributions have grown a lot since then, > and I'd like to see that +2 expanded to all of TripleO. > > If there are no objections, we'll consider the change official at the end > of next week. > > -- > -- James Slagle > -- > -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johfulto at redhat.com Thu Apr 29 16:02:15 2021 From: johfulto at redhat.com (John Fulton) Date: Thu, 29 Apr 2021 12:02:15 -0400 Subject: =?UTF-8?Q?Re=3A_=5BTripleO=5D_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_fo?= =?UTF-8?Q?r_tripleo=2Dcore?= In-Reply-To: References: Message-ID: +1 On Thu, Apr 29, 2021 at 12:01 PM James Slagle wrote: > > (resending with TripleO tag) > > On Thu, Apr 29, 2021 at 11:53 AM James Slagle wrote: >> >> I'm proposing we formally promote Cédric to full tripleo-core duties. He is already in the gerrit group with the understanding that his +2 is for validations. His experience and contributions have grown a lot since then, and I'd like to see that +2 expanded to all of TripleO. >> >> If there are no objections, we'll consider the change official at the end of next week. >> >> -- >> -- James Slagle >> -- > > > > -- > -- James Slagle > -- From dvd at redhat.com Thu Apr 29 16:16:47 2021 From: dvd at redhat.com (David Vallee Delisle) Date: Thu, 29 Apr 2021 12:16:47 -0400 Subject: =?UTF-8?Q?Re=3A_=5BTripleO=5D_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_fo?= =?UTF-8?Q?r_tripleo=2Dcore?= In-Reply-To: References: Message-ID: +1 DVD On Thu, Apr 29, 2021 at 12:16 PM John Fulton wrote: > +1 > > On Thu, Apr 29, 2021 at 12:01 PM James Slagle > wrote: > > > > (resending with TripleO tag) > > > > On Thu, Apr 29, 2021 at 11:53 AM James Slagle > wrote: > >> > >> I'm proposing we formally promote Cédric to full tripleo-core duties. > He is already in the gerrit group with the understanding that his +2 is for > validations. His experience and contributions have grown a lot since then, > and I'd like to see that +2 expanded to all of TripleO. > >> > >> If there are no objections, we'll consider the change official at the > end of next week. > >> > >> -- > >> -- James Slagle > >> -- > > > > > > > > -- > > -- James Slagle > > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Thu Apr 29 16:24:46 2021 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 29 Apr 2021 10:24:46 -0600 Subject: =?UTF-8?Q?Re=3A_=5BTripleO=5D_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_fo?= =?UTF-8?Q?r_tripleo=2Dcore?= In-Reply-To: References: Message-ID: +1 On Thu, Apr 29, 2021 at 10:05 AM James Slagle wrote: > (resending with TripleO tag) > > On Thu, Apr 29, 2021 at 11:53 AM James Slagle > wrote: > >> I'm proposing we formally promote Cédric to full tripleo-core duties. He >> is already in the gerrit group with the understanding that his +2 is for >> validations. His experience and contributions have grown a lot since then, >> and I'd like to see that +2 expanded to all of TripleO. >> >> If there are no objections, we'll consider the change official at the end >> of next week. >> >> -- >> -- James Slagle >> -- >> > > > -- > -- James Slagle > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From beagles at redhat.com Thu Apr 29 18:01:29 2021 From: beagles at redhat.com (Brent Eagles) Date: Thu, 29 Apr 2021 15:31:29 -0230 Subject: =?UTF-8?Q?Re=3A_=5BTripleO=5D_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_fo?= =?UTF-8?Q?r_tripleo=2Dcore?= In-Reply-To: References: Message-ID: +1 On Thu, Apr 29, 2021 at 1:58 PM Alex Schultz wrote: > +1 > > On Thu, Apr 29, 2021 at 10:05 AM James Slagle > wrote: > >> (resending with TripleO tag) >> >> On Thu, Apr 29, 2021 at 11:53 AM James Slagle >> wrote: >> >>> I'm proposing we formally promote Cédric to full tripleo-core duties. 
He >>> is already in the gerrit group with the understanding that his +2 is for >>> validations. His experience and contributions have grown a lot since then, >>> and I'd like to see that +2 expanded to all of TripleO. >>> >>> If there are no objections, we'll consider the change official at the >>> end of next week. >>> >>> -- >>> -- James Slagle >>> -- >>> >> >> >> -- >> -- James Slagle >> -- >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Apr 29 18:08:26 2021 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 29 Apr 2021 14:08:26 -0400 Subject: [tripleo] removing myself from tripleo-core Message-ID: Hi people, If you didn't notice, I am not working on TripleO anymore (as a developer, but still a heavy user) and therefore my amount of contribution and awareness in the project makes me think I shouldn't be considered a maintainer anymore. I'll remove myself from tripleo-core in Gerrit. I just want to let you know that I'm not far, I'm still on IRC and sometimes participate in the chat. I can also provide support for the crappy code that I wrote (Alex made me do it most of the time). However don't expect me to be heavily involved like it used to be the case. I just want to say that I already miss a lot working on TripleO and in OpenStack in general. The Community, the maturity of the project and all that we built is something that can't be compared to anything else out there. My focus is now "Kubernetes (OpenShift) on top of OpenStack", heavily consuming what OpenStack has to offer and connecting the two worlds. It's a lot of fun and a lot of new things to learn! But I'm sure OpenStack won't go too far away from my plate, so let's keep in touch. Take care, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ksambor at redhat.com Thu Apr 29 18:27:13 2021 From: ksambor at redhat.com (Kamil Sambor) Date: Thu, 29 Apr 2021 20:27:13 +0200 Subject: =?UTF-8?Q?Re=3A_=5BTripleO=5D_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_fo?= =?UTF-8?Q?r_tripleo=2Dcore?= In-Reply-To: References: Message-ID: +1 On Thu, Apr 29, 2021 at 8:09 PM Brent Eagles wrote: > +1 > > On Thu, Apr 29, 2021 at 1:58 PM Alex Schultz wrote: > >> +1 >> >> On Thu, Apr 29, 2021 at 10:05 AM James Slagle >> wrote: >> >>> (resending with TripleO tag) >>> >>> On Thu, Apr 29, 2021 at 11:53 AM James Slagle >>> wrote: >>> >>>> I'm proposing we formally promote Cédric to full tripleo-core duties. >>>> He is already in the gerrit group with the understanding that his +2 is for >>>> validations. His experience and contributions have grown a lot since then, >>>> and I'd like to see that +2 expanded to all of TripleO. >>>> >>>> If there are no objections, we'll consider the change official at the >>>> end of next week. >>>> >>>> -- >>>> -- James Slagle >>>> -- >>>> >>> >>> >>> -- >>> -- James Slagle >>> -- >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Thu Apr 29 18:50:35 2021 From: amy at demarco.com (Amy Marrich) Date: Thu, 29 Apr 2021 13:50:35 -0500 Subject: [tripleo] removing myself from tripleo-core In-Reply-To: References: Message-ID: <6696B552-AE8F-4310-A7D7-E740E51B839F@demarco.com> Emilien, Thanks for all your hard work, contributions, and leadership! 
Amy (spotz) > On Apr 29, 2021, at 1:12 PM, Emilien Macchi wrote: > >  > Hi people, > > If you didn't notice, I am not working on TripleO anymore (as a developer, but still a heavy user) and therefore my amount of contribution and awareness in the project makes me think I shouldn't be considered a maintainer anymore. > I'll remove myself from tripleo-core in Gerrit. > > I just want to let you know that I'm not far, I'm still on IRC and sometimes participate in the chat. I can also provide support for the crappy code that I wrote (Alex made me do it most of the time). > However don't expect me to be heavily involved like it used to be the case. > > I just want to say that I already miss a lot working on TripleO and in OpenStack in general. The Community, the maturity of the project and all that we built is something that can't be compared to anything else out there. > My focus is now "Kubernetes (OpenShift) on top of OpenStack", heavily consuming what OpenStack has to offer and connecting the two worlds. > It's a lot of fun and a lot of new things to learn! But I'm sure OpenStack won't go too far away from my plate, so let's keep in touch. > > Take care, > -- > Emilien Macchi From marios at redhat.com Thu Apr 29 18:59:35 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 29 Apr 2021 21:59:35 +0300 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: On Thursday, April 29, 2021, James Slagle wrote: > I'm proposing we formally promote Cédric to full tripleo-core duties. He > is already in the gerrit group with the understanding that his +2 is for > validations. His experience and contributions have grown a lot since then, > and I'd like to see that +2 expanded to all of TripleO. > +1 thanks for raising the proposal slagle and thanks for all your hard work tengu > > > If there are no objections, we'll consider the change official at the end > of next week. > > -- > -- James Slagle > -- > -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tecno at charne.net Thu Apr 29 15:45:15 2021 From: tecno at charne.net (Tecnologia Charne.Net) Date: Thu, 29 Apr 2021 12:45:15 -0300 Subject: [glance][ceph] Openstack Wallaby and Ceph Pacific- ERROR cinder.scheduler.flows.create_volume In-Reply-To: <70636132-f3a6-d37c-e12a-9f8d2a5302a8@redhat.com> References: <70636132-f3a6-d37c-e12a-9f8d2a5302a8@redhat.com> Message-ID: <2439f209-86c8-e91b-097b-0dd5019ae5c8@charne.net> Thanks Eric! I'll see your patches. Javier.- El 29/4/21 a las 11:20, Eric Harney escribió: > On 4/27/21 6:51 AM, Tecnologia Charne.Net wrote: >> >> Hello! >> >> I'm working with Openstack Wallaby (1 controller, 2 compute nodes) >> connected to Ceph Pacific cluster in a devel environment. >> >> With Openstack Victoria and Ceph Pacific (before last friday update) >> everything was running like a charm. >> >> Then, I upgraded Openstack to Wallaby and Ceph  to version 16.2.1. >> (Because of auth_allow_insecure_global_id_reclaim I had to upgrade >> many clients... but that's another story...) 
>> >> After upgrade, when I try to create a volume from image, >> >>       openstack volume create --image >> f1df058d-be99-4401-82d9-4af9410744bc debian10_volume1 --size 5 >> >> with "show_image_direct_url = True", I get "No valid backend" in >> /var/log/cinder/cinder-scheduler.log >> >> 2021-04-26 20:35:24.957 41348 ERROR >> cinder.scheduler.flows.create_volume >> [req-651937e5-148f-409c-8296-33f200892e48 >> c048e887df994f9cb978554008556546 f02ae99c34cf44fd8ab3b1fd1b3be964 - - >> -] Failed to run task >> cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: >> No valid backend was found. Exceeded max scheduling attempts 3 for >> resource 56fbb645-2c34-477d-9a59-beec78f4fd3f: >> cinder.exception.NoValidBackend: No valid backend was found. Exceeded >> max scheduling attempts 3 for resource >> 56fbb645-2c34-477d-9a59-beec78f4fd3f >> >> and >> >> 2021-04-26 20:35:24.968 41347 ERROR oslo_messaging.rpc.server >> [req-651937e5-148f-409c-8296-33f200892e48 >> c048e887df994f9cb978554008556546 f02ae99c34cf44fd8ab3b1fd >> 1b3be964 - - -] Exception during message handling: >> rbd.InvalidArgument: [errno 22] RBD invalid argument (error creating >> clone) >> >> in /var/log/cinder/cinder-volume.log >> >> >> If I disable "show_image_direct_url = False", volume creation from >> image works fine. >> >> >> I have spent the last four days googling and reading lots of docs, >> old and new ones, unlucly... >> >> Does anybody have a clue, (please)? >> >> Thanks in advance! >> >> >> Javier.- >> > > > Hi, > > Some fixes are still in progress for Ceph Pacific support in Cinder. > > These WIP patches are targeting this problem: > > https://review.opendev.org/c/openstack/cinder/+/786260 > https://review.opendev.org/c/openstack/cinder/+/786266 > > Thanks, > Eric > From gmann at ghanshyammann.com Thu Apr 29 22:25:12 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 29 Apr 2021 17:25:12 -0500 Subject: [all][qa][cinder][octavia][murano] Devstack dropping support for Ubuntu Bionic 18.04 Message-ID: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> Hello Everyone, As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but it seems there are few jobs still running on Bionic. As devstack team is planning to drop the Bionic support you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch by 2nd week of May. - https://review.opendev.org/c/openstack/devstack/+/788754 I have not listed all the job but few of them which were failing with ' rtslib-fb-targetctl error' are below: Cinder- cinder-plugin-ceph-tempest-mn-aa - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 python-cinderclient - python-cinderclient-functional-py36 - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 Murani- murano-dashboard-sanity-check -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge. 
[1] https://governance.openstack.org/tc/reference/runtimes/victoria.html -gmann From jpodivin at redhat.com Fri Apr 30 06:17:26 2021 From: jpodivin at redhat.com (Jiri Podivin) Date: Fri, 30 Apr 2021 08:17:26 +0200 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: ++ without reservations. On Thu, Apr 29, 2021 at 9:05 PM Marios Andreou wrote: > > > On Thursday, April 29, 2021, James Slagle wrote: > >> I'm proposing we formally promote Cédric to full tripleo-core duties. He >> is already in the gerrit group with the understanding that his +2 is for >> validations. His experience and contributions have grown a lot since then, >> and I'd like to see that +2 expanded to all of TripleO. >> > > > +1 thanks for raising the proposal slagle and thanks for all your hard > work tengu > > > > > > > >> >> >> If there are no objections, we'll consider the change official at the end >> of next week. >> >> -- >> -- James Slagle >> -- >> > > > > > > > -- > _sent from my mobile - sorry for spacing spelling etc_ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Fri Apr 30 06:59:58 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Fri, 30 Apr 2021 12:29:58 +0530 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: On Thu, Apr 29, 2021 at 9:29 PM James Slagle wrote: > I'm proposing we formally promote Cédric to full tripleo-core duties. He > is already in the gerrit group with the understanding that his +2 is for > validations. His experience and contributions have grown a lot since then, > and I'd like to see that +2 expanded to all of TripleO. > > +1 > If there are no objections, we'll consider the change official at the end > of next week. > > -- > -- James Slagle > -- > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at redhat.com Fri Apr 30 07:30:20 2021 From: jaosorior at redhat.com (Juan Osorio Robles) Date: Fri, 30 Apr 2021 10:30:20 +0300 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: +1 Great job Cédric! On Thu, 29 Apr 2021 at 18:55, James Slagle wrote: > I'm proposing we formally promote Cédric to full tripleo-core duties. He > is already in the gerrit group with the understanding that his +2 is for > validations. His experience and contributions have grown a lot since then, > and I'd like to see that +2 expanded to all of TripleO. > > If there are no objections, we'll consider the change official at the end > of next week. > > -- > -- James Slagle > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Fri Apr 30 07:38:09 2021 From: ykarel at redhat.com (Yatin Karel) Date: Fri, 30 Apr 2021 13:08:09 +0530 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: On Thu, Apr 29, 2021 at 9:27 PM James Slagle wrote: > > I'm proposing we formally promote Cédric to full tripleo-core duties. He is already in the gerrit group with the understanding that his +2 is for validations. His experience and contributions have grown a lot since then, and I'd like to see that +2 expanded to all of TripleO. 
> +1 > If there are no objections, we'll consider the change official at the end of next week. > > -- > -- James Slagle > -- Thanks and Regards Yatin Karel From chkumar at redhat.com Fri Apr 30 07:55:52 2021 From: chkumar at redhat.com (Chandan Kumar) Date: Fri, 30 Apr 2021 13:25:52 +0530 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: On Fri, Apr 30, 2021 at 1:10 PM Yatin Karel wrote: > > On Thu, Apr 29, 2021 at 9:27 PM James Slagle wrote: > > > > I'm proposing we formally promote Cédric to full tripleo-core duties. He is already in the gerrit group with the understanding that his +2 is for validations. His experience and contributions have grown a lot since then, and I'd like to see that +2 expanded to all of TripleO. > > +1 Thanks, Chandan Kumar From ltoscano at redhat.com Fri Apr 30 08:53:32 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 30 Apr 2021 10:53:32 +0200 Subject: [all][qa][cinder][octavia][murano] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> Message-ID: <4829611.ejJDZkT8p0@whitebase.usersys.redhat.com> On Friday, 30 April 2021 00:25:12 CEST Ghanshyam Mann wrote: > Hello Everyone, > > As per the testing runtime since Victoria [1], we need to move our CI/CD to > Ubuntu Focal 20.04 but it seems there are few jobs still running on Bionic. > As devstack team is planning to drop the Bionic support you need to move > those to Focal otherwise they will start failing. We are planning to merge > the devstack patch by 2nd week of May. > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > I have not listed all the job but few of them which were failing with ' > rtslib-fb-targetctl error' are below: > > Cinder- cinder-plugin-ceph-tempest-mn-aa > - > https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f6 > 69eec72fee7ce/.zuul.yaml#L166 Looking at this job, I suspect the idea was just to use the proper nodeset with an exiting job, and at the time the default nodeset was the bionic one. I suspect we may avoid future bumps for this job (and probably others) by defining a set of nodeset to track the default nodeset used by devstack. We would just need openstack-single-node-devstackdefault, openstack-two-nodes- devstackdefault. Unfortunately the nodeset definitions don't support inheritance or aliases, so that would mean duplicating some definition in the devstack repository, but - it would be just one additional place to maintain - aliasing could be added to zuul in the future maybe. What do you think? > > python-cinderclient - python-cinderclient-functional-py36 > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > Octavia- > https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul > .d/jobs.yaml#L182 > > Murani- murano-dashboard-sanity-check > -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6 > 650450273004a41575d2d68/.zuul.yaml#L15 For the record, this is the last legacy job left voting in the gates, but it is a bit tricky to port, as it tries to run horizon integration tests with a custom setup. It may be ported by just wrapping the old scripts in the meantime, but I suspect it's broken anyway now. 
-- Luigi From sgolovat at redhat.com Fri Apr 30 10:26:47 2021 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Fri, 30 Apr 2021 12:26:47 +0200 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: Hi. +1 чт, 29 апр. 2021 г. в 17:58, James Slagle : > I'm proposing we formally promote Cédric to full tripleo-core duties. He > is already in the gerrit group with the understanding that his +2 is for > validations. His experience and contributions have grown a lot since then, > and I'd like to see that +2 expanded to all of TripleO. > > If there are no objections, we'll consider the change official at the end > of next week. > > -- > -- James Slagle > -- > -- Sergii Golovatiuk Senior Software Developer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From premkumar at aarnanetworks.com Fri Apr 30 05:06:32 2021 From: premkumar at aarnanetworks.com (Premkumar Subramaniyan) Date: Fri, 30 Apr 2021 10:36:32 +0530 Subject: Openstack Stack issues Message-ID: Hi, I am using the Openstack *USURI *version in *Centos7*. Due to some issues my disk size is full,I freed up the space. Afte that some service went down. After that I have issues in creating the stack and list stack. Attached the Screenshot for your reference [opnfv at aio1 ~]$ openstack stack list --debug START with options: stack list --debug options: Namespace(access_token='***', access_token_endpoint='', access_token_type='', application_credential_id='', application_credential_name='', application_credential_secret='***', auth_methods='', auth_type='', auth_url='http://10.200.134.218:5000/v3', cacert=None, cert='', client_id='', client_secret='***', cloud='', code='', debug=True, default_domain='default', default_domain_id='', default_domain_name='', deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', endpoint='', identity_provider='', insecure=None, interface='public', key='', log_file=None, openid_scope='', os_beta_command=False, os_compute_api_version='', os_identity_api_version='3', os_image_api_version='', os_network_api_version='', os_object_api_version='', os_orchestration_api_version='1', os_project_id=None, os_project_name=None, os_volume_api_version='', passcode='', password='***', project_domain_id='default', project_domain_name='', project_id='0e75229392c54a398ab302a8e43aebe6', project_name='admin', protocol='', redirect_uri='', region_name='RegionOne', remote_project_domain_id='', remote_project_domain_name='', remote_project_id='', remote_project_name='', service_provider='', system_scope='', timing=False, token='***', trust_id='', user_domain_id='', user_domain_name='Default', user_id='', username='admin', verbose_level=3, verify=None) Auth plugin password selected auth_config_hook(): {'api_timeout': None, 'verify': True, 'cacert': None, 'cert': None, 'key': None, 'baremetal_status_code_retries': '5', 'baremetal_introspection_status_code_retries': '5', 'image_status_code_retries': '5', 'disable_vendor_agent': {}, 'interface': 'public', 'floating_ip_source': 'neutron', 'image_api_use_tasks': False, 'image_format': 'qcow2', 'message': '', 'network_api_version': '2', 'object_store_api_version': '1', 'secgroup_source': 'neutron', 'status': 'active', 'auth': {'user_domain_name': 'Default', 'project_domain_id': 'default', 'project_id': '0e75229392c54a398ab302a8e43aebe6', 'project_name': 'admin'}, 'verbose_level': 3, 'deferred_help': False, 'debug': True, 'region_name': 'RegionOne', 
'default_domain': 'default', 'timing': False, 'username': 'admin', 'password': '***', 'auth_url': ' http://10.200.134.218:5000/v3', 'beta_command': False, 'identity_api_version': '3', 'orchestration_api_version': '1', 'auth_type': 'password', 'networks': []} defaults: {'api_timeout': None, 'verify': True, 'cacert': None, 'cert': None, 'key': None, 'auth_type': 'password', 'baremetal_status_code_retries': 5, 'baremetal_introspection_status_code_retries': 5, 'image_status_code_retries': 5, 'disable_vendor_agent': {}, 'interface': None, 'floating_ip_source': 'neutron', 'image_api_use_tasks': False, 'image_format': 'qcow2', 'message': '', 'network_api_version': '2', 'object_store_api_version': '1', 'secgroup_source': 'neutron', 'status': 'active'} cloud cfg: {'api_timeout': None, 'verify': True, 'cacert': None, 'cert': None, 'key': None, 'baremetal_status_code_retries': '5', 'baremetal_introspection_status_code_retries': '5', 'image_status_code_retries': '5', 'disable_vendor_agent': {}, 'interface': 'public', 'floating_ip_source': 'neutron', 'image_api_use_tasks': False, 'image_format': 'qcow2', 'message': '', 'network_api_version': '2', 'object_store_api_version': '1', 'secgroup_source': 'neutron', 'status': 'active', 'auth': {'user_domain_name': 'Default', 'project_domain_id': 'default', 'project_id': '0e75229392c54a398ab302a8e43aebe6', 'project_name': 'admin'}, 'verbose_level': 3, 'deferred_help': False, 'debug': True, 'region_name': 'RegionOne', 'default_domain': 'default', 'timing': False, 'username': 'admin', 'password': '***', 'auth_url': ' http://10.200.134.218:5000/v3', 'beta_command': False, 'identity_api_version': '3', 'orchestration_api_version': '1', 'auth_type': 'password', 'networks': []} compute API version 2.1, cmd group openstack.compute.v2 identity API version 3, cmd group openstack.identity.v3 image API version 2, cmd group openstack.image.v2 network API version 2, cmd group openstack.network.v2 object_store API version 1, cmd group openstack.object_store.v1 volume API version 3, cmd group openstack.volume.v3 orchestration API version 1, cmd group openstack.orchestration.v1 command: stack list -> heatclient.osc.v1.stack.ListStack (auth=True) Auth plugin password selected auth_config_hook(): {'api_timeout': None, 'verify': True, 'cacert': None, 'cert': None, 'key': None, 'baremetal_status_code_retries': '5', 'baremetal_introspection_status_code_retries': '5', 'image_status_code_retries': '5', 'disable_vendor_agent': {}, 'interface': 'public', 'floating_ip_source': 'neutron', 'image_api_use_tasks': False, 'image_format': 'qcow2', 'message': '', 'network_api_version': '2', 'object_store_api_version': '1', 'secgroup_source': 'neutron', 'status': 'active', 'auth': {'user_domain_name': 'Default', 'project_domain_id': 'default', 'project_id': '0e75229392c54a398ab302a8e43aebe6', 'project_name': 'admin'}, 'additional_user_agent': [('osc-lib', '2.3.1')], 'verbose_level': 3, 'deferred_help': False, 'debug': True, 'region_name': 'RegionOne', 'default_domain': 'default', 'timing': False, 'username': 'admin', 'password': '***', 'auth_url': 'http://10.200.134.218:5000/v3', 'beta_command': False, 'identity_api_version': '3', 'orchestration_api_version': '1', 'auth_type': 'password', 'networks': []} Using auth plugin: password Using parameters {'auth_url': 'http://10.200.134.218:5000/v3', 'project_id': '0e75229392c54a398ab302a8e43aebe6', 'project_name': 'admin', 'project_domain_id': 'default', 'username': 'admin', 'user_domain_name': 'Default', 'password': '***'} Get auth_ref REQ: curl -g 
-i -X GET http://10.200.134.218:5000/v3 -H "Accept: application/json" -H "User-Agent: openstacksdk/0.55.0 keystoneauth1/4.3.1 python-requests/2.25.1 CPython/3.6.8" Starting new HTTP connection (1): 10.200.134.218:5000 http://10.200.134.218:5000 "GET /v3 HTTP/1.1" 200 254 RESP: [200] Connection: close Content-Length: 254 Content-Security-Policy: default-src 'self' https: wss:; Content-Type: application/json Date: Thu, 29 Apr 2021 05:26:21 GMT Server: nginx/1.18.0 Vary: X-Auth-Token X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block x-openstack-request-id: req-5e95ffe0-5fde-4234-9e7c-f79287e539a0 RESP BODY: {"version": {"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": " http://10.200.134.218:5000/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}} GET call to http://10.200.134.218:5000/v3 used request id req-5e95ffe0-5fde-4234-9e7c-f79287e539a0 Making authentication request to http://10.200.134.218:5000/v3/auth/tokens Resetting dropped connection: 10.200.134.218 http://10.200.134.218:5000 "POST /v3/auth/tokens HTTP/1.1" 201 6037 {"token": {"methods": ["password"], "user": {"domain": {"id": "default", "name": "Default"}, "id": "b3c6351eb7d841678297401969eb01c7", "name": "admin", "password_expires_at": null}, "audit_ids": ["-pXpoN1cRyS5ioGPNOEdkg"], "expires_at": "2021-04-29T17:26:22.000000Z", "issued_at": "2021-04-29T05:26:22.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "0e75229392c54a398ab302a8e43aebe6", "name": "admin"}, "is_domain": false, "roles": [{"id": "5123dc78efec436dad719099e4979536", "name": "admin"}, {"id": "6ddb974942354b53a6cc1dfe10cc58a9", "name": "member"}, {"id": "24de7013b1124fcd99c31482ede6a855", "name": "reader"}], "catalog": [{"endpoints": [{"id": "9ef24c86cb5e49f6aecb882adfe8b8be", "interface": "admin", "region_id": "RegionOne", "url": "http://172.29.236.100:5000", "region": "RegionOne"}, {"id": "c43060c06da940ab9c9de47d96efca5f", "interface": "internal", "region_id": "RegionOne", "url": " http://172.29.236.100:5000", "region": "RegionOne"}, {"id": "ff7a8af4a859400584fc2f4025f8110a", "interface": "public", "region_id": "RegionOne", "url": "http://10.200.134.218:5000", "region": "RegionOne"}], "id": "0177ada716b8496390b53f88de002762", "type": "identity", "name": "keystone"}, {"endpoints": [{"id": "01dcd05263a5499a9222459614c19099", "interface": "admin", "region_id": "RegionOne", "url": " http://172.29.236.100:8776/v2/0e75229392c54a398ab302a8e43aebe6", "region": "RegionOne"}, {"id": "8f8d9172c7b94baf8971d6fce50c3c22", "interface": "public", "region_id": "RegionOne", "url": " http://10.200.134.218:8776/v2/0e75229392c54a398ab302a8e43aebe6", "region": "RegionOne"}, {"id": "abe2895da1fc47e2a795136a0f154dad", "interface": "internal", "region_id": "RegionOne", "url": " http://172.29.236.100:8776/v2/0e75229392c54a398ab302a8e43aebe6", "region": "RegionOne"}], "id": "2a8111a2e7224baaa928ba19545754cd", "type": "volumev2", "name": "cinderv2"}, {"endpoints": [{"id": "027033523daf4adc8d1a180e41f57c00", "interface": "admin", "region_id": "RegionOne", "url": " http://172.29.236.100:8004/v1/0e75229392c54a398ab302a8e43aebe6", "region": "RegionOne"}, {"id": "765c7ea8a9c4456bb23ba5d519ffa033", "interface": "internal", "region_id": "RegionOne", "url": " http://172.29.236.100:8004/v1/0e75229392c54a398ab302a8e43aebe6", "region": "RegionOne"}, {"id": "f3a264f72c0a4960b9740e466eb52d83", "interface": "public", 
"region_id": "RegionOne", "url": " http://10.200.134.218:8004/v1/0e75229392c54a398ab302a8e43aebe6", "region": "RegionOne"}], "id": "31803d9d7a3e4d4e92d9522459526a68", "type": "orchestration", "name": "heat"}, {"endpoints": [{"id": "1183338890744524b28c274ca61e93ff", "interface": "admin", "region_id": "RegionOne", "url": "http://172.29.236.100:8780", "region": "RegionOne"}, {"id": "21bcc1ebb4de44d2931f0220d6fa1576", "interface": "internal", "region_id": "RegionOne", "url": "http://172.29.236.100:8780", "region": "RegionOne"}, {"id": "70f912ca1dc44c3a826e987d1ea9f5b4", "interface": "public", "region_id": "RegionOne", "url": "http://10.200.134.218:8780", "region": "RegionOne"}], "id": "5c4974e041d34241bc7edf17f8ce07ae", "type": "placement", "name": "placement"}, {"endpoints": [{"id": "3c105203c90845d0a523f34cf6bc037b", "interface": "public", "region_id": "RegionOne", "url": "http://10.200.134.218:8000/v1", "region": "RegionOne"}, {"id": "cf77bee702da43d68aee3863d4fa68f0", "interface": "internal", "region_id": "RegionOne", "url": "http://172.29.236.100:8000/v1", "region": "RegionOne"}, {"id": "deda0fe9872d472e8ad04bf9cf65caab", "interface": "admin", "region_id": "RegionOne", "url": " http://172.29.236.100:8000/v1", "region": "RegionOne"}], "id": "675d604be05245e38156198a2e0c955c", "type": "cloudformation", "name": "heat-cfn"}, {"endpoints": [{"id": "0f358e9c8a9e4d4aa87ba5827c5612a7", "interface": "public", "region_id": "RegionOne", "url": " http://10.200.134.218:9696", "region": "RegionOne"}, {"id": "e0fda98441e442ad84ad9638b97fb5a9", "interface": "admin", "region_id": "RegionOne", "url": "http://172.29.236.100:9696", "region": "RegionOne"}, {"id": "f63223fe921a44e48fffaa90a22619b2", "interface": "internal", "region_id": "RegionOne", "url": "http://172.29.236.100:9696", "region": "RegionOne"}], "id": "679d6c52473b4b87867b0839238a2533", "type": "network", "name": "neutron"}, {"endpoints": [{"id": "04618143b8764565bd1f292c481ec9fc", "interface": "internal", "region_id": "RegionOne", "url": "http://172.29.236.100:9292", "region": "RegionOne"}, {"id": "131339c1c9564d0d9f0cfdf6311c2893", "interface": "admin", "region_id": "RegionOne", "url": "http://172.29.236.100:9292", "region": "RegionOne"}, {"id": "f0725b46e74549d1b3d58efaa46dbf02", "interface": "public", "region_id": "RegionOne", "url": "http://10.200.134.218:9292", "region": "RegionOne"}], "id": "df050da4381c4209968693e4aca92da5", "type": "image", "name": "glance"}, {"endpoints": [{"id": "7cb5429849d741238e31f929132b224d", "interface": "admin", "region_id": "RegionOne", "url": "http://172.29.236.100:8774/v2.1", "region": "RegionOne"}, {"id": "ab640e3d4ccf4d66832b0d2d38f30be9", "interface": "internal", "region_id": "RegionOne", "url": " http://172.29.236.100:8774/v2.1", "region": "RegionOne"}, {"id": "d707d5092628423397cb02a46cb1deca", "interface": "public", "region_id": "RegionOne", "url": "http://10.200.134.218:8774/v2.1", "region": "RegionOne"}], "id": "e807659ac4f74b78805b0f9e605081f0", "type": "compute", "name": "nova"}, {"endpoints": [{"id": "152f11cd02de47069bb7fee11954395d", "interface": "public", "region_id": "RegionOne", "url": " http://10.200.134.218:8776/v3/0e75229392c54a398ab302a8e43aebe6", "region": "RegionOne"}, {"id": "71b96d9f9b994d72bf21c10810a532f9", "interface": "admin", "region_id": "RegionOne", "url": " http://172.29.236.100:8776/v3/0e75229392c54a398ab302a8e43aebe6", "region": "RegionOne"}, {"id": "935b1c8e4da547ca891334fe476291b6", "interface": "internal", "region_id": "RegionOne", "url": " 
http://172.29.236.100:8776/v3/0e75229392c54a398ab302a8e43aebe6", "region": "RegionOne"}], "id": "ff62f9bcded14444bbb4f1e1cdcaf60b", "type": "volumev3", "name": "cinderv3"}]}} run(Namespace(all_projects=False, columns=[], deleted=False, fit_width=False, formatter='table', hidden=False, limit=None, long=False, marker=None, max_width=0, nested=False, noindent=False, print_empty=False, properties=None, quote_mode='nonnumeric', short=False, sort=None, sort_columns=[], sort_direction=None, tag_mode=None, tags=None)) take_action(Namespace(all_projects=False, columns=[], deleted=False, fit_width=False, formatter='table', hidden=False, limit=None, long=False, marker=None, max_width=0, nested=False, noindent=False, print_empty=False, properties=None, quote_mode='nonnumeric', short=False, sort=None, sort_columns=[], sort_direction=None, tag_mode=None, tags=None)) Instantiating orchestration client: REQ: curl -g -i -X GET http://10.200.134.218:8004/v1/0e75229392c54a398ab302a8e43aebe6/stacks? -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-heatclient" -H "X-Auth-Token: {SHA256}a0ae405fe8fdb29acbcc65b1ce4feb9a304448592d6a8ae982955e903f45f3c0" -H "X-Region-Name: RegionOne" Starting new HTTP connection (1): 10.200.134.218:8004 http://10.200.134.218:8004 "GET /v1/0e75229392c54a398ab302a8e43aebe6/stacks HTTP/1.1" 503 None RESP: [503] Cache-Control: no-cache Connection: close Content-Type: text/html RESP BODY: Omitted, Content-Type is set to text/html. Only application/json responses have their bodies logged. ERROR: b'

503 Service Unavailable
\nNo server is available to handle this request.\n\n' Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/cliff/app.py", line 401, in run_subcommand result = cmd.run(parsed_args) File "/usr/local/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in run return super(Command, self).run(parsed_args) File "/usr/local/lib/python3.6/site-packages/cliff/display.py", line 115, in run column_names, data = self.take_action(parsed_args) File "/usr/local/lib/python3.6/site-packages/heatclient/osc/v1/stack.py", line 577, in take_action return _list(client, args=parsed_args) File "/usr/local/lib/python3.6/site-packages/heatclient/osc/v1/stack.py", line 691, in _list data = list(data) File "/usr/local/lib/python3.6/site-packages/heatclient/v1/stacks.py", line 136, in paginate stacks = self._list(url, 'stacks') File "/usr/local/lib/python3.6/site-packages/heatclient/common/base.py", line 114, in _list body = self.client.get(url).json() File "/usr/local/lib/python3.6/site-packages/keystoneauth1/adapter.py", line 395, in get return self.request(url, 'GET', **kwargs) File "/usr/local/lib/python3.6/site-packages/heatclient/common/http.py", line 323, in request raise exc.from_response(resp) heatclient.exc.HTTPServiceUnavailable: ERROR: b'

503 Service Unavailable
\nNo server is available to handle this request.\n\n' clean_up ListStack: ERROR: b'

503 Service Unavailable
\nNo server is available to handle this request.\n\n' END return value: 1 Warm Regards, Premkumar Subramaniyan Technical staff M: +91 9940743669 *CRN Top 10 Coolest Edge Computing Startups of 2020 * -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot_2021-04-30 Stacks - OpenStack Dashboard.png Type: image/png Size: 22264 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot_2021-04-30 System Information - OpenStack Dashboard(1).png Type: image/png Size: 28556 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot_2021-04-30 Hypervisors - OpenStack Dashboard.png Type: image/png Size: 31447 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot_2021-04-30 System Information - OpenStack Dashboard.png Type: image/png Size: 32880 bytes Desc: not available URL: From gthiemonge at redhat.com Fri Apr 30 12:48:21 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Fri, 30 Apr 2021 14:48:21 +0200 Subject: [all][qa][cinder][octavia][murano] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> Message-ID: On Fri, Apr 30, 2021 at 12:27 AM Ghanshyam Mann wrote: > Hello Everyone, > > As per the testing runtime since Victoria [1], we need to move our CI/CD > to Ubuntu Focal 20.04 but > it seems there are few jobs still running on Bionic. As devstack team is > planning to drop the Bionic support > you need to move those to Focal otherwise they will start failing. We are > planning to merge the devstack patch > by 2nd week of May. > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > I have not listed all the job but few of them which were failing with ' > rtslib-fb-targetctl error' are below: > > Cinder- cinder-plugin-ceph-tempest-mn-aa > - > https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > python-cinderclient - python-cinderclient-functional-py36 > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > Octavia- > https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 Thanks, we will work to fix it for Octavia > Murani- murano-dashboard-sanity-check > - > https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > Also if your 3rd party CI is still running on Bionic, you can plan to > migrate it to Focal before devstack patch merge. > > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html > > -gmann > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpeacock at redhat.com Fri Apr 30 13:58:19 2021 From: dpeacock at redhat.com (David Peacock) Date: Fri, 30 Apr 2021 09:58:19 -0400 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: +1 On Fri, Apr 30, 2021 at 6:33 AM Sergii Golovatiuk wrote: > Hi. > > +1 > > чт, 29 апр. 2021 г. в 17:58, James Slagle : > >> I'm proposing we formally promote Cédric to full tripleo-core duties. 
He >> is already in the gerrit group with the understanding that his +2 is for >> validations. His experience and contributions have grown a lot since then, >> and I'd like to see that +2 expanded to all of TripleO. >> >> If there are no objections, we'll consider the change official at the end >> of next week. >> >> -- >> -- James Slagle >> -- >> > > > -- > Sergii Golovatiuk > > Senior Software Developer > > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fpantano at redhat.com Fri Apr 30 13:58:27 2021 From: fpantano at redhat.com (Francesco Pantano) Date: Fri, 30 Apr 2021 15:58:27 +0200 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: +1 On Thu, Apr 29, 2021 at 6:01 PM James Slagle wrote: > I'm proposing we formally promote Cédric to full tripleo-core duties. He > is already in the gerrit group with the understanding that his +2 is for > validations. His experience and contributions have grown a lot since then, > and I'd like to see that +2 expanded to all of TripleO. > > If there are no objections, we'll consider the change official at the end > of next week. > > -- > -- James Slagle > -- > -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Fri Apr 30 14:31:07 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Fri, 30 Apr 2021 16:31:07 +0200 Subject: [release][ironic] ironic-python-agent-builder release model change In-Reply-To: References: Message-ID: Hi Riccardo, Thanks for raising this! I do like the idea of having stable branches for the ipa-builder +1 Em seg., 26 de abr. de 2021 às 12:03, Riccardo Pittau escreveu: > Hello fellow openstackers! > > During the recent xena ptg, the ironic community had a discussion about > the need to move the ironic-python-agent-builder project from an > independent model to the standard release model. > When we initially split the builder from ironic-python-agent, we decided > against it, but considering some problems we encountered during the road, > the ironic community seems to be in favor of the change. > The reasons for this are mainly to strictly align the image building > project to ironic-python-agent releases, and ease dealing with the > occasional upgrade of tinycore linux, the base image used to build the > "tinyipa" ironic-python-agent ramdisk. > > We'd like to involve the release team to ask for advice, not only on the > process, but also considering that we need to ask to cut the first branch > for the wallaby stable release, and we know we're a bit late for that! :) > > Thank you in advance for your help! > > Riccardo > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tkajinam at redhat.com Fri Apr 30 14:34:35 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 30 Apr 2021 23:34:35 +0900 Subject: =?UTF-8?Q?Re=3A_=5BTripleO=5D_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_fo?= =?UTF-8?Q?r_tripleo=2Dcore?= In-Reply-To: References: Message-ID: +1 On Fri, Apr 30, 2021 at 1:08 James Slagle wrote: > (resending with TripleO tag) > > On Thu, Apr 29, 2021 at 11:53 AM James Slagle > wrote: > >> I'm proposing we formally promote Cédric to full tripleo-core duties. He >> is already in the gerrit group with the understanding that his +2 is for >> validations. His experience and contributions have grown a lot since then, >> and I'd like to see that +2 expanded to all of TripleO. >> >> If there are no objections, we'll consider the change official at the end >> of next week. >> >> -- >> -- James Slagle >> -- >> > > > -- > -- James Slagle > -- > -- ---------- Takashi Kajinami Principal Software Maintenance Engineer Customer Experience and Engagement Red Hat email: tkajinam at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mschuppert at redhat.com Fri Apr 30 14:39:29 2021 From: mschuppert at redhat.com (Martin Schuppert) Date: Fri, 30 Apr 2021 16:39:29 +0200 Subject: =?UTF-8?Q?Re=3A_=5BTripleO=5D_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_fo?= =?UTF-8?Q?r_tripleo=2Dcore?= In-Reply-To: References: Message-ID: +1 On Thu, Apr 29, 2021 at 6:01 PM James Slagle wrote: > (resending with TripleO tag) > > On Thu, Apr 29, 2021 at 11:53 AM James Slagle > wrote: > >> I'm proposing we formally promote Cédric to full tripleo-core duties. He >> is already in the gerrit group with the understanding that his +2 is for >> validations. His experience and contributions have grown a lot since then, >> and I'd like to see that +2 expanded to all of TripleO. >> >> If there are no objections, we'll consider the change official at the end >> of next week. >> >> -- >> -- James Slagle >> -- >> > > > -- > -- James Slagle > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Apr 30 17:02:09 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 30 Apr 2021 17:02:09 +0000 Subject: [interop] Weekly meeting is at Fridays 4:00pm UTC Message-ID: Every Fridays at 16:00 UTC <16:00%20UTC > on https://meetpad.opendev.org/Interop-WG-weekly-meeting Current Meeting Logs on etherpad - https://etherpad.opendev.org/p/interop Wiki is updated with the latest info. https://wiki.openstack.org/wiki/Governance/InteropWG Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Apr 30 17:16:46 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 30 Apr 2021 10:16:46 -0700 Subject: [all][qa][cinder][octavia][murano] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> Message-ID: Does this mean the community is tagging all stable branches before Victoria as End-of-Life? According to the PTI, bionic was the tested platform for Train[1] and Ussuri[2] which are still under "Maintained"[3]. 
Since devstack is branchless[4], removing support for bionic would
limit our ability to continue to support/test those stable branches.

Don't get me wrong, I am not a supporter of maintaining older branches
for too long, but Train still seems relevant for a lot of people.

Michael

[1] https://governance.openstack.org/tc/reference/runtimes/train.html
[2] https://governance.openstack.org/tc/reference/runtimes/ussuri.html
[3] https://releases.openstack.org/
[4] https://github.com/openstack/devstack/tags

On Thu, Apr 29, 2021 at 3:35 PM Ghanshyam Mann wrote:
>
> Hello Everyone,
>
> As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but
> it seems there are a few jobs still running on Bionic. As devstack team is planning to drop the Bionic support
> you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch
> by 2nd week of May.
>
> - https://review.opendev.org/c/openstack/devstack/+/788754
>
> I have not listed all the jobs but a few of them which were failing with 'rtslib-fb-targetctl error' are below:
>
> Cinder- cinder-plugin-ceph-tempest-mn-aa
> - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166
>
> python-cinderclient - python-cinderclient-functional-py36
> - https://review.opendev.org/c/openstack/python-cinderclient/+/788834
>
> Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182
>
> Murano- murano-dashboard-sanity-check
> - https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15
>
> Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge.
>
> [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html
>
> -gmann
>

From johnsomor at gmail.com  Fri Apr 30 17:19:33 2021
From: johnsomor at gmail.com (Michael Johnson)
Date: Fri, 30 Apr 2021 10:19:33 -0700
Subject: [all][qa][cinder][octavia][murano] Devstack dropping support for Ubuntu Bionic 18.04
In-Reply-To:
References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com>
Message-ID:

Oops, just realized I was looking in the wrong place. Devstack does
have stable branches.
Ignore my post/comment.
Sigh.
Michael

On Fri, Apr 30, 2021 at 10:16 AM Michael Johnson wrote:
>
> Does this mean the community is tagging all stable branches before
> Victoria as End-of-Life?
>
> According to the PTI, bionic was the tested platform for Train[1] and
> Ussuri[2] which are still under "Maintained"[3].
> > We are planning to merge the devstack patch
> > by 2nd week of May.
> >
> > - https://review.opendev.org/c/openstack/devstack/+/788754
> >
> > I have not listed all the jobs but a few of them which were failing with 'rtslib-fb-targetctl error' are below:
> >
> > Cinder- cinder-plugin-ceph-tempest-mn-aa
> > - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166
> >
> > python-cinderclient - python-cinderclient-functional-py36
> > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834
> >
> > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182
> >
> > Murano- murano-dashboard-sanity-check
> > - https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15
> >
> > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge.
> >
> > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html
> >
> > -gmann
> >

From zbitter at redhat.com  Fri Apr 30 17:23:52 2021
From: zbitter at redhat.com (Zane Bitter)
Date: Fri, 30 Apr 2021 13:23:52 -0400
Subject: Openstack Stack issues
In-Reply-To:
References:
Message-ID: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com>

On 30/04/21 1:06 am, Premkumar Subramaniyan wrote:
> Hi,
>
>    I am using the Openstack *USSURI *version in *Centos7*. Due to some
> issues my disk size is full, I freed up the space. After that some service
> went down. After that I have issues in creating the stack and list
> stack.

It looks like heat-api at least is still down.

From gmann at ghanshyammann.com  Fri Apr 30 18:07:21 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 30 Apr 2021 13:07:21 -0500
Subject: [all][qa][cinder][octavia][murano] Devstack dropping support for Ubuntu Bionic 18.04
In-Reply-To:
References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com>
Message-ID: <17923f6b691.123939368832382.7998414044634751912@ghanshyammann.com>

 ---- On Fri, 30 Apr 2021 12:19:33 -0500 Michael Johnson wrote ----
 > Oops, just realized I was looking in the wrong place. Devstack does
 > have stable branches.
 > Ignore my post/comment.

yeah, devstack is branched and devstack old stable branches keep supporting
their tested/supported distro version. Bionic support will be dropped only
from devstack master(Xena) onwards.
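
For the jobs listed in the original mail (or any other job that still pins a
Bionic nodeset in its .zuul.yaml), the move is usually just switching the
nodeset on the job definition. A rough sketch is below - the job and parent
names are only placeholders here, so please check your own job definitions
for the real ones:

  # hypothetical example of moving a devstack-based job to Focal
  - job:
      name: my-project-tempest-example       # placeholder job name
      parent: devstack-tempest
      nodeset: openstack-single-node-focal   # was: openstack-single-node-bionic

Multinode jobs have matching nodesets (for example openstack-two-node-focal),
and jobs that do not pin any nodeset will simply follow whatever their parent
job uses.
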
-gmann

 > Sigh.
 > Michael
 >
 > On Fri, Apr 30, 2021 at 10:16 AM Michael Johnson wrote:
 > >
 > > Does this mean the community is tagging all stable branches before
 > > Victoria as End-of-Life?
 > >
 > > According to the PTI, bionic was the tested platform for Train[1] and
 > > Ussuri[2] which are still under "Maintained"[3].
 > >
 > > Since devstack is branchless[4], removing support for bionic would
 > > limit our ability to continue to support/test those stable branches.
 > >
 > > Don't get me wrong, I am not a supporter of maintaining older branches
 > > for too long, but Train still seems relevant for a lot of people.
 > >
 > > Michael
 > >
 > > [1] https://governance.openstack.org/tc/reference/runtimes/train.html
 > > [2] https://governance.openstack.org/tc/reference/runtimes/ussuri.html
 > > [3] https://releases.openstack.org/
 > > [4] https://github.com/openstack/devstack/tags
 > >
 > > On Thu, Apr 29, 2021 at 3:35 PM Ghanshyam Mann wrote:
 > > >
 > > > Hello Everyone,
 > > >
 > > > As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but
 > > > it seems there are a few jobs still running on Bionic. As devstack team is planning to drop the Bionic support
 > > > you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch
 > > > by 2nd week of May.
 > > >
 > > > - https://review.opendev.org/c/openstack/devstack/+/788754
 > > >
 > > > I have not listed all the jobs but a few of them which were failing with 'rtslib-fb-targetctl error' are below:
 > > >
 > > > Cinder- cinder-plugin-ceph-tempest-mn-aa
 > > > - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166
 > > >
 > > > python-cinderclient - python-cinderclient-functional-py36
 > > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834
 > > >
 > > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182
 > > >
 > > > Murano- murano-dashboard-sanity-check
 > > > - https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15
 > > >
 > > > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge.
 > > >
 > > > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html
 > > >
 > > > -gmann
 > > >
 > >

From sshnaidm at redhat.com  Fri Apr 30 19:43:28 2021
From: sshnaidm at redhat.com (Sagi Shnaidman)
Date: Fri, 30 Apr 2021 22:43:28 +0300
Subject: Re: [TripleO] Proposing Cédric Jeanneret (Tengu) for tripleo-core
In-Reply-To:
References:
Message-ID:

+1

On Thu, Apr 29, 2021, 18:59 James Slagle wrote:

> (resending with TripleO tag)
>
> On Thu, Apr 29, 2021 at 11:53 AM James Slagle wrote:
>
>> I'm proposing we formally promote Cédric to full tripleo-core duties. He
>> is already in the gerrit group with the understanding that his +2 is for
>> validations. His experience and contributions have grown a lot since then,
>> and I'd like to see that +2 expanded to all of TripleO.
>>
>> If there are no objections, we'll consider the change official at the end
>> of next week.
>>
>> --
>> -- James Slagle
>> --
>
> --
> -- James Slagle
> --
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rlandy at redhat.com  Fri Apr 30 20:13:40 2021
From: rlandy at redhat.com (Ronelle Landy)
Date: Fri, 30 Apr 2021 16:13:40 -0400
Subject: Re: [TripleO] Proposing Cédric Jeanneret (Tengu) for tripleo-core
In-Reply-To:
References:
Message-ID:

+1

On Fri, Apr 30, 2021 at 3:46 PM Sagi Shnaidman wrote:

> +1
>
> On Thu, Apr 29, 2021, 18:59 James Slagle wrote:
>
>> (resending with TripleO tag)
>>
>> On Thu, Apr 29, 2021 at 11:53 AM James Slagle wrote:
>>
>>> I'm proposing we formally promote Cédric to full tripleo-core duties. He
>>> is already in the gerrit group with the understanding that his +2 is for
>>> validations. His experience and contributions have grown a lot since then,
>>> and I'd like to see that +2 expanded to all of TripleO.
>>>
>>> If there are no objections, we'll consider the change official at the
>>> end of next week.
>>>
>>> --
>>> -- James Slagle
>>> --
>>
>>
>> --
>> -- James Slagle
>> --
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amy at demarco.com  Fri Apr 30 22:14:59 2021
From: amy at demarco.com (Amy Marrich)
Date: Fri, 30 Apr 2021 17:14:59 -0500
Subject: [Diversity] Diversity & Inclusion WG Meeting reminder
Message-ID:

The Diversity & Inclusion WG invites members of all OSF projects to our
meeting Monday May 3rd, at 17:00 UTC in the #openinfra-diversity channel.
The agenda can be found at https://etherpad.openstack.org/p/diversity-wg-agenda.
Please feel free to add any topics you wish to discuss at the meeting.

Thanks,

Amy (spotz)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: